Referees with different backgrounds often disagree about papers
submitted to journals and conferences. See below for some excerpts
from a discussion about the reviewing process for NIPS (Neural
Information Processing Systems). (01)
As the NIPS web site says, the conference brings together "machine learning
and computational neuroscience": https://nips.cc/Conferences/current (02)
With that combination of reviewers and participants, disagreements
are inevitable. But other conferences with interdisciplinary
participants (AI, for example) have similar problems. (03)
The result of accepting papers with the highest average scores is
that the so-called "best papers" are boring. They're inevitably
papers that nobody objects to. (04)
John
______________________________________________________________________ (05)
http://cacm.acm.org/blogs/blog-cacm/181996-the-nips-experiment/fulltext (06)
The 26% disagreement rate presented at the NIPS conference understates
the meaning in my opinion, given the 22% acceptance rate. The immediate
implication is that between half and two-thirds of papers accepted at
NIPS would have been rejected if reviewed a second time. For analysis
details and discussion about that, see here. (07)
Let’s give P(reject in 2nd review | accept in 1st review) a name:
arbitrariness. For NIPS 2014, arbitrariness was ~60%. Given such a stark
number, the primary question is "what does it mean?" ... (08)
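A back-of-the-envelope check of that ~60% figure, using only the two rates
quoted above and assuming the two review committees behave symmetrically
(the symmetry assumption is mine, not stated in the excerpt):

    # Estimate P(reject in 2nd review | accept in 1st review) from the
    # NIPS 2014 numbers quoted above.
    accept_rate = 0.22      # overall acceptance rate
    disagree_rate = 0.26    # fraction of duplicated papers with inconsistent decisions

    # If the two committees are exchangeable,
    #   P(disagree) = 2 * P(accept in 1st) * P(reject in 2nd | accept in 1st)
    # so the conditional "arbitrariness" is:
    arbitrariness = disagree_rate / (2 * accept_rate)
    print(f"arbitrariness ~ {arbitrariness:.0%}")   # prints ~59%

That ~59% lands in the "between half and two-thirds" range claimed in the
excerpt, and is where the ~60% arbitrariness figure comes from.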