ontolog-forum

Re: [ontolog-forum] Disagreements among reviewers

To: "[ontolog-forum] " <ontolog-forum@xxxxxxxxxxxxxxxx>
From: kenb <kenb@xxxxxxxxxxx>
Date: Tue, 21 Apr 2015 22:57:10 -0400 (EDT)
Message-id: <alpine.DEB.2.02.1504212227200.11064@xxxxxxxxxx>
See comments below.    (01)

-- Ken    (02)

On Mon, 20 Apr 2015, John Bottoms wrote:    (03)

> Ken,
>
> There are a few observations that can be made about this issue (papers
> on novel research). Yes, I think the statistics can be affected by the
> parameters of the process.
>
> There are some observations that have been made along the way by
> researchers. Derrida informs the nature of science by showing how
> deconstruction is done. In my engineering courses we discussed the allied
> concept of superposition but did not delve much into how to reconstruct
> systems that had been decomposed.
>
> Thomas Kuhn's "The Structure of Scientific Revolutions" discusses how
> independent scientists arrive at a consensus on a hypothesis.
> Unfortunately, this is a discussion of group dynamics, and he does not
> reveal how those discussions take place or what their substance is. That
> is understandable, and this is an observation, not a criticism.
>
> Joel Mokyr's "The Gifts of Athena" discusses how the development of
> knowledge techniques facilitated the 1st and 2nd Industrial Revolutions.
> His view is that "observers" gather facts, in the form of "useful
> information" that can be codified and made available in an open market
> for "fabricants" to assemble using "techniques" into products. This is
> one of the most formal descriptions of the process to date. And because
> Joel is an economic historian we can contrast his work with Kuhn's work
> in science. Joel's view relies on open feedback of the system to provide
> correction and improvement in the industrial system.
>
> We also know from the Delphi method that "... during this process
> [experts answer questionnaires in two or more rounds] the range of the
> answers will decrease and the group will converge towards the 'correct'
> answer." [Wikipedia]
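
That shrinking of the range can be illustrated with a toy simulation. The revision rule (each expert moves halfway toward the group's median between rounds) and all the numbers below are made up for illustration; real Delphi panels revise based on written feedback, not a fixed formula.

```python
import statistics

# Toy Delphi simulation: in each round every expert revises their estimate
# partway toward the group's median from the previous round, so the spread
# of answers shrinks round over round.
estimates = [40.0, 55.0, 70.0, 90.0, 120.0]   # hypothetical round-1 answers
spreads = []
for _ in range(3):                             # three questionnaire rounds
    spreads.append(max(estimates) - min(estimates))
    med = statistics.median(estimates)
    estimates = [e + 0.5 * (med - e) for e in estimates]

# The range of answers decreases every round, as the Delphi literature
# describes.
assert spreads == sorted(spreads, reverse=True)
```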
>
> What both of these (scientific and industrial) systems have in common is
> a set of agents working toward a goal of reducing errors. This is
> discussed in systems analysis with the observation that the wisdom of
> crowds depends upon the variation in the models held by each individual
> expert. So, experts are required, and the metric is the variation within
> the crowd of experts. This is known as the Diversity Prediction Theorem:
>   Crowd’s error = Average Individual Error – Group Diversity
>
> This means that more accurate experts imply more accurate predictions,
> and more diversity among the experts implies more accurate predictions.
> Larger groups (of reviewers) typically have more diversity.
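
The theorem above (due to Scott E. Page) holds as an exact identity when "error" is read as squared error. The following sketch checks it numerically; the true value and the expert estimates are made up for illustration.

```python
# Diversity Prediction Theorem:
#   crowd squared error = average individual squared error - group diversity
truth = 100.0                                   # quantity being estimated
estimates = [90.0, 105.0, 112.0, 97.0, 101.0]   # hypothetical expert guesses
n = len(estimates)

crowd = sum(estimates) / n                      # crowd's collective estimate
crowd_error = (crowd - truth) ** 2
avg_individual_error = sum((e - truth) ** 2 for e in estimates) / n
diversity = sum((e - crowd) ** 2 for e in estimates) / n

# The identity holds exactly: lowering individual error or raising
# diversity each lowers the crowd's error.
assert abs(crowd_error - (avg_individual_error - diversity)) < 1e-9
```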
>
> Therefore, given a set of papers submitted to a conference, what can we
> say reduces the quality of the set of papers selected?
>
> This sounds like a notable start to the discussion. What then are the
> limiting factors for a given conference?
>
> 1. This implies that we must have some metrics that recognize excellence
> among a set of comparable papers and among the expert reviewers.    (04)

There are well-known ways of measuring impact, usually based on citation 
counts.  Journals and conferences are evaluated using such measures.    (05)
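
For concreteness, the most widely used such measure is the two-year journal impact factor: citations received in a given year to items published in the preceding two years, divided by the number of citable items published in those years. The counts below are hypothetical.

```python
# Two-year impact factor, sketched with hypothetical numbers:
# citations in 2015 to items published in 2013-2014, divided by the
# number of citable items published in 2013-2014.
citations_2015_to_recent = 230      # hypothetical citation count
citable_items_2013_2014 = 120       # hypothetical article count

impact_factor = citations_2015_to_recent / citable_items_2013_2014
print(round(impact_factor, 2))      # 1.92
```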

> 2. Any conference that limits the selection of papers will ultimately
> have a lower quality of submissions on average. This includes
> conferences with narrowly defined topics, those at small venues, those
> associated with difficult schedules or destinations or other constraints
> such as costs.    (06)

This is not obvious.  The highest ranked conference is OSDI, which has a 
relatively narrow scope.  One of the lowest ranked journals is the Journal 
of Research and Practice in Information Technology (ranked 1134 out of 
1221, impact 0.04), which has a scope that includes Computer Science; 
Information Systems; Computer Systems and Information Engineering; Software 
Engineering.    (07)

Ref: http://www.cs.iit.edu/~xli/CS-Conference-Journals-Impact.htm    (08)

> 3. Anything that amounts to a single gatekeeper, or a few gatekeepers
> (reviewers), for papers, especially when those reviewers have limited
> ability to recognize "excellent" papers. Again, there are typically no
> metrics for the quality of reviewers' skills.    (09)

Low quality reviews are not necessarily the result of low reviewer skill. 
The quality of a review is a function not only of the reviewer's level of 
expertise but also of the amount of time the reviewer devotes to the 
review.    (010)

> Finally, we must make some observations about creativity and
> subjectivity. The scientist or citizen scientist who makes a new
> contribution is often a lone wolf. He/she is a curious researcher
> willing to work beyond the proximal zone, often with little financial
> incentive. This is because, as Kuhn points out, there is a need to
> gather consensus among a number of comparable experts, and there will be
> leakage from that group as it grows. Reviewers typically have time and
> resource constraints that prevent them from investigating novel research.    (011)

Exactly. Reviewers are uncompensated, so there is little incentive for them 
to devote time to this.    (012)

> There is also, particularly within the commercial arena, the temptation
> to learn from papers and then reject the submission. Finally, sometimes it
> is hard to get attention for a paper unless you are a researcher at a
> recognized company or top university. (And I prefer "lone wolf" over
> "crackpot".)
>
> -John Bottoms
>  Concord, MA USA
>
> On 4/19/2015 11:51 PM, kenb wrote:
>> Indeed, I know about such papers.  However, I wonder whether they would
>> affect the statistics very much.  Such papers (from both the great ones
>> and the crackpots) may be too rare to have a significant effect.  Has this
>> ever been studied?
>>
>> -- Ken
>>
>> On Sun, 19 Apr 2015, John F Sowa wrote:
>>
>>> On 4/19/2015 4:26 PM, kenb wrote:
>>>> Given that reviewers are not compensated and that there is no
>>>> assessment of the quality of the reviews, I am not at all surprised
>>>> by the lack of agreement.  Given the lack of agreement and assessment
>>>> of quality, it is also not surprising that "best paper" awards are
>>>> largely meaningless.
>>> The issues are more complex.  People with novel or unorthodox ideas
>>> in any field -- science, engineering, art, etc -- are often hard to
>>> distinguish from crackpots and con artists.
>>>
>>> There are countless stories of publishers that rejected great books,
>>> movie producers that rejected great stories, and business executives
>>> that rejected great inventions (xerography, for example).
>>>
>>> In an interdisciplinary field, it's hard for any reviewer to be
>>> able to evaluate novelties in every branch.  Sometimes an author
>>> who has a great idea on the boundary between fields A and B will
>>> be rejected or given a mediocre evaluation by reviewers from both.
>>>
>>> The methods for evaluating the quality of journals and conferences
>>> put pressure on the organizers to attract lots of submissions so
>>> that they can have a high rejection rate.  As a result, the bar
>>> for acceptance means a high average score from all reviewers.
>>>
>>> Some organizations recognize those issues.  One solution is
>>> to *accept* any paper that receives both very strong acceptance
>>> scores from one or more reviewers and very strong rejection scores
>>> from others.  That is usually a sign of a controversial topic.
>>>
>>> John
>>>
>
>

_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J    (01)
