
Re: [ontolog-forum] Disagreements among reviewers

To: ontolog-forum@xxxxxxxxxxxxxxxx
From: John Bottoms <john@xxxxxxxxxxxxxxxxxxxx>
Date: Mon, 20 Apr 2015 05:44:22 -0400
Message-id: <5534CA76.2060904@xxxxxxxxxxxxxxxxxxxx>
Ken,    (01)

There are a few observations that can be made about this issue (papers 
on novel research). Yes, I think the statistics can be affected by the 
parameters of the review process.    (02)

Researchers have made some relevant observations along the way. Derrida 
informs the nature of science through his account of deconstruction, 
showing how a system of ideas can be taken apart. In my engineering 
courses we discussed the allied concept of superposition, but we did not 
delve much into how to reconstruct systems that had been decomposed.    (03)

Thomas Kuhn's "The Structure of Scientific Revolutions" discusses how 
independent scientists arrive at a consensus on a hypothesis. 
Unfortunately, his treatment stays at the level of group dynamics: he 
does not reveal how those discussions take place or what their substance 
is. That is understandable, and this is an observation, not a 
criticism.    (04)

Joel Mokyr's "The Gifts of Athena" discusses how the development of 
knowledge techniques facilitated the first and second Industrial 
Revolutions. His view is that "observers" gather facts, in the form of 
"useful information," which can be codified and made available in an 
open market for "fabricants" to assemble into products using 
"techniques." This is one of the most formal descriptions of the process 
to date. And because Joel is an economic historian, we can contrast his 
work with Kuhn's work in science. Joel's view relies on open feedback to 
provide correction and improvement in the industrial system.    (05)

We also know from the Delphi method that "... during this process 
[experts answer questionnaires in two or more rounds] the range of the 
answers will decrease and the group will converge towards the 'correct' 
answer" [Wikipedia].    (06)
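
As a minimal, hypothetical sketch of that convergence (the revision rule 
is invented purely for illustration: each expert revises halfway toward 
the group median after every round; real Delphi panels revise after 
seeing anonymized summaries of the group's reasoning, not just a number):

   # Hypothetical Delphi-style convergence in Python.
   def delphi_rounds(estimates, rounds=3):
       for _ in range(rounds):
           # Take the (upper) median of the current answers.
           median = sorted(estimates)[len(estimates) // 2]
           # Each expert moves halfway toward the median.
           estimates = [e + 0.5 * (median - e) for e in estimates]
           print("range of answers:", max(estimates) - min(estimates))
       return estimates

   delphi_rounds([10.0, 14.0, 18.0, 30.0])
   # prints 10.0, then 5.0, then 2.5 -- the range shrinks each round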

What both of these systems (scientific and industrial) have in common is 
a set of agents working toward the goal of reducing errors. This is 
discussed in systems analysis with the observation that the wisdom of 
crowds depends upon the variation in the models held by the individual 
experts. So experts are required, and the metric is the variation within 
the crowd of experts. This is known as the Diversity Prediction Theorem 
(due to Scott Page):
   Crowd's Error = Average Individual Error - Group Diversity    (07)

This means that more accurate experts yield a more accurate crowd 
prediction, and so does more diversity among the experts. Larger groups 
(of reviewers) typically have more diversity.    (08)
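
To make the arithmetic concrete, here is a toy worked example (the 
numbers are made up; the identity is Page's, with error measured as 
squared distance from the truth):

   # Four experts estimate a quantity whose true value is 15.0.
   predictions = [10.0, 14.0, 18.0, 22.0]   # made-up expert estimates
   truth = 15.0
   n = len(predictions)

   crowd = sum(predictions) / n                                 # 16.0
   crowd_error = (crowd - truth) ** 2                           # 1.0
   avg_error = sum((p - truth) ** 2 for p in predictions) / n   # 21.0
   diversity = sum((p - crowd) ** 2 for p in predictions) / n   # 20.0

   # Crowd's error = Average Individual Error - Group Diversity:
   # 1.0 = 21.0 - 20.0
   assert abs(crowd_error - (avg_error - diversity)) < 1e-9

Note that the crowd beats every individual expert here because their 
errors partly cancel; had all four estimated 21.0 (zero diversity), the 
crowd's error would equal the average individual error.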

Therefore, given a set of papers submitted to a conference, what can we 
say reduces the quality of the set of papers selected?    (09)

That seems a reasonable starting point for the discussion. What, then, 
are the limiting factors for a given conference?    (010)

1. It implies that we need metrics that recognize excellence among a set 
of comparable papers, and metrics for the expertise of the 
reviewers.    (011)

2. Any conference that limits its pool of submissions will ultimately 
have a lower average quality of submissions. This includes conferences 
with narrowly defined topics, those at small venues, and those 
associated with difficult schedules, destinations, or other constraints 
such as cost.    (012)

3. Anything that leaves papers with a single gatekeeper, or only a few 
reviewers, reduces diversity, as do reviewers with a weaker ability to 
recognize "excellent" papers. Again, there are typically no metrics for 
the quality of a reviewer's skills.    (013)

Finally, we must make some observations about creativity and 
subjectivity. The scientist or citizen scientist who makes a new 
contribution is often a lone wolf: a curious researcher willing to work 
beyond the proximal zone, often with little financial incentive. This is 
because, as Kuhn points out, there is a need to gather consensus among a 
number of comparable experts, and there will be leakage from that group 
as it grows. Reviewers typically have time and resource constraints that 
prevent them from investigating novel research.    (014)

There is also, particularly within the commercial arena, the temptation 
to learn from a paper and then deny the submission. And it is sometimes 
hard to get attention for a paper unless you are a researcher at a 
recognized company or a top university. (And I prefer "Lone Wolf" over 
"crackpot".)    (015)

-John Bottoms
  Concord, MA USA    (016)

On 4/19/2015 11:51 PM, kenb wrote:
> Indeed, I know about such papers.  However, I wonder whether they would
> affect the statistics very much.  Such papers (from both the great ones
> and the crackpots) may be too rare to have a significant effect.  Has this
> ever been studied?
>
> -- Ken
>
> On Sun, 19 Apr 2015, John F Sowa wrote:
>
>> On 4/19/2015 4:26 PM, kenb wrote:
>>> Given that reviewers are not compensated and that there is no
>>> assessment of the quality of the reviews, I am not at all surprised
>>> by the lack of agreement.  Given the lack of agreement and assessment
>>> of quality, it is also not surprising that "best paper" awards are
>>> largely meaningless.
>> The issues are more complex.  People with novel or unorthodox ideas
>> in any field -- science, engineering, art, etc -- are often hard to
>> distinguish from crackpots and con artists.
>>
>> There are countless stories of publishers that rejected great books,
>> movie producers that rejected great stories, and business executives
>> that rejected great inventions (xerography, for example).
>>
>> In an interdisciplinary field, it's hard for any reviewer to be
>> able to evaluate novelties in every branch.  Sometimes an author
>> who has a great idea on the boundary between fields A and B will
>> be rejected or given a mediocre evaluation by reviewers from both.
>>
>> The methods for evaluating the quality of journals and conferences
>> put pressure on the organizers to attract lots of submissions so
>> that they can have a high rejection rate.  As a result, the bar
>> for acceptance means a high average score from all reviewers.
>>
>> Some organizations recognize those issues.  One solution is
>> to *accept* any paper that receives both very strong acceptance
>> scores from one or more reviewers and very strong rejection scores
>> from others.  That is usually a sign of a controversial topic.
>>
>> John
>>    (017)


