Dear Colleagues,
After having followed the quality discussion for quite some time now,
I am glad to see that the majority of people seem to agree that peer
review of ontologies can provide value, and that submission to an "open"
system should not be too limited.
Assuming one decided to have an Open Rating System as the basis
for peer review, as has already been proposed in the literature, most of
the points raised in the discussion could be accommodated.
Since everyone can write reviews of the ontologies, some of the
reviewers can (and should) be what Barry would consider gatekeepers in
the restricted scenario, namely experts who offer their opinion on
certain aspects of an ontology in the system. The way the Open Rating
System works, users can then decide which reviewers to trust and get
the ontologies ranked accordingly.
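A minimal sketch of how such trust-weighted ranking could work. Everything here is an illustrative assumption, not part of any existing system: the reviewer names, the 1-5 rating scale, and the rule that an ontology's score is the trust-weighted average of its reviews.

```python
def rank_ontologies(reviews, trust):
    """Rank ontologies by trust-weighted average rating.

    reviews: {ontology: {reviewer: rating}}
    trust:   {reviewer: weight} -- chosen by each individual user
    """
    scores = {}
    for ontology, ratings in reviews.items():
        weighted = [(trust.get(reviewer, 0.0), rating)
                    for reviewer, rating in ratings.items()]
        total_trust = sum(w for w, _ in weighted)
        # Ontologies reviewed only by reviewers the user does not trust score 0.
        scores[ontology] = (sum(w * r for w, r in weighted) / total_trust
                            if total_trust else 0.0)
    return sorted(scores, key=scores.get, reverse=True)

reviews = {
    "onto-A": {"expert": 2, "practitioner": 5},
    "onto-B": {"expert": 4, "practitioner": 3},
}
# A user who trusts hands-on feedback twice as much as theory:
print(rank_ontologies(reviews, {"expert": 1.0, "practitioner": 2.0}))
# -> ['onto-A', 'onto-B']
```

The point of the sketch is that the ranking is a function of the individual user's trust assignments, not a single global verdict: a different trust dictionary yields a different order over the same reviews.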
Not only does this approach scale (everybody can review), it is also
highly personalizable. It is up to the user to decide whether she values
the opinion of a "mere ontology user" more than the opinion of an
"ontology expert". As was already pointed out, the ontology user can
provide feedback about actually working with the ontology, while the
expert might just look at the ontology from a theoretical point of
view and judge its usefulness on that basis, without even
considering runtime implications.
One critique often raised against this kind of solution is: who
will provide the input, who will review the ontologies, and who is even
able to review ontologies? While I certainly agree that reviewing
ontologies is harder than reviewing consumer products, there seems to
be a group of people who are knowledgeable enough for Barry to
consider them part of his gatekeeping committee. Even if the only
contribution of the rating system were to make their process of
assessing submitted ontologies public, i.e. each expert writing a
review based on his background as philosopher, computer scientist or
scientist, I claim there would be a benefit.
As several of you have already mentioned, one problem with restricted
reviewing systems is that they are very vulnerable to personal
preferences, prejudices and reviewers' egos. Controversial ideas are
also sometimes blocked because n people decide they are not worth
publishing. I would greatly appreciate a peer review system that at
least makes the reviews accessible along with all submitted papers.
Then I could decide for myself whether a review might have been biased
or otherwise subjective, and whether I want to read a controversial
paper.
I do not want to bore you with all the details, so in short my claims are:
- Open Ratings provide more transparency.
- Open Ratings allow users to personalize the ranking order based on
their trust in reviewers.
- The reviews can and should also come from the people who are now
thought of as potential gatekeepers.
- This allows for a much wider exploration of the usefulness of an
ontology in different scenarios, because people can provide reviews
based on each specific setting.
- The gatekeeping approach cannot scale beyond a certain number of new
ontologies per reviewing cycle.
Institut AIFB, Universität Karlsruhe
www: http://www.aifb.uni-karlsruhe.de/WBS
On 21.03.2008 at 11:40, <matthew.west@xxxxxxxxx> wrote:
> Dear Pat, John, and Barry,
> I think the problem that many have with academic review is that it
> is open to abuse and personal prejudice.
> An approach that is aimed at being more structured is Document
> Inspection.
> This at least tries to be objective, and is designed to make being
> subjective difficult.
> The approach is to measure a document against its purpose and target
> audience.
> It uses a team of trained inspectors (training is simple and quick).
> - Divide the document so that (for total inspection) each part of
>   the document is reviewed by 3 inspectors (diminishing returns after
>   3 in terms of new issues). The author is one of the inspectors.
> - Identify issues: statements that are untrue, or unclear and/or
>   ambiguous to the target audience. Classify each issue:
>   - Super-major - show stopper
>   - Major - subverts the purpose of the document
>   - Minor - incorrect but no major impact
>   - Editorial - grammar and spelling, badly laid out diagrams
> - Review the issues and determine whether the document is fit for
>   purpose (no Super-major issues, and a low count of Majors).
> This gives a rationale for rejection, and provides the basis for
> correction so that inclusion becomes possible. The issue list is
> publicly available so that people can see where the deliverable is,
> and whether the issues raised are a concern for them.
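The fit-for-purpose rule described above can be sketched in a few lines. The severity names follow the process description; the threshold of three Majors for a "low count" is an assumed example value, not part of the described method.

```python
from collections import Counter

def fit_for_purpose(issues, max_majors=3):
    """Decide fitness from a list of issue severities.

    issues: list of strings drawn from
            {'super-major', 'major', 'minor', 'editorial'}
    Rule from the inspection process: no super-major issues,
    and at most max_majors major issues (threshold is assumed).
    """
    counts = Counter(issues)
    return counts["super-major"] == 0 and counts["major"] <= max_majors

print(fit_for_purpose(["major", "minor", "editorial"]))  # -> True
print(fit_for_purpose(["super-major"]))                  # -> False
```

Note that minor and editorial issues never block acceptance under this rule; they only feed the public issue list.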
> This is of course the essence of reaching consensus in a
> standardization process, but if you are getting into any level of
> approval, you ARE doing standardization, however you choose to dress
> it up.
> Matthew West
> Reference Data Architecture and Standards Manager
> Shell International Petroleum Company Limited
> Registered in England and Wales
> Registered number: 621148
> Registered office: Shell Centre, London SE1 7NA, United Kingdom
> Tel: +44 20 7934 4490 Mobile: +44 7796 336538
> Email: matthew.west@xxxxxxxxx
>> -----Original Message-----
>> From: ontology-summit-bounces@xxxxxxxxxxxxxxxx
>> [mailto:ontology-summit-bounces@xxxxxxxxxxxxxxxx]On Behalf Of Patrick
>> Sent: 21 March 2008 04:08
>> To: 'Ontology Summit 2008'
>> Subject: Re: [ontology-summit] [Quality] What means
>> Among the 'reviewers' is there any reason not to have an
>> expert committee
>> that can create a binary distinction of, e.g.
>> "well-structured" and "not
>> well-structured"? The imprimatur can be an alternative to absolute
>> exclusion, and still serve the legitimate concerns that Barry
>> has about
>> poorly constructed ontologies.
>> Patrick Cassidy
>> MICRA, Inc.
>> cell: 908-565-4053
>>> -----Original Message-----
>>> From: ontology-summit-bounces@xxxxxxxxxxxxxxxx
>>> [mailto:ontology-summit-bounces@xxxxxxxxxxxxxxxx] On Behalf Of John F. Sowa
>>> Sent: Thursday, March 20, 2008 11:56 PM
>>> To: Ontology Summit 2008
>>> Subject: Re: [ontology-summit] [Quality] What means
>>> Pat, Barry, Deborah, and Ed,
>>> Barry asked an important question that gets to the heart of
>>> the issues we have been discussing:
>>> BS> What are scientific journals for? Why do they employ a peer
>>>> review process?
>>> There are two independent issues here: reviewing and publishing.
>>> Everybody would agree that reviewing is important, but ideally,
>>> the readers/users should have the option of making their own
>>> choices based on the reviews. When publication was expensive,
>>> the publishers became gatekeepers because it was economically
>>> impractical to publish everything.
>>> But with the WWW, new options are available. Publication is
>>> almost free, and we have the luxury of decoupling the reviewing
>>> process from the gatekeeping process. Metadata enables that:
>>> 1. All submissions to the OOR can be made available as soon
>>> as they are submitted.
>>> 2. The metadata associated with each submission can indicate
>>> what tests were made, what the reviewers said, and what
>>> results the users, if any, obtained.
>>> 3. Users can choose to see ontologies sorted by any criteria
>>> they want: in the order of best reviews, most thorough
>>> testing, greatest usage, greatest relevance to a particular
>>> domain, or any weighted combination.
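Point 3 of the quoted proposal (sorting by any weighted combination of criteria) can be sketched as a simple weighted sort over the submission metadata. The criterion names and scores below are illustrative assumptions about what such metadata might contain.

```python
def sort_by_weights(metadata, weights):
    """Sort ontologies by a user-chosen weighted combination of criteria.

    metadata: {ontology: {criterion: score}}
    weights:  {criterion: weight} -- criteria not listed count as zero
    """
    def score(name):
        fields = metadata[name]
        return sum(w * fields.get(criterion, 0.0)
                   for criterion, w in weights.items())
    return sorted(metadata, key=score, reverse=True)

metadata = {
    "onto-A": {"review_score": 0.9, "test_coverage": 0.2, "usage": 0.1},
    "onto-B": {"review_score": 0.5, "test_coverage": 0.8, "usage": 0.9},
}
# One user cares only about reviews, another mostly about real usage:
print(sort_by_weights(metadata, {"review_score": 1.0}))
# -> ['onto-A', 'onto-B']
print(sort_by_weights(metadata, {"usage": 1.0, "test_coverage": 0.5}))
# -> ['onto-B', 'onto-A']
```

As with the reviewer-trust example, no gatekeeper picks a single order; each user's weight vector produces the ranking that matches her own requirements.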
>>> PH> This is where I part company with Barry, and indeed where I
>>>> believe that the very idea of controlling the contents of an OOR
>>>> (noting that the first O means 'open') needs to be examined very,
>>>> very carefully. Of course we would not argue that majority voting
>>>> should be used to choose scientific theories; but ontologies,
>>>> even those used by scientists, are not themselves scientific
>>>> theories.
>>> Ontologies overlap philosophy, engineering, science, and other fields.
>>> The closest model we have is the metadata registry, but new policies
>>> can and should be explored.
>>> BS>> While refrigerator manufacturers may allow democratic ranking
>>>>> to influence e.g. size and color, they would use other strategies
>>>>> e.g. in matters of thermodynamics.
>>> PH> Perhaps so: but we are here discussing matters of ontology, and
>>>> in the current state of the art, this may have more in common
>>>> with consumer product choice than with thermodynamics.
>>> That is the point I was trying to emphasize. The application
>>> developers have deeper understanding of their specific needs and
>>> problems than any general gatekeeper or committee of gatekeepers.
>>> DM> CSI, the specification-writing organization for building
>>>> architecture, says quality is "a mirror of the requirements."
>>> That's a good point, which implies that a different set of
>>> requirements might lead to a different ranking of the same
>>> ontologies. No gatekeeper can anticipate the requirements
>>> of all possible users.
>>> DM> Do you think the gatekeepers can help define the OOR
>>>> and set up the dynamic tests?
>>> I'd prefer to keep the reviewers and replace the gatekeepers with
>>> caretakers who have a broader role along the lines you suggested.
>>> EB> I'm thinking about bureaucrats. I think that many ontologies
>>>> (and more broadly, concept systems including thesauri,
>>>> etc.) have been and will be developed for use within the mission
>>>> areas of government agencies. There can be a vetting process to
>>>> "approve" a concept system/ontology for use within a community
>>>> of interest.
>>> That suggests a further refinement of the roles of reviewers and
>>> gatekeepers/caretakers. At the source, there are individuals and/or
>>> organizations, who develop ontologies and make them available.
>>> Among the users, there may be organizations, coalitions, or
>>> bureaucracies that evaluate the ontologies and determine which
>>> of them are best suited to their groups of users.
>>> That is another reason for replacing the gatekeepers in the OOR
>>> with caretakers. Any gatekeeping that might be useful would be
>>> better done by user groups at a level close to the applications
>>> than by any gatekeeper that is close to the ontology providers.
Msg Archives: http://ontolog.cim3.net/forum/ontology-summit/
Community Files: http://ontolog.cim3.net/file/work/OntologySummit2008/
Community Wiki: http://ontolog.cim3.net/cgi-bin/wiki.pl?OntologySummit2008
Community Portal: http://ontolog.cim3.net/