Dear Barry, comments inline (01)
On 21.03.2008 at 14:25, Barry Smith wrote: (02)
> Mark Musen, below, makes a number of valuable
> points, which are made all the more interesting
> in virtue of the fact that the NCBO's BioPortal,
> an ontology repository for which Mark is
> http://www.bioontology.org/bioportal.html, is
> carrying out an experimental test of the benefits
> of a democratic, ranking-based approach to ontology assessment.
I agree with you that up to now no experimental study has proven that
user-based evaluation does indeed work for ontologies. We will also
carry out experiments in the context of the NeOn project, which
together with the NCBO's experiments should provide some insights. (03)
> Specifically, the BioPortal will test a thesis to
> the effect that democratic ranking based on user
> comments can 1. provide a valuable service which
> will scale as the population of ontologies grows
> and 2. allow true openness (no gatekeeping at
> all) of a repository (thus perhaps even allowing
> admission to the BioPortal of
> which is, as I understand it, a bio-ontology-like
> artifact pertaining to organisms with more than two legs).
> However, his main argument against the
> alternative (expert peer review-based) approach
> currently being tested by the OBO Foundry, has
> been addressed already in earlier postings to
> this list: the committee of peer reviewers used
> by the Foundry will in every case involve expert
> users from the specific user communities.
As was argued before, ontologies can be used in different contexts and
for different applications and use cases. What is perfectly valuable in
one scenario might not work in another. Therefore, having a team test
an ontology for one specific setting, or against one specific set of
rules and guidelines, might not be enough. Unless that team can cover
all the scenarios and use cases an open platform could address through
application specialists commenting on usefulness in practice, I think
expert-based reviews can only be ONE of many aspects to consider when
reusing an ontology. (04)
> Mark thinks that ontologies are much more like
> refrigerators than they are like journal
> articles. I think that most of them are in fact
> still much more like collections of refrigerator
> magnets. The question is: how can we motivate
> ontology creators (and potential ontology evaluators) to do a better job?
By having many users provide feedback and insights, not only a few
experts. I could imagine that in practice you would rather simply
reject an ontology that is not up to the expected standard than provide
detailed feedback on why it was rejected and what has to be improved.
As you said, reviewers are busy people. So different reviews focusing
on different errors or aspects might provide valuable feedback, similar
to the process of open source software development and bug reporting.
Of course, you could also implement other incentive mechanisms, be it
fame, money or whatever ontology developers might desire. (05)
> This question is also not addressed by Holger
> Lewen, in another interesting and useful post
> that is also appended below. Indeed, Holger
> expresses a touching optimism to the effect that
> large bodies of intelligent user comments will
> form around the ontologies submitted to a
> potential OOR; that software will allow potential
> new users to navigate through these comments to
> help them find the answers to just the questions
> they need; and that intelligent evaluators will
> keep on submitting new comments as ever new
> collections of refrigerator magnet products
> (sorry: ontologies) come onto the market.
I guess you need optimism to try writing your PhD about that topic. I
am glad I could touch you with it! As mentioned above, it of course
remains to be shown that this works in our area, but there have been
promising examples of people collaborating without monetary incentives.
I would claim that large ontology development projects are not entirely
different from large open source software development projects. There,
too, people collaborate of their own free will, and bad contributions
are quickly removed by other users. One point to note is that it is not
necessary for user comments to come from intelligent users. By means of
the underlying Web of Trust, bad reviews will be filtered out according
to personal preferences. As long as we do not expect the majority of
collaborators to purposely game the system, it should be stable enough
to handle content of varying quality. So even if your gatekeeping team
were the only trustworthy body in the system, it would still work at
least as well as your gatekeeping approach alone. But potentially it
can work better, by addressing more scenarios. (06)
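To make the Web-of-Trust filtering concrete, here is a minimal sketch in Python (the reviewer names, ratings, trust values and threshold are all invented for illustration; a real open rating system would additionally propagate trust transitively between users):

```python
# Minimal sketch of personalized, trust-weighted review filtering.
# Each reader assigns trust values to reviewers; a review influences
# the reader's ranking in proportion to that trust, and reviews from
# untrusted authors are filtered out entirely.

def personalized_score(reviews, trust, min_trust=0.2):
    """Aggregate (reviewer, rating) pairs into one score for one reader.

    reviews   -- list of (reviewer, rating) pairs, ratings in [1, 5]
    trust     -- dict mapping reviewer -> this reader's trust in [0, 1]
    min_trust -- reviews from reviewers below this threshold are ignored
    """
    weighted, total = 0.0, 0.0
    for reviewer, rating in reviews:
        t = trust.get(reviewer, 0.0)
        if t < min_trust:        # "bad" reviewers filtered by low trust
            continue
        weighted += t * rating
        total += t
    return weighted / total if total else None

reviews = [("expert_a", 4.0), ("expert_b", 5.0), ("spammer", 1.0)]
trust = {"expert_a": 0.9, "expert_b": 0.6, "spammer": 0.05}
print(personalized_score(reviews, trust))  # spammer's rating is ignored
```

The point of the sketch is that the filtering is personal: a different reader, with a different trust assignment, gets a different ranking from the very same set of reviews.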
> Why should they invest this time and effort?
> Skilled users of ontologies are, I can imagine,
> busy people. They also form a rather small
> community, with limited resources to spend e.g.
> on training teams of document inspectors as proposed (also below) by
Why do people invest time here discussing this matter? Because some
people dedicate their time to such problems. Also, if the impact
becomes high enough, you can give jobs to your best reviewers, thereby
making it worth their while. And the incentives in such a system should
not differ much from the incentive to participate as a reviewer in the
closed gatekeeping approach. Of course, there would be more competition
from outside reviewers, and one could see which reviews the masses
prefer and whether the "gatekeepers'" reviews indeed rank top. (07)
> The OBO aims to test one potential answer to this
> motivation question, at least for ontologies
> developed to aid science. This answer has the
> advantage of resting on a methodology -- the
> methodology of peer review -- that has enjoyed
> some three hundred years of success that is
> roughly co-terminous with the advance of modern science.
Well, I cannot follow how peer review as a methodology can lead to
superior reviews. In the end it is not the process of reviewing that
makes this work (more or less), but the choice of the reviewers. There
is no reason why a similar methodology for open review (as has also
been proposed for the scientific domain by some) should not work, given
at least the same reviewers. Also, we have no comparison as to how
science might have evolved given a more open review process for
scientific publications. So I do not know whether we can claim the
success of scientific advance to be due to the methodology of peer
review, or to the idea of reviewing in some form. (08)
> Crudely put: experts are motivated to review
> ontologies in their relevant domains of expertise
> because they get career-related credit for serving as reviewers.
As they could in an open review system, provided it were as accepted
as the closed system. Being the top reviewer in such an open system
might even provide more credibility, since not only your peers but
also a large community of users would agree with your assessment. (09)
> Ontology developers are motived to create better
> ontologies because they get career-related credit
> for having their ontologies included (published)
> in what, if the peer-review process is successful
> will count as analogous to a peer-reviewed
> scientific journal. (We are working on the many
> tough problems which must be tackled to make this
> possible -- including all the problems mentioned
> by Mark and Holger, below.)
I would like to see that come true. But the science ontologies,
especially the big ones, change so often that I would not even know
which version to "publish" in the repository. If I chose to publish
every new delta, would you restart the peer review process each time?
Or would you grant carte blanche? Any change could potentially make an
ontology inconsistent and unusable, so not reviewing again would not be
the solution. I know that the same problem arises with the open review
system as well, and I am currently thinking about different scenarios
for tackling it. But because of your limited resources (reviewers), it
might slow down your publishing process considerably. (010)
> The publishing
> process we have in mind will have the incidental
> advantage that it will allow the multiple
> developers typically involved in serious ontology
> endeavors to get appropriate credit, which they
> can use to justify spending the time and effort involved.
Again, I do not see why this should not be possible in an open
reviewing system. If your ontology gets good acceptance there, it
should count at least as much. (011)
> Both reviewers and developers will be further
> motivated to participate in this process because
> they can thereby directly influence the standard
> set of ontology resources which will be available
> to them, thereby also motivating the creation of:
> related ontology-based software, useful bodies of
> data annotated in terms of these resources, etc.
Again, in my opinion also possible in an open reviewing system. (012)
> Note that I am not recommending this as an
> approach to be adopted by the OOR. It rests on
> too many features peculiar to the domain of
> science. However, if Patrick Hayes is right -
> that people like him can just as well publish
> their ontologies on the web - then this suggests
> the need for a real raison d'être for the OOR,
> and I am suggesting non-trivial and evolving
> gatekeeping constraints in the cause of
> incrementally raising the quality of ontologies as one such raison d'être.
I agree that your approach will work fine in a very limited domain with
a limited number of submissions and a good choice of reviewers.
However, as you mentioned, this would be more similar to a "journal"
of ontologies than to an open collection. I totally agree that there
can be value in having such a journal-like collection, especially with
people knowing its reviewing process and constraints. Still, it might
just as well be implemented in an open fashion which additionally
allows other users to provide feedback. You could easily implement your
journal-like approach in an open system: just name five reviewers of
your choice and designate a special role to them, e.g. expert in that
domain. If these reviewers reached some sort of consensus on the
quality of an ontology, you would mark it as approved by XYZ and give
it the status of a high-profile submission. And you could even see
whether end users agree with the assessment of your experts. You could
think of this as a way to evaluate the validity of expert-based
reviews. So far, I am not aware of any study proving that a
peer-reviewed ontology repository provides benefits for the end users
actually employing the ontologies. Are you planning such a study on end
user satisfaction? I think it would be a really valuable experiment to
let users compare the proposed open review solution, including the
gatekeepers' reviews, with the closed solution. (015)
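The journal-inside-an-open-system idea could be prototyped roughly like this (the reviewer names, the consensus rule and the approval label are illustrative assumptions, not a description of any existing system):

```python
# Illustrative sketch: a designated expert panel inside an open review
# system. An ontology gets the "approved by XYZ" label only if every
# panel member reviewed it and rated it at or above a threshold, while
# ordinary user reviews are still collected alongside.

EXPERTS = {"rev1", "rev2", "rev3", "rev4", "rev5"}  # hypothetical panel

def panel_consensus(reviews, experts=EXPERTS, threshold=4.0):
    """True if every designated expert reviewed and rated >= threshold."""
    expert_ratings = {r: s for r, s in reviews if r in experts}
    return experts <= set(expert_ratings) and all(
        s >= threshold for s in expert_ratings.values()
    )

reviews = [("rev1", 4.5), ("rev2", 4.0), ("rev3", 5.0),
           ("rev4", 4.2), ("rev5", 4.8), ("user42", 2.0)]
print(panel_consensus(reviews))  # user42's low rating does not block the label
```

Note that user42's review is not discarded; it simply does not feed into the expert label, so one can later compare what the panel approved against what end users actually preferred.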
> At 01:08 AM 3/21/2008, Mark Musen wrote:
>> On Mar 20, 2008, at 8:56 PM, John F. Sowa wrote:
>>> There are two independent issues here: reviewing and publishing.
>>> Everybody would agree that reviewing is important, but ideally,
>>> the readers/users should have the option of making their own
>>> choices based on the reviews. When publication was expensive,
>>> the publishers became gatekeepers because it was economically
>>> impractical to publish everything.
>> The analogy between peer review of journal articles and peer review
>> of ontologies has been applied too glibly, I believe.
>> The best reviewers of a journal article are scientists who can
>> evaluate the methods described in the paper, judge whether the data
>> presented are plausibly consistent with the methods, and assess
>> whether the authors' interpretations of the data are reasonable.
>> This process is all done rather well by scientists who are experts in the
>> field and who can understand the work that is described in the paper.
>> Although the system does break down, sometimes in notorious ways, it
>> generally works rather well.
>> Ontologies are not journal articles. Although there are many surface-
>> level distinctions that can be assessed purely by inspection (OBO-
>> Foundry criteria regarding representation language, namespaces,
>> textual definitions, and so on), the key question one wants answered
>> before using an ontology concerns whether the ontology makes the right
>> distinctions about the domain being modeled. This question cannot be
>> answered by inspection of the ontology; it can be answered only by
>> application of the ontology to some set of real-world problems and
>> discovering where things break down. The people best suited for
>> making the kinds of assessment that are needed are not necessarily the
>> best experts in the field, but the mid-level practitioners who
>> actually do the work. Any effective system of peer review has got to
>> capture the opinions of ontology users, and not just those of
>> subject-matter experts or of curators.
>> I think ontologies are much more like refrigerators than they are like
>> journal articles. I view ontologies as artifacts. Not surprisingly,
>> I am much more interested in the opinions of people who actually use
>> refrigerators than I am of experts in thermodynamics, product
>> manufacturing, or mechanical engineering. The latter are people who
>> can inspect a particular refrigerator very carefully for surface-level
>> flaws, but who may have no first-hand knowledge of what happens when
>> you actually plug it in.
> From: Holger Lewen <hle@xxxxxxxxxxxxxxxxxxxxx>
> To: Ontology Summit 2008 <ontology-summit@xxxxxxxxxxxxxxxx>
> Date: Fri, 21 Mar 2008 13:09:12 +0100
> Subject: Re: [ontology-summit] [Quality] What means
> Dear Colleagues,
> after having followed the quality discussion for quite some time now,
> I am glad to see that the majority of people seem to agree that peer
> review of ontologies can provide value and submission to an "open"
> system should not be too limited.
> Assuming one would decide to have an Open Rating System as the basis
> for peer review, as was already proposed in the literature, most of
> the points raised in the discussion could be accommodated.
> Since everyone can write reviews about the ontologies, some of the
> reviewers can (and should) be what Barry would consider gatekeepers in
> the restricted scenario, namely experts who offer their opinion on
> certain aspects of an ontology in the system. The way the Open Rating
> System works, users can then decide which reviewer to trust and get
> their ontologies ranked accordingly.
> Not only does this approach scale (everybody can review), it is also
> very personalizable. It is up to the user to decide whether she values
> the opinion of a "mere ontology user" more than the opinion of an
> "ontology expert". As was already pointed out, the ontology user can
> provide feedback about actually working with the ontology, while the
> expert might just look at the ontology from a theoretical point of
> view and determine the usefulness based on that without even
> considering runtime implications.
> One critique often raised when proposing this kind of solution is: Who
> will provide the input, who will review the ontologies and who is even
> able to review ontologies. While I certainly agree that reviewing
> ontologies is harder than reviewing consumer products, there seem to
> be a group of people that are knowledgeable enough for Barry to
> consider them part of his gatekeeping committee. If the only
> contribution of the rating system were to have their process of
> assessing submitted ontologies public, i.e. each expert writing a
> review based on his context as philosopher, computer scientist or
> scientist, I claim there is a benefit.
> As several of you have already mentioned, one problem with restricted
> reviewing systems is that they are very vulnerable to personal
> preferences, prejudices and reviewers' egos. Also, controversial ideas
> are sometimes not allowed because n people decide they are not worth
> publishing. I would greatly appreciate a peer review system that at
> least makes the reviews accessible along with all submitted papers.
> Then I could make my own decision as to whether a review might have
> been biased or otherwise subjective, and whether I want to read a
> controversial paper.
> I do not want to bore you with all the details, so in short my claims are:
> -Open Ratings provide more transparency
> -Open Ratings allow user personalization of ranking order based on
> trust in reviewers
> -The reviews can and should also come from the people who are now
> thought of as potential gatekeepers
> -This allows for a much wider exploration of the usefulness of an
> ontology in different scenarios, because people can provide reviews
> based on each specific setting
> -The gatekeeping approach cannot scale beyond a certain number of new
> ontologies per reviewing cycle
> Holger Lewen
> Associate Researcher
> Institut AIFB, Universität Karlsruhe
> phone: +49-(0)721-608-6817
> email: lewen@xxxxxxxxxxxxxxxxxxxxx
> www: http://www.aifb.uni-karlsruhe.de/WBS
> On 21.03.2008 at 11:40, <matthew.west@xxxxxxxxx> wrote:
>> Dear Pat, John, and Barry,
>> I think the problem that many have with academic review is that it
>> is open to abuse and personal prejudice.
>> An approach that is aimed at being more structured is Document
>> Inspection. This at least tries to be objective, and is designed to make being
>> The approach is to measure a document against its purpose and target
>> audience. It uses a team of trained inspectors (training is simple and
>> - Divide document so that (for total inspection) each part of the
>> document is
>> reviewed by 3 inspectors (diminishing returns after 3 in terms of
>> new issues). Author is one of the inspectors.
>> - Identify issues:
>> - Statements that are untrue, or unclear and/or ambiguous to
>> target audience
>> - Super-major - show stopper
>> - Major - subverts the purpose of the document
>> - Minor - incorrect but no major impact
>> - Editorial - grammar and spelling, badly laid out diagrams
>> Review the issues and determine whether the document is fit for
>> purpose (no Super-major issues, low count of Majors).
>> This gives a rationale for rejection, and provides the basis for
>> so that inclusion becomes possible. The issue list is publicly
>> available so that
>> people can see where the deliverable is, and whether the issues
>> raised are a
>> concern for them.
>> This is of course the essence of reaching consensus in a
>> standardization process,
>> but if you are getting into any level of approval, you ARE doing
>> however you choose to dress it up.
>> Matthew West
>> Reference Data Architecture and Standards Manager
>> Shell International Petroleum Company Limited
>> Registered in England and Wales
>> Registered number: 621148
>> Registered office: Shell Centre, London SE1 7NA, United Kingdom
>> Tel: +44 20 7934 4490 Mobile: +44 7796 336538
>> Email: matthew.west@xxxxxxxxx
>>> -----Original Message-----
>>> From: ontology-summit-bounces@xxxxxxxxxxxxxxxx
>>> [mailto:ontology-summit-bounces@xxxxxxxxxxxxxxxx]On Behalf Of
>>> Sent: 21 March 2008 04:08
>>> To: 'Ontology Summit 2008'
>>> Subject: Re: [ontology-summit] [Quality] What means
>>> Among the 'reviewers' is there any reason not to have an
>>> expert committee
>>> that can create a binary distinction of, e.g.
>>> "well-structured" and "not
>>> well-structured"? The imprimatur can be an alternative to absolute
>>> exclusion, and still serve the legitimate concerns that Barry
>>> has about
>>> poorly constructed ontologies.
>>> Patrick Cassidy
>>> MICRA, Inc.
>>> cell: 908-565-4053
>>>> -----Original Message-----
>>>> From: ontology-summit-bounces@xxxxxxxxxxxxxxxx
>>>> [mailto:ontology-summit-bounces@xxxxxxxxxxxxxxxx] On Behalf Of John F. Sowa
>>>> Sent: Thursday, March 20, 2008 11:56 PM
>>>> To: Ontology Summit 2008
>>>> Subject: Re: [ontology-summit] [Quality] What means
>>>> Pat, Barry, Deborah, and Ed,
>>>> Barry asked an important question that gets to the heart of
>>>> the issues we have been discussing:
>>>> BS> What are scientific journals for? Why do they employ a peer
>>>>> review process?
>>>> There are two independent issues here: reviewing and publishing.
>>>> Everybody would agree that reviewing is important, but ideally,
>>>> the readers/users should have the option of making their own
>>>> choices based on the reviews. When publication was expensive,
>>>> the publishers became gatekeepers because it was economically
>>>> impractical to publish everything.
>>>> But with the WWW, new options are available. Publication is
>>>> almost free, and we have the luxury of decoupling the reviewing
>>>> process from the gatekeeping process. Metadata enables that:
>>>> 1. All submissions to the OOR can be made available as soon
>>>> as they are submitted.
>>>> 2. The metadata associated with each submission can indicate
>>>> what tests were made, what the reviewers said, and what
>>>> results the users, if any, obtained.
>>>> 3. Users can choose to see ontologies sorted by any criteria
>>>> they want: in the order of best reviews, most thorough
>>>> testing, greatest usage, greatest relevance to a particular
>>>> domain, or any weighted combination.
>>>> PH> This is where I part company with Barry, and indeed where I
>>>>> believe that the very idea of controlling the contents of an OOR
>>>>> (noting that the first O means 'open') needs to be examined very,
>>>>> very carefully. Of course we would not argue that majority voting
>>>>> should be used to choose scientific theories; but ontologies,
>>>>> even those used by scientists, are not themselves scientific theories.
>>>> Ontologies overlap philosophy, engineering, science, and
>>>> The closest model we have is the metadata registry, but new options
>>>> can and should be explored.
>>>> BS>> While refrigerator manufacturers may allow democratic ranking
>>>>>> to influence e.g. size and color, they would use other strategies
>>>>>> e.g. in matters of thermodynamics.
>>>> PH> Perhaps so: but we are here discussing matters of ontology, and
>>>>> in the current state of the art, this may have more in common
>>>>> with consumer product choice than with thermodynamics.
>>>> That is the point I was trying to emphasize. The application
>>>> developers have deeper understanding of their specific needs and
>>>> problems than any general gatekeeper or committee of gatekeepers.
>>>> DM> CSI, the specification writing organization for building
>>>>> architecture, says quality is "a mirror of the requirements."
>>>> That's a good point, which implies that different sets of
>>>> requirements might lead to different rankings of the same
>>>> ontologies. No gatekeeper can anticipate the requirements
>>>> of all possible users.
>>>> DM> Do you think the gatekeepers can help define the OOR
>>>>> and set up the dynamic tests?
>>>> I'd prefer to keep the reviewers and replace the gatekeepers with
>>>> caretakers who have a broader role along the lines you suggested.
>>>> EB> I'm thinking about bureaucrats. I think that many ontologies
>>>>> (and more broadly, concept systems including thesauri,
>>>>> etc.) have been and will be developed for use within the mission
>>>>> areas of government agencies. There can be a vetting process to
>>>>> "approve" a concept system/ontology for use within a community
>>>>> of interest.
>>>> That suggests a further refinement of the roles of reviewers and
>>>> gatekeepers/caretakers. At the source, there are individuals and/or
>>>> organizations who develop ontologies and make them available.
>>>> Among the users, there may be organizations, coalitions, or
>>>> bureaucracies that evaluate the ontologies and determine which
>>>> of them are best suited to their groups of users.
>>>> That is another reason for replacing the gatekeepers in the OOR
>>>> with caretakers. Any gatekeeping that might be useful would be
>>>> better done by user groups at a level close to the applications
>>>> than by any gatekeeper that is close to the ontology providers.
> Msg Archives: http://ontolog.cim3.net/forum/ontology-summit/
> Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontology-summit/
> Unsubscribe: mailto:ontology-summit-leave@xxxxxxxxxxxxxxxx
> Community Files: http://ontolog.cim3.net/file/work/OntologySummit2008/
> Community Wiki: http://ontolog.cim3.net/cgi-bin/wiki.pl?OntologySummit2008
> Community Portal: http://ontolog.cim3.net/ (017)