At 10:10 AM 3/21/2008, Holger Lewen wrote:
>Dear Barry, comments inline
>
>On 21.03.2008 at 14:25, Barry Smith wrote:
>
> > Mark Musen, below, makes a number of valuable
> > points, which are made all the more interesting
> > in virtue of the fact that the NCBO's BioPortal,
> > an ontology repository for which Mark is
> > responsible,
> > http://www.bioontology.org/bioportal.html, is
> > carrying out an experimental test of the benefits
> > of a democratic, ranking-based approach to ontology assessment.
> >
>I agree with you that up to now no experimental study has proven that
>user-based evaluation does indeed work for ontologies. We will also
>carry out experiments in the context of the NeOn project, which
>together with the NCBO's experiments should provide some insights.
>
>
> > Specifically, the BioPortal will test a thesis to
> > the effect that democratic ranking based on user
> > comments can 1. provide a valuable service which
> > will scale as the population of ontologies grows
> > and 2. allow true openness (no gatekeeping at
> > all) of a repository (thus perhaps even allowing
> > admission to the BioPortal of
> > http://www.schemaweb.info/schema/SchemaDetails.aspx?id=163,
> > which is, as I understand it, a bio-ontology-like
> > artifact pertaining to organisms with more than two legs).
> >
> > However, his main argument against the
> > alternative (expert peer review-based) approach
> > currently being tested by the OBO Foundry has
> > been addressed already in earlier postings to
> > this list: the committee of peer reviewers used
> > by the Foundry will in every case involve expert
> > users from the specific user communities.
> >
>As was argued before, ontologies can be used in different contexts and
>for different applications / use cases. What might be perfectly
>valuable in one scenario might not work in another. Therefore, having
>a team test an ontology for one specific setting or according to a
>specific set of rules/guidelines might not be enough. Unless you can
>get the team to cover all the scenarios and use cases that an open
>platform could address via application specialists commenting on
>usefulness in practice, I think expert-based reviews can only be ONE
>of many aspects to consider when reusing an ontology. (01)
But a valuable one, forsooth! (02)
> > Mark thinks that ontologies are much more like
> > refrigerators than they are like journal
> > articles. I think that most of them are in fact
> > still much more like collections of refrigerator
> > magnets. The question is: how can we motivate
> > ontology creators (and potential ontology evaluators) to do a better
> > job?
> >
>By having many users provide feedback and insights, not only some
>experts. I could imagine that in practice you will more often simply
>reject an ontology that is not up to the expected standard than
>provide detailed feedback as to why it was rejected and what has to
>be improved. As you said, reviewers are busy people. So different
>reviews focusing on different errors or aspects might provide valuable
>feedback, similar to the whole process of open source software
>development and bug-reporting. Of course you could also implement
>other incentive mechanisms, be it fame, money or whatever ontology
>developers might desire.
>
> > This question is also not addressed by Holger
> > Lewen, in another interesting and useful post
> > that is also appended below. Indeed, Holger
> > expresses a touching optimism to the effect that
> > large bodies of intelligent user comments will
> > form around the ontologies submitted to a
> > potential OOR; that software will allow potential
> > new users to navigate through these comments to
> > help them find the answers to just the questions
> > they need; and that intelligent evaluators will
> > keep on submitting new comments as ever new
> > collections of refrigerator magnet products
> > (sorry: ontologies) come onto the market.
> >
>I guess you need optimism to try writing your PhD about that topic. I
>am glad I could touch you with it! As mentioned above, of course it
>has to be shown to work in this area, but there have been promising
>examples of people collaborating without monetary incentives. I would
>claim that large ontology development projects are not entirely
>different from large open source software development projects. There,
>too, people collaborate of their own free will, and bad contributions
>are quickly removed by other users. One point to note is that it is
>not necessary for user comments to come from intelligent users. By
>means of the underlying Web of Trust, bad reviews will be filtered
>according to personal preferences. As long as we do not expect the
>majority of collaborators to purposely game the system, it should be
>stable enough to handle content of varying quality. So even if your
>gatekeeping team were the only trustworthy body in the system, it
>would still work at least as well as your gatekeeping approach alone.
>But potentially it can work better by addressing more scenarios. (03)
Ah, such touching optimism! (04)
> > Why should they invest this time and effort?
> > Skilled users of ontologies are, I can imagine,
> > busy people. They also form a rather small
> > community, with limited resources to spend e.g.
> > on training teams of document inspectors as proposed (also below) by
> > Matthew.
> >
>Why do people invest time here to discuss this matter? Because some
>people dedicate their time to deal with such problems. Also, if the
>impact becomes high enough, you can give jobs to your best reviewers,
>thereby making it worth their while. (05)
Try taking this to a university Dean: (06)
"Give this man a job; he posted 4,000 comments to
the NeOn ontology ranking system just last week
alone, and his reviews came top in 79% of cases!" (07)
Try convincing the peer review committee of a
funding agency to accept this as part of the
argument for why a project should be funded. (08)
>And the incentives in such a
>system should not differ too much from the incentive to participate as
>a reviewer in the closed gatekeeping approach. Of course, there will
>be more competition from outside reviewers and one could see which
>reviews the masses prefer and if the "gatekeepers'" reviews indeed
>rank top.
>
>
> > The OBO aims to test one potential answer to this
> > motivation question, at least for ontologies
> > developed to aid science. This answer has the
> > advantage of resting on a methodology -- the
> > methodology of peer review -- that has enjoyed
> > some three hundred years of success that is
> > roughly co-terminous with the advance of modern science.
> >
>Well, I cannot follow how peer review as a methodology can lead to
>superior reviews. In the end it is not the process of reviewing that
>makes this work (more or less), but the choice of the reviewers. There
>is no reason why having a similar methodology for open review (as also
>proposed for the scientific domain by some) should not work, given at
>least the same reviewers. Also, we have no comparison as to how science
>might have evolved given a more open review process for scientific
>publications. So I do not know whether we can credit the success of
>scientific advance to the specific methodology of peer review, rather
>than to the presence of reviewing in some form.
> (09)
By the same token we could prove that the process
of using standard arithmetic does not contribute
to making physics (engineering, etc.) work,
because there might have been another,
non-standard arithmetic, perhaps developed on the
basis of a democratic vote on how addition works,
which would have done much better. (010)
> > Crudely put: experts are motivated to review
> > ontologies in their relevant domains of expertise
> > because they get career-related credit for serving as reviewers.
>As they could in an open review system, provided it were as accepted
>as the closed system. Being the top reviewer in such an open system
>might even confer more credibility, since not only your peers but
>also a large community of users agree with your assessment. (011)
People like this, who have some knowledge of
biology, would indeed almost certainly be invited
to join the OBO Foundry review process.
> >
> >
> > Ontology developers are motivated to create better
> > ontologies because they get career-related credit
> > for having their ontologies included (published)
> > in what, if the peer-review process is successful,
> > will count as analogous to a peer-reviewed
> > scientific journal. (We are working on the many
> > tough problems which must be tackled to make this
> > possible -- including all the problems mentioned
> > by Mark and Holger, below.)
>I would like to see that come true, especially since the science
>ontologies, particularly the big ones, change so often that I would
>not even know which version to "publish" in the repository. If I
>chose to publish every new delta, would you restart the peer review
>process each time? Or would you just give carte blanche? Any change
>could potentially make an ontology inconsistent and unusable, so not
>reviewing again would not be the solution. I know that the same
>problem arises with the open review system as well, and I am currently
>thinking about different scenarios to tackle it. But because of your
>limited resources (reviewers), it might slow down your publishing
>process considerably.
>
> (012)
This is one of the problems where we are, I
believe, close to a good understanding of the
issues involved. The Gene Ontology has been
practicing good versioning policies for several
years and it is updated on a nightly basis. We
have no intention of reviewing it every morning. Our
process will, I believe, provide a manageable way to
tackle this problem. The open
process favored by you and Mark would, I believe,
face more formidable difficulties in managing two
sets of moving targets (ontologies, ranking clouds). (013)
> > The publishing
> > process we have in mind will have the incidental
> > advantage that it will allow the multiple
> > developers typically involved in serious ontology
> > endeavors to get appropriate credit, which they
> > can use to justify spending the time and effort involved.
> >
>Again, I do not see why this should not be possible in an open
>reviewing system. If your ontology gets good acceptance there, it
>should count at least as much. (014)
See above, re: getting a job. Do we wish ontology
to become one day a professional endeavor,
involving such things as levels of expertise? Or
do we wish ontology to remain a cottage industry? (015)
> > Both reviewers and developers will be further
> > motivated to participate in this process because
> > they can thereby directly influence the standard
> > set of ontology resources which will be available
> > to them, thereby also motivating the creation of:
> > related ontology-based software, useful bodies of
> > data annotated in terms of these resources, etc.
> >
>Again, in my opinion also possible in an open reviewing system.
>
>
> > Note that I am not recommending this as an
> > approach to be adopted by the OOR. It rests on
> > too many features peculiar to the domain of
> > science. However, if Patrick Hayes is right -
> > that people like him can just as well publish
> > their ontologies on the web - then this suggests
> > the need for a real raison d'être for the OOR,
> > and I am suggesting non-trivial and evolving
> > gatekeeping constraints in the cause of
> > incrementally raising the quality of ontologies as one such raison
> > d'être.
> >
>
>I agree that your approach will work fine in a very limited domain with
>a limited number of submissions and a good choice of reviewers.
>However, as you mentioned, this would be more similar to a "journal"
>of ontologies rather than an open collection. (016)
Good journals allow anyone to submit.
Also, the system of journal publishing which we
take as our starting point provides a rather
elegant way to divide up the potentially
unlimited domain that is available for ontology
development into limited disciplines and subdisciplines. (017)
> I totally agree that
>there can be value in having this journal-like collection, especially
>with people knowing its reviewing process and constraints. Still, it
>might just as well be implemented in an open fashion which
>additionally allows other users to provide feedback. (018)
We provide for constant feedback of all types --
ontologies work well, in scientific research at
least, to the degree that they have large numbers
of users who trust them -- hence consensus is
important. We receive huge amounts of feedback of
all types. We could, if we wished, use this
feedback to rank the ontologies (4,000 emails
alone last week on the Bio-Zen Ontology,
http://www.schemaweb.info/schema/SchemaDetails.aspx?id=292,
so it must be good). (019)
>You could easily implement your journal-like approach in an open
>system. Just name five reviewers of your choice and assign them a
>special role, e.g. expert in that domain. If these reviewers reached
>some sort of consensus on the quality of the ontology, you would mark
>it as approved by XYZ and give it the status of a high-profile
>submission. And you could even see whether end users agree with the
>assessment of your experts. You could think of this as a way to
>evaluate the validity of expert-based review. (020)
We have been thinking carefully about such
approaches for some time, and what we end up with
will certainly involve aspects of what you
suggest. Some (not me) might worry, however, that
what you suggest still bears certain traces of elitism. (021)
>And so far, I am not aware of any study proving that a peer-reviewed
>ontology repository provides benefits for the end users actually
>employing the ontologies. Are you planning such a study on end-user
>satisfaction? I think it would be a really valuable experiment to let
>users compare the proposed open review solution including the
>gatekeepers' reviews with the closed solution. (022)
The GO was built using a subset of the principles
now formalized by the OBO Foundry (of which the
GO forms the central part). The GO ranks 5th on
Google; the 2nd most highly ranked ontology (GeoNames) is ranked 16th.
We have documented some of the qualitative
benefits in http://www.nature.com/nbt/journal/v25/n11/pdf/nbt1346.pdf
Some of us are engaged in research on
quantitative metrics to measure these benefits,
e.g. at http://www.org.buffalo.edu/RTU/papers/CeustersFois2006.pdf (023)
BS (024)
>
> >
> > At 01:08 AM 3/21/2008, Mark Musen wrote:
> >> On Mar 20, 2008, at 8:56 PM, John F. Sowa wrote:
> >>> There are two independent issues here: reviewing and publishing.
> >>> Everybody would agree that reviewing is important, but ideally,
> >>> the readers/users should have the option of making their own
> >>> choices based on the reviews. When publication was expensive,
> >>> the publishers became gatekeepers because it was economically
> >>> impractical to publish everything.
> >>
> >> The analogy between peer review of journal articles and peer
> >> review of ontologies has been applied too glibly, I believe.
> >>
> >> The best reviewers of a journal article are scientists who can
> >> evaluate the methods described in the paper, judge whether the data
> >> presented are plausibly consistent with the methods, and assess
> >> whether the authors' interpretations of the data are reasonable.
> >> This process is all done rather well by scientists who are experts in the
> >> field and who can understand the work that is described in the paper.
> >> Although the system does break down, sometimes in notorious ways, it
> >> generally works rather well.
> >>
> >> Ontologies are not journal articles. Although there are many
> >> surface-level distinctions that can be assessed purely by
> >> inspection (OBO Foundry criteria regarding representation language,
> >> namespaces, textual definitions, and so on), the key question one
> >> wants answered before using an ontology concerns whether the
> >> ontology makes the right distinctions about the domain being
> >> modeled. This question cannot be
> >> answered by inspection of the ontology; it can be answered only by
> >> application of the ontology to some set of real-world problems and
> >> discovering where things break down. The people best suited for
> >> making the kinds of assessment that are needed are not necessarily
> >> the best experts in the field, but the mid-level practitioners who
> >> actually do the work. Any effective system of peer review has got to
> >> capture the opinions of ontology users, and not just those of
> >> renowned subject-matter experts or of curators.
> >>
> >> I think ontologies are much more like refrigerators than they are
> >> like journal articles. I view ontologies as artifacts. Not
> >> surprisingly, I am much more interested in the opinions of people
> >> who actually use refrigerators than in those of experts in
> >> thermodynamics, product manufacturing, or mechanical engineering.
> >> The latter are people who can inspect a particular refrigerator
> >> very carefully for surface-level flaws, but who may have no
> >> first-hand knowledge of what happens when you actually plug it in.
> >>
> >> Mark
> >
> > From: Holger Lewen <hle@xxxxxxxxxxxxxxxxxxxxx>
> > Date: Fri, 21 Mar 2008 13:09:12 +0100
> > Subject: Re: [ontology-summit] [Quality] What means
> >
> > Dear Colleagues,
> >
> > after having followed the quality discussion for quite some time now,
> > I am glad to see that the majority of people seem to agree that peer
> > review of ontologies can provide value and submission to an "open"
> > system should not be too limited.
> >
> > Assuming one would decide to have an Open Rating System as the basis
> > for peer review, as was already proposed in the literature, most of
> > the points raised in the discussion could be accommodated.
> >
> > Since everyone can write reviews about the ontologies, some of the
> > reviewers can (and should) be what Barry would consider gatekeepers
> > in the restricted scenario, namely experts who offer their opinion
> > on certain aspects of an ontology in the system. The way the Open
> > Rating System works, users can then decide which reviewers to trust
> > and get their ontologies ranked accordingly.
> >
> > Not only does this approach scale (everybody can review), it is also
> > very personalizable. It is up to the user to decide whether she values
> > the opinion of a "mere ontology user" more than the opinion of an
> > "ontology expert". As was already pointed out, the ontology user can
> > provide feedback about actually working with the ontology, while the
> > expert might just look at the ontology from a theoretical point of
> > view and determine the usefulness based on that without even
> > considering runtime implications.
> >
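The trust-based personalization described above can be illustrated
with a minimal sketch in Python. It assumes a deliberately simple
model in which each user assigns trust weights to reviewers and an
ontology's personalized score is the trust-weighted mean of its
ratings; the actual Open Rating System algorithm, with trust
propagation along the Web of Trust, is more elaborate, so all names
and the weighting scheme below are illustrative only.

    # Minimal sketch: personalized, trust-weighted review aggregation.
    # The weighting model is an assumption, not the actual Open Rating
    # System algorithm.
    def personalized_score(reviews, trust):
        """reviews: list of (reviewer, rating) pairs, ratings on 1-5.
        trust: one user's dict of reviewer -> trust weight in [0, 1].
        Reviewers the user does not trust (weight 0 or absent) drop out."""
        weighted = [(trust.get(reviewer, 0.0), rating)
                    for reviewer, rating in reviews]
        total = sum(w for w, _ in weighted)
        if total == 0:
            return None  # no trusted reviews; caller falls back to a default
        return sum(w * r for w, r in weighted) / total

    # The same reviews rank differently for two different users:
    reviews = [("expert_a", 5), ("user_b", 2), ("user_c", 1)]
    print(personalized_score(reviews, {"expert_a": 1.0}))               # 5.0
    print(personalized_score(reviews, {"user_b": 0.8, "user_c": 0.6}))  # ~1.57
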
> > One critique often raised against this kind of solution is: who
> > will provide the input, who will review the ontologies, and who is
> > even able to review ontologies? While I certainly agree that
> > reviewing ontologies is harder than reviewing consumer products,
> > there seems to be a group of people knowledgeable enough for Barry
> > to consider them part of his gatekeeping committee. If the only
> > contribution of the rating system were to make their process of
> > assessing submitted ontologies public, i.e. each expert writing a
> > review based on his context as philosopher, computer scientist or
> > domain scientist, I claim there would be a benefit.
> >
> > As several of you have already mentioned, one problem with
> > restricted reviewing systems is that they are very vulnerable to
> > personal preferences, prejudices and reviewers' egos. Controversial
> > ideas are also sometimes not allowed because n people decide they
> > are not worth publishing. I would greatly appreciate a peer review
> > system that at least makes the reviews accessible along with all
> > submitted papers. Then I could make my own decision as to whether a
> > review might have been biased or otherwise subjective, and whether
> > I want to read a controversial paper.
> >
> > I do not want to bore you with all the details, so in short my claims
> > are:
> > - Open Ratings provide more transparency
> > - Open Ratings allow user personalization of ranking order based on
> > trust in reviewers
> > - The reviews can and should come also from the people who are now
> > thought of as potential gatekeepers
> > - This allows for a much wider exploration of the usefulness of an
> > ontology in different scenarios, because people can provide reviews
> > based on each specific setting
> > - The gatekeeping approach cannot scale beyond a certain number of
> > new ontologies per reviewing cycle
> >
> > Regards,
> >
> > Holger Lewen
> > Associate Researcher
> > Institut AIFB, Universität Karlsruhe
> > phone: +49-(0)721-608-6817
> > email: lewen@xxxxxxxxxxxxxxxxxxxxx
> > www: http://www.aifb.uni-karlsruhe.de/WBS
> >
> >
> >
> >
> > On 21.03.2008 at 11:40, <matthew.west@xxxxxxxxx> wrote:
> >> Dear Pat, John, and Barry,
> >>
> >> I think the problem that many have with academic review is that it
> >> is open to abuse and personal prejudice.
> >>
> >> An approach that is aimed at being more structured is Document
> >> Inspection. This at least tries to be objective, and is designed
> >> to make being subjective harder.
> >>
> >> The approach is to measure a document against its purpose and target
> >> audience.
> >> It uses a team of trained inspectors (training is simple and
> >> straightforward):
> >> - Divide document so that (for total inspection) each part of the
> >> document is reviewed by 3 inspectors (diminishing returns after 3
> >> in terms of identifying new issues). The author is one of the
> >> inspectors.
> >> - Identify issues (statements that are untrue, or unclear and/or
> >> ambiguous to the target audience) and classify each by severity:
> >> - Super-major - show stopper
> >> - Major - subverts the purpose of the document
> >> - Minor - incorrect but no major impact
> >> - Editorial - grammar and spelling, badly laid out diagrams
> >> - Review the issues, and determine whether the document is fit for
> >> purpose (no super-major issues, low count of majors).
> >>
> >> This gives a rationale for rejection, and provides the basis for
> >> improvement
> >> so that inclusion becomes possible. The issue list is publicly
> >> available so that
> >> people can see where the deliverable is, and whether the issues
> >> raised are a
> >> concern for them.
> >>
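The fitness rule just described reduces to a simple check. A minimal
sketch in Python (the severity names follow Matthew's list; the
numeric cutoff for a "low count of majors" is an assumed parameter,
since he does not give one):

    # Assumed cutoff; the text above says only "low count of majors".
    MAJOR_THRESHOLD = 3

    def fit_for_purpose(issue_severities):
        """issue_severities: one severity string per issue raised by the
        inspectors: 'super-major', 'major', 'minor', or 'editorial'."""
        supers = issue_severities.count("super-major")
        majors = issue_severities.count("major")
        return supers == 0 and majors <= MAJOR_THRESHOLD

    print(fit_for_purpose(["minor", "editorial", "major"]))  # True
    print(fit_for_purpose(["super-major"]))                  # False
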
> >> This is of course the essence of reaching consensus in a
> >> standardization process,
> >> but if you are getting into any level of approval, you ARE doing
> >> standardization,
> >> however you choose to dress it up.
> >>
> >> Regards
> >>
> >> Matthew West
> >> Reference Data Architecture and Standards Manager
> >> Shell International Petroleum Company Limited
> >> Registered in England and Wales
> >> Registered number: 621148
> >> Registered office: Shell Centre, London SE1 7NA, United Kingdom
> >>
> >> Tel: +44 20 7934 4490 Mobile: +44 7796 336538
> >> Email: matthew.west@xxxxxxxxx
> >> http://www.shell.com
> >> http://www.matthew-west.org.uk/
> >>
> >>
> >>
> >>> -----Original Message-----
> >>> From: ontology-summit-bounces@xxxxxxxxxxxxxxxx
> >>> [mailto:ontology-summit-bounces@xxxxxxxxxxxxxxxx]On Behalf Of
> >>> Patrick
> >>> Cassidy
> >>> Sent: 21 March 2008 04:08
> >>> To: 'Ontology Summit 2008'
> >>> Subject: Re: [ontology-summit] [Quality] What means
> >>>
> >>>
> >>> John,
> >>> Among the 'reviewers' is there any reason not to have an
> >>> expert committee
> >>> that can create a binary distinction of, e.g.
> >>> "well-structured" and "not
> >>> well-structured"? The imprimatur can be an alternative to absolute
> >>> exclusion, and still serve the legitimate concerns that Barry
> >>> has about
> >>> poorly constructed ontologies.
> >>>
> >>> Pat
> >>>
> >>> Patrick Cassidy
> >>> MICRA, Inc.
> >>> 908-561-3416
> >>> cell: 908-565-4053
> >>> cassidy@xxxxxxxxx
> >>>
> >>>
> >>>> -----Original Message-----
> >>>> From: ontology-summit-bounces@xxxxxxxxxxxxxxxx
> >>> [mailto:ontology-summit-
> >>>> bounces@xxxxxxxxxxxxxxxx] On Behalf Of John F. Sowa
> >>>> Sent: Thursday, March 20, 2008 11:56 PM
> >>>> To: Ontology Summit 2008
> >>>> Subject: Re: [ontology-summit] [Quality] What means
> >>>>
> >>>> Pat, Barry, Deborah, and Ed,
> >>>>
> >>>> Barry asked an important question that gets to the heart of
> >>>> the issues we have been discussing:
> >>>>
> >>>> BS> What are scientific journals for? Why do they employ a peer
> >>>>> review process?
> >>>>
> >>>> There are two independent issues here: reviewing and publishing.
> >>>> Everybody would agree that reviewing is important, but ideally,
> >>>> the readers/users should have the option of making their own
> >>>> choices based on the reviews. When publication was expensive,
> >>>> the publishers became gatekeepers because it was economically
> >>>> impractical to publish everything.
> >>>>
> >>>> But with the WWW, new options are available. Publication is
> >>>> almost free, and we have the luxury of decoupling the reviewing
> >>>> process from the gatekeeping process. Metadata enables that
> >>>> decoupling:
> >>>>
> >>>> 1. All submissions to the OOR can be made available as soon
> >>>> as they are submitted.
> >>>>
> >>>> 2. The metadata associated with each submission can indicate
> >>>> what tests were made, what the reviewers said, and what
> >>>> results the users, if any, obtained.
> >>>>
> >>>> 3. Users can choose to see ontologies sorted by any criteria
> >>>> they want: in the order of best reviews, most thorough
> >>>> testing, greatest usage, greatest relevance to a particular
> >>>> domain, or any weighted combination.
> >>>>
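Point 3 amounts to letting each user supply his or her own weight
vector over the metadata fields. A minimal sketch in Python (the
metadata field names are assumptions; the thread does not fix an OOR
metadata schema):

    # Illustrative metadata records; the fields are assumed, not an
    # actual OOR schema.
    ontologies = [
        {"name": "OntA", "review_score": 4.2, "test_coverage": 0.9, "users": 120},
        {"name": "OntB", "review_score": 4.8, "test_coverage": 0.4, "users": 15},
    ]

    def weighted_rank(entries, weights):
        """Sort entries by a user-chosen weighted sum of metadata fields."""
        return sorted(entries,
                      key=lambda e: sum(w * e[f] for f, w in weights.items()),
                      reverse=True)

    # A user who weights testing and adoption over review scores:
    prefs = {"review_score": 1.0, "test_coverage": 10.0, "users": 0.01}
    for o in weighted_rank(ontologies, prefs):
        print(o["name"])  # OntA first, then OntB
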
> >>>> PH> This is where I part company with Barry, and indeed where I
> >>>>> believe that the very idea of controlling the contents of an OOR
> >>>>> (noting that the first O means 'open') needs to be examined very,
> >>>>> very carefully. Of course we would not argue that majority voting
> >>>>> should be used to choose scientific theories; but ontologies,
> >>>>> even those used by scientists, are not themselves scientific
> >>>>> theories.
> >>>>
> >>>> Ontologies overlap philosophy, engineering, science, and
> >>>> mathematics. The closest model we have is the metadata registry,
> >>>> but new policies can and should be explored.
> >>>>
> >>>> BS>> While refrigerator manufacturers may allow democratic ranking
> >>>>>> to influence e.g. size and color, they would use other strategies
> >>>>>> e.g. in matters of thermodynamics.
> >>>>
> >>>> PH> Perhaps so: but we are here discussing matters of ontology, and
> >>>>> in the current state of the art, this may have more in common
> >>>>> with consumer product choice than with thermodynamics.
> >>>>
> >>>> That is the point I was trying to emphasize. The application
> >>>> developers have deeper understanding of their specific needs and
> >>>> problems than any general gatekeeper or committee of gatekeepers.
> >>>>
> >>>> DM> CSI, the specification writing organization for building
> >>>>> architecture, says quality is "a mirror of the requirements."
> >>>>
> >>>> That's a good point, which implies that different sets of
> >>>> requirements might lead to different rankings of the same
> >>>> ontologies. No gatekeeper can anticipate the requirements
> >>>> of all possible users.
> >>>>
> >>>> DM> Do you think the gatekeepers can help define the OOR
> >>>>> requirements and set up the dynamic tests?
> >>>>
> >>>> I'd prefer to keep the reviewers and replace the gatekeepers with
> >>>> caretakers who have a broader role along the lines you suggested.
> >>>>
> >>>> EB> I'm thinking about bureaucrats. I think that many ontologies
> >>>>> (and more broadly, concept systems including thesauri, taxonomies,
> >>>>> etc.) have been and will be developed for use within the mission
> >>>>> areas of government agencies. There can be a vetting process to
> >>>>> "approve" a concept system/ontology for use within a community
> >>>>> of interest.
> >>>>
> >>>> That suggests a further refinement of the roles of reviewers and
> >>>> gatekeepers/caretakers. At the source, there are individuals
> >>>> and/or organizations who develop ontologies and make them
> >>>> available.
> >>>> Among the users, there may be organizations, coalitions, or
> >>>> bureaucracies that evaluate the ontologies and determine which
> >>>> of them are best suited to their groups of users.
> >>>>
> >>>> That is another reason for replacing the gatekeepers in the OOR
> >>>> with caretakers. Any gatekeeping that might be useful would be
> >>>> better done by user groups at a level close to the applications
> >>>> than by any gatekeeper that is close to the ontology providers.
> >>>>
> >>>> John
> >
_________________________________________________________________
Msg Archives: http://ontolog.cim3.net/forum/ontology-summit/
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontology-summit/
Unsubscribe: mailto:ontology-summit-leave@xxxxxxxxxxxxxxxx
Community Files: http://ontolog.cim3.net/file/work/OntologySummit2008/
Community Wiki: http://ontolog.cim3.net/cgi-bin/wiki.pl?OntologySummit2008
Community Portal: http://ontolog.cim3.net/ (026)