ontology-summit

Re: [ontology-summit] {quality-methodology} Ontology Summit Track A: Metric Application (session 3)

To: "'Ontology Summit 2013 discussion'" <ontology-summit@xxxxxxxxxxxxxxxx>
From: "Hans Polzer" <hpolzer@xxxxxxxxxxx>
Date: Fri, 15 Feb 2013 21:50:29 -0500
Message-id: <020501ce0bf0$618f4940$24addbc0$@verizon.net>

While I can’t offer specific guidance on which evaluation metrics might be appropriate or useful in any given evaluation context, I did present on the dimensionality of evaluation context at the Ontology Summit session on 24 January. That presentation provides a way to characterize different evaluation contexts (or ranges thereof) explicitly, so that one can define the conditions under which a given evaluation attribute or metric might be appropriate.


Hans


From: ontology-summit-bounces@xxxxxxxxxxxxxxxx [mailto:ontology-summit-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Amanda Vizedom
Sent: Friday, February 15, 2013 4:01 PM
To: Ontology Summit 2013 discussion
Subject: Re: [ontology-summit] {quality-methodology} Ontology Summit Track A: Metric Application (session 3)


Megan, and All,


I took the liberty of editing the subject to note the connection to the Track C session Megan mentions.


There is an enormously important issue here, critical to the connection between evaluation and quality/suitability.


Megan's question is one way of getting at it. A few other ways the issue has appeared within the Ontology Summit context: 


- When is a metric or evaluation dimension relevant to ontology quality? For some metrics, the answer might be "always." For most metrics, the answer might be "when one or more requirements, derived from the intended application type and context, apply."


- What kinds of evaluation have people used, and for what purposes? When did or didn't the evaluation outcomes correlate to successful use of ontologies? 


In my experience, we sometimes get data from particular projects, but not enough to form a well-grounded picture of patterns of relevance. I believe this is partly because comparatively few ontology-based projects currently devote substantial, explicit thought to evaluation, requirements identification, or use-case characterization. And there isn't enough communication between projects, or across the broader community, for good cross-pollination and comparison to occur. Without this, when people do evaluation, they tend simply to do whatever they know how to do and have the resources to do, rather than thinking through alternatives and what evaluation is really meaningful and relevant to their particular ontology evaluation (development, selection, etc.) problem.


For the quality cross-track of last year's summit, Mike Bennett, Simon Spero and I worked on a survey (on experiences with ontology quality assurance) that was aimed at just this knowledge gap. The complexity of the question, a late start, and other factors (including, in my case and Mike's, having little experience in the hard problems of survey design) challenged us enough that, although we got a survey version out the door by the end of the summit, we did not collect enough data for meaningful analysis.


There have been suggestions to revise/refactor the survey for this year's summit focus and try again. I can't devote enough time to do this well, especially while also serving as Communique co-editor and working to get and keep the group library up to date. However, I think such a survey (still) would be very interesting and useful, and take us a step toward addressing this knowledge gap. 


However, if there are others (Megan?) who would be interested and willing to pitch in to revive and revise this effort, last year's material is still available. Anyone?


Best,

Amanda

On Tue, Feb 12, 2013 at 1:20 PM, Megan Katsumi <katsumi@xxxxxxxxxxxxxxx> wrote:

Hi All -

I'd like to pose a question regarding the (quality) evaluation metrics that we saw in session 3, and potentially for any related presentations that we may see in the next session for this track:

Have any associated methodologies or guidelines been developed to address how these metrics should be applied in practice?  Specifically, I am interested in what occurs after the metrics' values have been obtained.  How are we to interpret the results of these approaches in order to make decisions about the ontology that we are designing / have designed?  Is the ontology developer expected to attempt to optimize these metrics?  If so, how?  Are there any use cases that demonstrate the value of this sort of application of evaluation metrics?

Regards,
Megan Katsumi



_________________________________________________________________
Msg Archives: http://ontolog.cim3.net/forum/ontology-summit/
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontology-summit/
Unsubscribe: mailto:ontology-summit-leave@xxxxxxxxxxxxxxxx
Community Files: http://ontolog.cim3.net/file/work/OntologySummit2013/
Community Wiki: http://ontolog.cim3.net/cgi-bin/wiki.pl?OntologySummit2013
Community Portal: http://ontolog.cim3.net/wiki/


--- Begin Message ---
To: "Megan Katsumi" <megan.katsumi@xxxxxxxxx>, "Mary C Balboni" <Mary_C_Balboni@xxxxxxxxxxxx>, "Hans Polzer" <hpolzer@xxxxxxxxxxx>
Cc: "Ontology Summit 2013" <ontology-summit@xxxxxxxxxxxxxxxx>
From: "Peter Yim" <peter.yim@xxxxxxxx>
Date: Tue, 29 Jan 2013 14:40:32 -0500
Message-id: <CAGdcwD2DKwc2LmBQUrg2naepDwEOMzgo+bGun-sa35THYQpYKQ@xxxxxxxxxxxxxx>
Dear All,


Writing to give you and your colleagues who contributed a huge THANK
YOU for the great talks, and for a really solid session that sent us
off to a great start on the track discussions for OntologySummit2013.

The full proceedings of the session (slides, chat-transcript, audio
recording, etc.) are online now. They are accessible from the
"Archives" section on the session page - see:
http://ontolog.cim3.net/cgi-bin/wiki.pl?ConferenceCall_2013_01_24

Please continue to engage in the Summit discourse! Join us this
Thursday (Jan-31) for the session on "Intrinsic Aspects of Ontology
Evaluation: Practice and Theory" ... RSVP!


Thanks & regards. =ppy

for and on behalf of
Track-B co-champions and Session co-chairs
Todd Schneider & Terry Longstreth

and the OntologySummit2013 Organizing Committee

http://ontolog.cim3.net/cgi-bin/wiki.pl?OntologySummit2013
--


--- End Message ---
