
Re: [ontology-summit] {quality-methodology} Ontology Summit Track A: Metric Application (session 3)

To: Ontology Summit 2013 discussion <ontology-summit@xxxxxxxxxxxxxxxx>
From: "Obrst, Leo J." <lobrst@xxxxxxxxx>
Date: Fri, 22 Feb 2013 15:41:45 +0000
Message-id: <FDFBC56B2482EE48850DB651ADF7FEB01E8E54D6@xxxxxxxxxxxxxxxxxx>

Great, thanks much, Samir!

 

Leo

 

From: ontology-summit-bounces@xxxxxxxxxxxxxxxx [mailto:ontology-summit-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Samir Tartir
Sent: Friday, February 22, 2013 11:02 AM
To: Ontology Summit 2013 discussion
Subject: Re: [ontology-summit] {quality-methodology} Ontology Summit Track A: Metric Application (session 3)

 

Thanks for the interest. I haven't had a chance to go through the recording of that session yet, as unfortunately I couldn't attend. OntoQA is not currently built for distribution. I built a website some time ago to enable online access, but after some difficulties I had to take it down. I will see if I can prepare a downloadable version and release it to the community.

 

Regards,
______________________

Samir Tartir, PhD
ISIICT 2014 Organizing Chair

Faculty of Information Technology
Philadelphia University
PO Box 1
Amman, 19392 Jordan
Office: +962-6-479-9000 Ext. 2515
VOIP: +1-706-363-8679
http://www.philadelphia.edu.jo/academics/startir/


From: ontology-summit-bounces@xxxxxxxxxxxxxxxx [ontology-summit-bounces@xxxxxxxxxxxxxxxx] on behalf of Obrst, Leo J. [lobrst@xxxxxxxxx]
Sent: Friday, February 22, 2013 6:01 PM
To: Ontology Summit 2013 discussion
Subject: Re: [ontology-summit] {quality-methodology} Ontology Summit Track A: Metric Application (session 3)

Thank you, Samir. By the way, the question came up yesterday as to whether OntoQA is available somewhere, possibly for download.

 

Thanks much,

Leo

 

From: ontology-summit-bounces@xxxxxxxxxxxxxxxx [mailto:ontology-summit-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Samir Tartir
Sent: Friday, February 22, 2013 10:47 AM
To: Ontology Summit 2013 discussion
Subject: Re: [ontology-summit] {quality-methodology} Ontology Summit Track A: Metric Application (session 3)

 

Hi Astrid and all,

 

I have been following these very interesting discussions, though with little time to contribute.

 

Regarding evaluation: I think ontology quality needs to be assessed in two parts:

  1. General quality: properties that any ontology must have (and that must be agreed on by the community), e.g. a certain minimum depth of inheritance, a certain minimum number of "meaningful" properties (relations), etc. (see the sketch after this list).
  2. Domain-specific quality: as the name indicates, ontologies must have certain features to be useful in a given domain; these can be set by each sub-community.
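
To make the first part concrete, here is a minimal sketch of how two such general checks might be automated in Python with rdflib. The thresholds, and the choice of counting declared OWL properties as "meaningful" relations, are invented placeholders, not community-agreed values:

from rdflib import Graph, RDF, RDFS, OWL

MIN_INHERITANCE_DEPTH = 3   # hypothetical community-agreed minimum
MIN_RELATION_COUNT = 10     # hypothetical community-agreed minimum

def subtree_depth(g, cls, seen=frozenset()):
    # Longest rdfs:subClassOf chain below cls (cycle-safe).
    subs = [s for s in g.subjects(RDFS.subClassOf, cls) if s not in seen]
    if not subs:
        return 0
    return 1 + max(subtree_depth(g, s, seen | {cls}) for s in subs)

def general_quality(path):
    g = Graph()
    g.parse(path)  # rdflib infers the serialization from the file name
    # Root classes: declared owl:Class with no declared superclass.
    roots = [c for c in g.subjects(RDF.type, OWL.Class)
             if (c, RDFS.subClassOf, None) not in g]
    depth = max((subtree_depth(g, r) for r in roots), default=0)
    relations = set(g.subjects(RDF.type, OWL.ObjectProperty))
    relations |= set(g.subjects(RDF.type, OWL.DatatypeProperty))
    return {
        "inheritance_depth": depth,
        "inheritance_depth_ok": depth >= MIN_INHERITANCE_DEPTH,
        "relation_count": len(relations),
        "relation_count_ok": len(relations) >= MIN_RELATION_COUNT,
    }

Calling general_quality("myonto.owl") would then return a small report dict for the first, general part; the domain-specific part would need checks supplied by each sub-community.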

What do you think?

 

Regards,

______________________

Dr. Samir Tartir
ISIICT 2014 Organizing Chair

Faculty of Information Technology
Philadelphia University
PO Box 1
Amman, 19392 Jordan
Office: +962-6-479-9000 Ext. 2515
VOIP: +1-706-363-8679
http://www.philadelphia.edu.jo/academics/startir/


From: ontology-summit-bounces@xxxxxxxxxxxxxxxx [ontology-summit-bounces@xxxxxxxxxxxxxxxx] on behalf of Astrid Loum [astrid.duque@xxxxx]
Sent: Tuesday, February 19, 2013 12:30 PM
To: ontology-summit@xxxxxxxxxxxxxxxx
Subject: Re: [ontology-summit] {quality-methodology} Ontology Summit Track A: Metric Application (session 3)

Hello All,

When is a metric or evaluation dimension relevant to ontology quality? In response to this question, we presented OQuaRE in the Ontology Summit session on 31 January. OQuaRE includes a set of quality characteristics and subcharacteristics, which are measured through a set of metrics; different metrics are relevant to different quality characteristics. You can find more information about OQuaRE at http://miuras.inf.um.es/evaluation/oquare.
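
As a rough illustration of that general pattern (not OQuaRE's actual metrics, scales, or formulas; those are documented at the link above), raw metric values can be banded into scores and aggregated per characteristic. All names, weights, and thresholds below are invented:

def band(value, thresholds):
    # Map a raw metric value to a 1-5 score via ascending thresholds.
    score = 1
    for t in thresholds:
        if value >= t:
            score += 1
    return score

# Which metrics feed which characteristic (invented mapping).
CHARACTERISTICS = {
    "structural": ["depth_of_inheritance", "relations_per_class"],
    "maintainability": ["relations_per_class", "annotation_ratio"],
}

THRESHOLDS = {  # invented banding thresholds
    "depth_of_inheritance": [2, 4, 6, 8],
    "relations_per_class": [0.5, 1, 2, 4],
    "annotation_ratio": [0.2, 0.4, 0.6, 0.8],
}

def characteristic_scores(raw_metrics):
    # Average the banded scores of each characteristic's metrics.
    return {
        name: sum(band(raw_metrics[m], THRESHOLDS[m]) for m in metrics) / len(metrics)
        for name, metrics in CHARACTERISTICS.items()
    }

print(characteristic_scores(
    {"depth_of_inheritance": 5, "relations_per_class": 1.5, "annotation_ratio": 0.3}
))
# -> {'structural': 3.0, 'maintainability': 2.5}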

However, we think it is important that the ontology community reach agreement on ontology quality criteria. We invite you to contribute to our wiki about quality criteria, available at http://miuras.inf.um.es/oquarewiki.

Regards, 
Astrid

On 16/02/2013 3:50, Hans Polzer wrote:

While I can’t offer specific guidance on which evaluation metrics might be appropriate or useful in any given evaluation context, I did offer a presentation on the dimensionality of evaluation context in the ontology summit session on 24 January. It provides a way to characterize different evaluation contexts (or ranges thereof) explicitly, so that one can define the conditions under which a given evaluation attribute or metric might be appropriate.

 

Hans

 

From: ontology-summit-bounces@xxxxxxxxxxxxxxxx [mailto:ontology-summit-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Amanda Vizedom
Sent: Friday, February 15, 2013 4:01 PM
To: Ontology Summit 2013 discussion
Subject: Re: [ontology-summit] {quality-methodology} Ontology Summit Track A: Metric Application (session 3)

 

Megan, and All,

 

I took the liberty of editing the subject to note the connection to the Track C session Megan mentions.

 

There is an enormously important issue here, critical to the connection between evaluation and quality/suitability.

 

Megan's question is one way of getting at it. A few other ways the issue has appeared within the Ontology Summit context: 

 

- When is a metric or evaluation dimension relevant to ontology quality? For some metrics, the answer might be "always." For most metrics, the answer might be "when any or all of a set of requirements applies, derived from the intended application type and context." (A toy sketch of such requirement-driven selection appears after these bullets.)

 

- What kinds of evaluation have people used, and for what purposes? When did or didn't the evaluation outcomes correlate to successful use of ontologies? 
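
A toy sketch of that requirement-driven selection, with every metric and requirement name invented purely for illustration:

# Each metric is tagged with the requirements under which it matters;
# an evaluation plan is then derived from the requirements of the
# intended application. All names here are hypothetical.
METRIC_RELEVANCE = {
    "logical_consistency": {"always"},
    "query_answer_latency": {"runtime_reasoning"},
    "term_coverage": {"nlp_annotation", "search"},
    "mapping_completeness": {"data_integration"},
}

def applicable_metrics(requirements):
    # Select metrics tagged "always" or matching any stated requirement.
    reqs = set(requirements)
    return sorted(
        metric for metric, tags in METRIC_RELEVANCE.items()
        if "always" in tags or tags & reqs
    )

print(applicable_metrics({"search", "data_integration"}))
# -> ['logical_consistency', 'mapping_completeness', 'term_coverage']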

 

In my experience, we sometimes get data from particular projects, but not enough to begin to form a well-grounded picture of patterns of relevance. I believe this is partly because a comparatively small portion of ontology-based projects currently devote substantial, explicit thought to evaluation, requirements identification, or use-case characterization. Nor is there enough communication between projects, or across the broader community, for good cross-pollination and comparison to occur. Without this, when people do evaluate, they tend simply to do whatever they know how to do and have the resources to do, rather than thinking through alternatives and which evaluation is really meaningful and relevant to their particular ontology evaluation (development, selection, etc.) problem.

 

For the quality cross-track of last year's summit, Mike Bennett, Simon Spero, and I worked on a survey (on experiences with ontology quality assurance) aimed at just this knowledge gap. The complexity of the question, a late start, and other factors (including, in my case and Mike's, little experience with the hard problems of survey design) challenged us enough that, although we got a survey version out the door by the end of the summit, we did not collect enough data for meaningful analysis.

 

There have been suggestions to revise/refactor the survey for this year's summit focus and try again. I can't devote enough time to do this well, especially while also serving as Communique co-editor and working to get and keep the group library up to date. However, I think such a survey (still) would be very interesting and useful, and take us a step toward addressing this knowledge gap. 

 

That said, if there are others (Megan?) who would be interested and willing to pitch in to revive and revise this effort, last year's material is still around. Anyone?

 

Best,

Amanda

 

 

On Tue, Feb 12, 2013 at 1:20 PM, Megan Katsumi <katsumi@xxxxxxxxxxxxxxx> wrote:

Hi All -

I'd like to pose a question regarding the (quality) evaluation metrics that we saw in session 3, and potentially for any related presentations that we may see in the next session for this track:

Have any associated methodologies or guidelines been developed to address how these metrics should be applied in practice?  Specifically, I am interested in what occurs after the metrics' values have been obtained.  How are we to interpret the results of these approaches in order to make decisions about the ontology that we are designing / have designed?  Is the ontology developer expected to attempt to optimize these metrics?  If so, how?  Are there any use cases that demonstrate the value of this sort of application of evaluation metrics?

Regards,
Megan Katsumi



_________________________________________________________________
Msg Archives: http://ontolog.cim3.net/forum/ontology-summit/
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontology-summit/
Unsubscribe: mailto:ontology-summit-leave@xxxxxxxxxxxxxxxx
Community Files: http://ontolog.cim3.net/file/work/OntologySummit2013/
Community Wiki: http://ontolog.cim3.net/cgi-bin/wiki.pl?OntologySummit2013
Community Portal: http://ontolog.cim3.net/wiki/