OntologySummit2013: Virtual Panel Session-03 - Thu 2013-01-31    (3L6U)

Summit Theme: "Ontology Evaluation Across the Ontology Lifecycle"    (3L6V)

Summit Track Title: Track-A: Intrinsic Aspects of Ontology Evaluation    (3L6W)

Session Topic: Intrinsic Aspects of Ontology Evaluation: Practice and Theory    (3L7D)

Panelists / Briefings:    (3L6X)

Archives:    (3L9T)

Abstract:    (3LBL)

OntologySummit2013 Session-03: "Intrinsic Aspects of Ontology Evaluation: Practice and Theory" - intro slides    (3LBM)

This is our 8th Ontology Summit, a joint initiative by NIST, Ontolog, NCOR, NCBO, IAOA & NCO_NITRD with the support of our co-sponsors. The theme adopted for this Ontology Summit is: "Ontology Evaluation Across the Ontology Lifecycle."    (3LCJ)

Currently, there is no agreed methodology for development of ontologies, and there are no universally agreed metrics for ontology evaluation. At the same time, everybody agrees that there are a lot of badly engineered ontologies out there, thus people use -- at least implicitly -- some criteria for the evaluation of ontologies.    (3LCK)

During this OntologySummit, we seek to identify best practices for ontology development and evaluation. We will consider the entire lifecycle of an ontology -- from requirements gathering and analysis, through to design and implementation. In this endeavor, the Summit will seek collaboration with the software engineering and knowledge acquisition communities. Research in these fields has led to several mature models for the software lifecycle and the design of knowledge-based systems, and we expect that fruitful interaction among all participants will lead to a consensus for a methodology within ontological engineering. Following earlier Ontology Summit practice, the synthesized results of this season's discourse will be published as a Communiqué.    (3LCL)

At the Launch Event on 17 Jan 2013, the organizing team provided an overview of the program and of how we will be framing the discourse around the theme of this OntologySummit. Today's session is one of the events planned.    (3LBN)

In this 3rd virtual panel session of the Summit, we focus on theory and practice for intrinsic aspects of ontology evaluation. Our speakers will present a number of approaches and frameworks for evaluating the quality of ontologies, and some theoretical discussion of what constitutes intrinsic evaluation. Our main goal in this virtual session is to begin to lay out the criteria for intrinsic evaluation of ontologies, some possible metrics, and the rationale for these. We hope that all of the participants in the open discussion and chat will join us in helping to flesh out intrinsic evaluation criteria and their dimensions.    (3LBO)

More details about this OntologySummit are available at: OntologySummit2013 (homepage for this summit)    (3LBP)

Briefings:    (3L79)

Agenda:    (3LBQ)

OntologySummit2013 - Panel Session-03    (3LBR)

Proceedings:    (3LBX)

Please refer to the above    (3LBY)

IM Chat Transcript captured during the session:    (3LBZ)

 see raw transcript here.    (3LC0)
 (for better clarity, the version below is a re-organized and lightly edited chat-transcript.)
 Participants are welcome to make light edits to their own contributions as they see fit.    (3LC1)
 -- begin in-session chat-transcript --    (3LC2)
	[08:23] PeterYim: Welcome to the    (3LP4)
	 = OntologySummit2013: Virtual Panel Session-03 - Thu 2013-01-31 =    (3LP5)
	Summit Theme: Ontology Evaluation Across the Ontology Lifecycle    (3LP6)
	* Summit Track Title: Track-A: Intrinsic Aspects of Ontology Evaluation    (3LP7)
	Session Topic: Intrinsic Aspects of Ontology Evaluation: Practice and Theory    (3LP8)
	* Session Co-chairs: Dr. SteveRay (CMU) and Dr. LeoObrst (MITRE)    (3LP9)
	Panelists / Briefings:    (3LPA)
	* "A Pitfall Catalogue and OOPS!: An Approach to Ontology Validation" 
	  - Ms. MariaPovedaVillalon (Universidad Politecnica de Madrid)
	  - Dr. MariCarmenSuarezFigueroa (Universidad Politecnica de Madrid)
	  - Dr. AsuncionGomezPerez (Universidad Politecnica de Madrid)    (3LPB)
	* "Ontology Evaluation and Ranking using OntoQA"
	  - Dr. SamirTartir (Philadelphia University, Amman, Jordan)
	  - Dr. IsmailcemBudakArpinar (University of Georgia)
	  - Dr. AmitSheth (Wright State University)    (3LPC)
	* "The OQuaRE Framework for Ontology Evaluation"
	  - Dr. JesualdoTomasFernandezBreis (Universidad de Murcia)
	  - Ms. AstridDuqueRamos (Universidad de Murcia)
	  - Dr. RobertStevens (University of Manchester)
	  - Dr. NathalieAussenacGilles (Institute de Recherche en Informatique de Toulouse (IRIT), Universite Paul Sabatier)    (3LPD)
	Logistics:    (3LPE)
	* Refer to details on session page at: http://ontolog.cim3.net/cgi-bin/wiki.pl?ConferenceCall_2013_01_31    (3LPF)
	* (if you haven't already done so) please click on "settings" (top center) and morph from "anonymous" to your RealName (in WikiWord format)    (3LPG)
	* Mute control: *7 to un-mute ... *6 to mute    (3LPH)
	* Can't find Skype Dial pad?
	** for Windows Skype users: it's under the "Call" dropdown menu as "Show Dial pad"
	** for Linux Skype users: please note that the dial-pad is only available in v4.1 (or later) or in the earlier Skype versions 2.x;
	   if the dial-pad button is not shown in the call window, you need to press the "d" hotkey to enable it.    (3LPI)
	 == Proceedings: ==    (3LPJ)
	[09:06] anonymous1 morphed into MariaPovedaVillalon    (3LPK)
	[09:14] anonymous1 morphed into MeganKatsumi    (3LPL)
	[09:16] anonymous1 morphed into CarmenChui    (3LPM)
	[09:19] anonymous morphed into JimDisbrow    (3LPN)
	[09:20] anonymous morphed into JesualdoTomasFernandezBreis    (3LPO)
	[09:20] JesualdoTomasFernandezBreis: Hello all    (3LPP)
	[09:24] anonymous morphed into MohammadAqtash    (3LPQ)
	[09:25] anonymous1 morphed into DavidMakovoz    (3LPR)
	[09:26] MeganKatsumi1 morphed into MichaelGruninger    (3LPS)
	[09:28] LeoObrst: Hi, Jesualdo, Maria, Mohammad and all!    (3LPT)
	[09:28] anonymous morphed into SamirTartir    (3LPU)
	[09:28] MariaPovedaVillalon: Hi!    (3LPV)
	[09:28] LeoObrst: Hi, Samir!    (3LPW)
	[09:28] MariCarmenSuarezFigueroa: Hello all....    (3LPX)
	[09:30] MohammadAqtash: Hi LeoObrst!    (3LPY)
	[09:30] astridduque morphed into AstridDuqueRamos    (3LPZ)
	[09:31] anonymous morphed into MikeDenny    (3LQ0)
	[09:31] anonymous morphed into DougFoxvog    (3LQ1)
	[09:31] anonymous1 morphed into TorstenHahmann    (3LQ2)
	[09:32] anonymous1 morphed into ClarePaul    (3LQ3)
	[09:32] anonymous morphed into JosephTennis    (3LQ4)
	[09:32] anonymous morphed into IsmailcemBudakArpinar    (3LQ5)
	[09:34] anonymous morphed into JamesOdell    (3LQ6)
	[09:35] RamSriram: I do have a problem viewing slides on a Mac (using VNC). It seems to work on a 
	PC.    (3LQ7)
	[09:50] SteveRay: @Ram: My theory remains that this problem relates to the latest version of Java, 
	which somehow keeps the VNC from working properly.    (3LQ8)
	[09:52] ToddSchneider: Steve, Ram, I checked and the browser has the Java plug-in disabled.    (3LQ9)
	[09:36] anonymous morphed into JoaoPauloAlmeida    (3LQA)
	[09:38] PeterYim: == SteveRay opens the session ... see: [0-Chair] slides    (3LQB)
	[09:39] List of members: AmandaVizedom, AnatolyLevenchuk, AstridDuqueRamos, BobbinTeegarden, 
	BobSchloss, IsmailcemBudakArpinar, CarmenChui, ClarePaul, DavidMakovoz, DavidLeal, DougFoxvog, 
	FabianNeuhaus, FranLightsom, HensonGraves, JamesOdell, JesualdoTomasFernandezBreis, JimDisbrow, 
	JoaoPauloAlmeida, JoelBender, JosephTennis, KenBaclawski, LeoObrst, MariaPovedaVillalon, 
	MariCarmenSuarezFigueroa, MarkFox, MatthewWest, MeganKatsumi, MichaelGruninger, MikeDenny, 
	MikeDean, MohammadAqtash, PeterYim, RamSriram, SamirTartir, SteveRay, TerryLongstreth, 
	ToddSchneider, TorstenHahmann, vnc2    (3LQC)
	[09:42] DuaneNickull: Good day    (3LQD)
	[09:42] PeterYim: Hi Duane, welcome to the session    (3LQE)
	[09:42] DuaneNickull: Pleased to be here.    (3LQF)
	[09:45] PeterYim: == MariaPovedaVillalon presenting ... see: [1-Poveda] slides    (3LQG)
	[09:45] anonymous1 morphed into TrishWhetzel    (3LQH)
	[09:52] SteveRay: We are now on slide 7    (3LQI)
	[09:54] BobSchloss: As I listen to the approach Maria has started doing, it reminds me of some work 
	I did with my colleague Achille Fokoue-Nkoutche in the very early days of the XML Schema language. 
	We released a tool called IBM XML Schema Quality Checker through the IBM alphaWorks program, it was 
	very widely used because this kind of document / message / vocabulary modeling was unfamiliar to a 
	lot of people, and top quality tools for construction of these XML Schemas (such as from companies 
	such as Altova, Progress Software, IBM Rational) were still not widely available.    (3LQJ)
	[09:56] BobSchloss: Equally importantly, guidelines, best practices and patterns for XML Schema 
	development, which were later compiled by a number of people, were not yet documented... so our tool 
	warned people when they were using a construct that was strictly legal but might limit evolvability 
	of their schema or reuse by others.    (3LQK)
	[09:56] BobSchloss: [I have to leave for another meeting... Will review all slides and the recording 
	of this chat later. Thanks all]    (3LQL)
	[09:57] MariCarmenSuarezFigueroa: @BobSchloss, interesting work.    (3LQM)
	[09:56] SteveRay: @Maria: Question for later: To recognize your Pitfall #5, it would seem that you 
	would need to know the intent of a term such as isSoldIn and isBoughtIn. How does your automated 
	tool do this?    (3LQN)
	[09:57] MariCarmenSuarezFigueroa: @SteveRay: at this moment our tool OOPS! detects in an automated 
	way a subset of the pitfalls in the catalogue    (3LQO)
	[09:58] MariCarmenSuarezFigueroa: @SteveRay for those detected by OOPS! there are different 
	approaches as Maria is explaining    (3LQP)
	[09:59] MariCarmenSuarezFigueroa: (is going to explain in next slides)    (3LQQ)
	[09:59] DougFoxvog: @Steve. The inverse relationship between isSoldIn & isBoughtIn can be determined 
	to be inconsistent with having the argument types reversed. By noting that the argument types match 
	(arg1<=>arg1 & arg2<=>arg2) one can suggest that the error is in calling it an inverse relationship 
	instead of being in mis-assignment of argument types.    (3LQR)
	[10:01] SteveRay: @MariCarmenSuarezFigueroa, DougFoxvog: Ah yes, I see now - Domain and Range 
	mismatch.    (3LQS)
	[10:04] DougFoxvog: Slide 14 suggests possible symmetric or transitive properties if there are equal 
	domain & range. Such suggestions should not be made if the relations are already defined as 
	asymmetric or functional.    (3LQT)
	[10:04] MariCarmenSuarezFigueroa: yes @DougFoxvog    (3LQU)
	[10:02] MariCarmenSuarezFigueroa: OOPS! is available at http://www.oeg-upm.net/oops    (3LQV)
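
The domain/range heuristics discussed in this exchange (flagging an owl:inverseOf declaration whose domains and ranges match rather than swap, and suggesting symmetry or transitivity only when a property is not already declared asymmetric or functional) can be made concrete with a small sketch. This is an illustrative toy check over a hand-built property table -- the data structures and function names are invented for the example, and it is not the OOPS! implementation:

    # Toy illustration of two pitfall heuristics from the discussion above (not OOPS! code).
    # Each property is described by its domain, range, and declared characteristics.
    properties = {
        "isSoldIn":   {"domain": "Product", "range": "City", "characteristics": set()},
        "isBoughtIn": {"domain": "Product", "range": "City", "characteristics": set()},
    }
    inverse_of = [("isSoldIn", "isBoughtIn")]            # declared owl:inverseOf pairs
    subclass_of = {"City": {"Place"}, "Product": set()}  # toy one-level class hierarchy

    def warn_suspicious_inverse(p, q):
        """Flag owl:inverseOf pairs whose domains/ranges match instead of swapping:
        for a genuine inverse, domain(p) should equal range(q) and vice versa."""
        a, b = properties[p], properties[q]
        if a["domain"] == b["domain"] and a["range"] == b["range"]:
            return "Possible pitfall: %s owl:inverseOf %s, but domains/ranges are not swapped" % (p, q)
        return None

    def is_subsumed_by(sub, sup):
        """True if sub equals sup or sup is a (toy, one-level) superclass of sub."""
        return sub == sup or sup in subclass_of.get(sub, set())

    def suggest_transitive(p):
        """Suggest transitivity when the domain is subsumed by the range,
        unless the property is already declared asymmetric or functional."""
        d = properties[p]
        if d["characteristics"] & {"asymmetric", "functional"}:
            return None
        if is_subsumed_by(d["domain"], d["range"]):
            return "Suggestion: %s might be transitive (domain subsumed by range)" % p
        return None

    for p, q in inverse_of:
        msg = warn_suspicious_inverse(p, q)
        if msg:
            print(msg)
    for p in properties:
        msg = suggest_transitive(p)
        if msg:
            print(msg)
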
	[10:00] anonymous1 morphed into BruceBray    (3LQW)
	[09:59] SamirTartir: Hello Dr. @Arpinar, @MohammadAqtash... Nice to see you here.    (3LQX)
	[10:06] SamirTartir: Some definitions I will be using in my presentation can be found in this paper: 
	http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=4338348&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D4338348    (3LQY)
	[10:06] TerryLongstreth: Maria- have you considered running OOPS! against public domain ontologies, 
	and publishing the resulting evaluations?    (3LQZ)
	[10:07] MariCarmenSuarezFigueroa: @TerryLongstreth: thanks for the suggestion. Yes, we have already 
	carried out an experiment along these lines, and our idea is to evaluate more of the available ontologies 
	in order to see the current state of public ontologies.    (3LR0)
	[10:09] JesualdoTomasFernandezBreis: @MariCarmenSuarezFigueroa: evaluating OBO and BioPortal 
	ontologies might be interesting    (3LR1)
	[10:10] MariCarmenSuarezFigueroa: @Jesualdo, thanks! We have already evaluated a subset of OBO and 
	BioPortal ontologies, but our plan is to extend the evaluation to more    (3LR2)
	[10:10] ToddSchneider: Can the OOPS! source code be obtained?    (3LR3)
	[10:11] MariCarmenSuarezFigueroa: @ToddSchneider: at this moment the source code is not available.    (3LR4)
	[10:12] ToddSchneider: Maria, Too bad. Can your team consider working with OOR initiative?    (3LR5)
	[10:13] MariCarmenSuarezFigueroa: @ToddSchneider: we are analysing how to proceed with our code. 
	Maybe we can consider your suggestion    (3LR6)
	[10:13] MariaPovedaVillalon: we will provide web services soon Todd    (3LR7)
	[10:14] ToddSchneider: Marias, for performance reasons, having a more 'local' instance of OOPS! 
	would be optimal.    (3LR8)
	[10:15] MariCarmenSuarezFigueroa: @Todd, yes you are right. As mentioned, we are now analysing 
	different options for our code :-D    (3LR9)
	[10:15] MariaPovedaVillalon: @ToddSchneider you are right, as Mari Carmen said we need to think 
	about how to proceed with the code    (3LRA)
	[10:15] MariaPovedaVillalon: @Todd I totally agree with the idea of sharing the code    (3LRB)
	[10:16] MariaPovedaVillalon: @Todd as first step we are creating the web services so that everybody 
	can include the features in their code, but sure, we also need to share the code    (3LRC)
	[10:12] LeoObrst: @Maria: 3 questions: 1) These are all OWL/RDF ontologies; are you considering 
	other languages, e.g., Common Logic? 2) What if the ontology you are evaluating contains imports or 
	references to other ontologies, do you track down these and evaluate them? 3) What is "P9. Missing 
	basic information"?    (3LRD)
	[10:13] AmandaVizedom: @Maria: Very nice. I have seen several large projects spend extensive effort 
	in developing their own, in-house versions of something like this approach. This speaks to the 
	relevance of the approach to real work-flows. However, due to lack of resources and/or expertise, 
	those in-house versions usually end up not as good as OOPS! appears to be. They also often generate 
	repeating cycles of management or collaborator doubt, as their developers cannot point to 
	independent grounding and acceptance. OOPS! seems to me like a valuable contribution to operational 
	use of ontologies.    (3LRE)
	[10:14] MariCarmenSuarezFigueroa: @Amanda, thank you for your comment.    (3LRF)
	[10:12] TrishWhetzel: Of the ontologies in BioPortal, do you find errors correlated to any of the 
	groups, such as OBO Foundry or UMLS, and/or ontology format .. which is also somewhat an indicator 
	of ontology design patterns?    (3LRG)
	[10:15] JesualdoTomasFernandezBreis: @TrishWhetzel: in another project we have done a systematic 
	analysis of the labels of bioportal ontologies and we found some problems with formats and 
	availability of some files    (3LRH)
	[10:19] TrishWhetzel: @Jesualdo Thanks, I'm interested in these issues    (3LRI)
	[10:25] JesualdoTomasFernandezBreis: @Trish: I will send you an email with the details    (3LRJ)
	[10:16] MeganKatsumi: @Maria: Sorry if you mentioned this already, but how do you decide if a 
	particular characteristic qualifies as a pitfall?    (3LRK)
	[10:17] MariaPovedaVillalon: @Megan we have observed the pitfalls we list as errors in ontologies 
	when we manually analyzed them; however, the same "characteristic" might not be an error in another 
	ontology, so in the end the user decides. Sometimes the error does not need to be checked, but that is 
	not always the case.    (3LRL)
	[10:15] AmandaVizedom: @Maria: A few questions: (1) Is it correct that OOPS! works specifically on 
	OWL? Is it further narrowed to specific dialects (such as DL)? Does your group have any plans or 
	interests in extending to other languages (for example, Common Logic?)    (3LRM)
	[10:19] DougFoxvog: @Amanda, re your first question. OOPS! only accepted OWL RDF/XML when I looked 
	at it in December.    (3LRN)
	[10:16] AmandaVizedom: @Maria: (2) Can OOPS! detect errors (or warnings/suggestions) based on 
	general logical entailments? Does OOPS! make use of, or contain, a general OWL reasoner?    (3LRO)
	[10:19] MariaPovedaVillalon: @Amanda we think about leaving that decision to the user    (3LRP)
	[10:20] MariaPovedaVillalon: @Amanda reasoners already exist and we don't want to reinvent the 
	wheel; however, we could benefit from them to detect more pitfalls, but the computational price would be 
	too high    (3LRQ)
	[10:34] MariaPovedaVillalon: @Amanda in summary, in any case our idea is that we should leave the 
	decision of using reasoners to the user (maybe a checkbox in OOPS!) or point to existing reasoners, 
	giving the user some guidelines about which things to check and common errors    (3LRR)
	[10:22] AmandaVizedom: @Maria (3) Have you run into any difficulties concerning what seem to be 
	pitfalls and the behavior of OWL reasoners? An example that comes to mind: if a property 
	is used to relate two things, one of which is not stated to satisfy the range requirements, 
	it will be inferred that the thing *does* meet the range requirements. But in some 
	(many?) cases, the omission is actually indicative of an error. Would OOPS! treat as a pitfall or 
	warning the fact that the thing in the range is not stated to meet the range requirements?    (3LRS)
	[10:25] MariaPovedaVillalon: @AmandaVizedom excuse me, if I'm not wrong you are talking about instances; 
	right now OOPS! only looks at the schema level    (3LRT)
	[10:26] MariaPovedaVillalon: @Amanda have I answered your question? I do not think I understood it 
	properly, maybe...    (3LRU)
	[10:32] AmandaVizedom: @Maria: That's fine. I think that last question may be confusing if we are 
	accustomed to working at different levels of language expressiveness. If you are working in DL it 
	may be an instance-level issue; less so if working at expressiveness beyond DL. Thanks!    (3LRV)
	[10:17] PeterYim: @MariaPovedaVillalon @MariCarmenSuarezFigueroa - we are contemplating doing a 
	hackathon exercise; it would be great if your team could join us in that effort (we have yet to refine 
	what exactly that "hackathon" would entail, so participant input is solicited)    (3LRW)
	[10:17] MariCarmenSuarezFigueroa: @Peter, yes, you can count on us    (3LRX)
	[10:18] PeterYim: @MariCarmenSuarezFigueroa - fantastic! thank you.    (3LRY)
	[10:18] MariaPovedaVillalon: @Peter sure :)    (3LRZ)
	[10:19] PeterYim: Thanks, Maria.    (3LS0)
	[10:19] KenBaclawski: @Maria: I built a system very similar to yours back in 2004. I called the 
	problems with an ontology symptoms rather than pitfalls, but it is the same idea. It used a 
	rule-based approach which was extendible. One interesting feature was that the symptoms that were 
	generated were in a Symptom ontology and we performed reasoning on the symptoms that were generated 
	to find relationships among symptoms since we found that a single error can generate many symptoms 
	(which, by the way, is the reason for using the word "symptom"). Here is a reference to the paper: 
	K. Baclawski, C. Matheus, M. Kokar and J. Letkowski. Toward a Symptom Ontology for Semantic Web 
	Applications. In ISWC'04, pages 650-667. Lecture Notes in Computer Science 3298:650-667. 
	Springer-Verlag. (2004)    (3LS1)
	[10:20] MariCarmenSuarezFigueroa: @KenBaclawski, thanks for the reference.    (3LS2)
	[10:21] MariaPovedaVillalon: @KenBaclawski, thank you very much for the reference, sounds really 
	interesting and familiar what you said about one error many symptoms...    (3LS3)
	[10:13] DougFoxvog: (re. Maria's slide#14) One can suggest transitivity if the domain is a subclass 
	of the range. They need not be equal.    (3LS4)
	[10:28] MariaPovedaVillalon: @DougFoxvog we check what you said about the suggestion on slide 14    (3LS5)
	[10:29] MariaPovedaVillalon: @DougFoxvog we fixed the errors it had in December; OOPS! was supposed 
	to check that, and if everything is fine it should work by now    (3LS6)
	[10:30] JesualdoTomasFernandezBreis: I think tools like OOPS! are fundamental for ontology 
	engineers, thanks for your work!    (3LS7)
	[10:31] MariaPovedaVillalon: Thank you @Jesualdo    (3LS8)
	[10:31] MariCarmenSuarezFigueroa: @Jesualdo, thank you very much for your comment!    (3LS9)
	[10:12] PeterYim: == SamirTartir presenting ... see: [2-Tartir] slides    (3LSA)
	[10:14] anonymous1 morphed into DennisWisnosky    (3LSB)
	[10:14] anonymous1 morphed into YuvaTarunVarmaDatla    (3LSC)
	[10:17] anonymous2 morphed into NathalieAussenacGilles    (3LSD)
	[10:20] JosephTennis: can someone extract all the citations being shared and put them in one spot?    (3LSE)
	[10:21] SteveRay: @Joseph: There is one spot we are collecting references. Amanda can say more about 
	that (and did in an earlier session).    (3LSF)
	[10:22] JosephTennis: sweet! thanks!    (3LSG)
	[10:23] SteveRay: @Joseph: I should add that it is our shared responsibility to populate it.    (3LSH)
	[10:28] ToddSchneider: Samir, Would you provide the definitions for each of the variables in your 
	metrics in the chat?    (3LSI)
	[10:28] AmandaVizedom: @Steve and @Joseph I will post a note to the summit list under the {biblio} 
	subject soon. I've been away for a bit and have begun getting the Zotero library caught up. 
	Meanwhile, the library itself is at https://www.zotero.org/groups/ontologysummit2013/items    (3LSJ)
	[10:33] DougFoxvog: @Samir: many of your metrics are metrics for knowledge bases. It might be useful 
	to distinguish the two classes of metrics, and have different scores for ontologies (which define 
	types and relations) and knowledge bases (which define individuals and provide information about the 
	individuals by asserting relations that apply to them).    (3LSK)
	[10:36] DougFoxvog: @Samir: I see that you do distinguish multiple ranking types in slides 16 & 17. 
	But defining different sets of rankings for different types of KBs or ontologies might be useful.    (3LSL)
	[10:34] LeoObrst: @all: I think these evaluation tools and metrics would be very useful in the Open 
	Ontology Repository (OOR). Perhaps the speakers would like to join our OOR group and provide 
	potential services?    (3LSM)
	[10:36] MariCarmenSuarezFigueroa: @Leo, yes, we can consider joining the OOR group and see how we 
	can contribute to it    (3LSN)
	[10:40] MariaPovedaVillalon: @Leo where can we find information to join and contribute to OOR?    (3LSO)
	[10:42] ToddSchneider: Maria, http://ontolog.cim3.net/cgi-bin/wiki.pl?OpenOntologyRepository    (3LSP)
	[10:43] MariaPovedaVillalon: thanks    (3LSQ)
	[10:39] LeoObrst: @Samir: can you provide definitions for your variables?    (3LSR)
	[10:43] SamirTartir: @ToddSchneider & @LeoObrst: There is a large number of variables used, e.g., a 
	set of classes, C; a set of relationships, P; an inheritance function, Hc; a set of class 
	attributes, Att.    (3LSS)
	[10:44] SamirTartir: @ToddSchneider & @LeoObrst: The definitions are all included in the paper I 
	referenced right before I started presenting.    (3LST)
	[10:45] SamirTartir: I will be more than happy to send you the paper if you'd like.    (3LSU)
	[10:45] ToddSchneider: Samir, I missed that reference.    (3LSV)
	[10:46] SamirTartir: Todd, here it is again: 
	http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=4338348&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D4338348    (3LSW)
	[10:46] ToddSchneider: Samir, Great. Thank you.    (3LSX)
	[10:47] TorstenHahmann: @Samir: before weighting, are the various metrics standardized (to values 
	between 0 and 1, for example)? Otherwise two metrics with equal weight may still influence the total 
	score differently    (3LSY)
	[10:47] SamirTartir: @Torsten: Yes.    (3LSZ)
	[10:50] SamirTartir: @DougFoxvog: Not sure what you mean here. Maybe discuss this after the current 
	speaker finishes.    (3LT0)
	[10:51] TorstenHahmann: @Samir: I take your answer as they are standardized.    (3LT1)
	[10:53] SamirTartir: @Torsten. Sorry for not being clear. Yes they are standardized.    (3LT2)
	[10:53] TorstenHahmann: @Samir: thanks.    (3LT3)
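
As Samir confirms above, the individual metrics are standardized to values between 0 and 1 before weighting. Below is a minimal sketch of that aggregation step; the metric names loosely follow OntoQA terminology, but the raw values, normalization bounds, and weights are invented for the illustration and are not taken from the OntoQA paper:

    # Illustrative weighted aggregation after normalizing each metric to [0, 1].
    raw_metrics = {"relationship_richness": 0.42,
                   "inheritance_richness": 3.7,
                   "class_richness": 0.8}
    # Assumed observed min/max per metric (e.g., over a corpus of ontologies),
    # used here for simple min-max normalization.
    bounds = {"relationship_richness": (0.0, 1.0),
              "inheritance_richness": (0.0, 10.0),
              "class_richness": (0.0, 1.0)}
    weights = {"relationship_richness": 0.5,
               "inheritance_richness": 0.3,
               "class_richness": 0.2}

    def normalize(name, value):
        lo, hi = bounds[name]
        return (value - lo) / (hi - lo) if hi > lo else 0.0

    # Weights sum to 1, so the overall score also stays in [0, 1].
    score = sum(weights[m] * normalize(m, v) for m, v in raw_metrics.items())
    print("overall score: %.3f" % score)
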
	[10:38] PeterYim: == AstridDuqueRamos presenting ... see: [3-DuqueRamos] slides    (3LT4)
	[10:53] JosephTennis: This was great! Wish I could stay longer. I look forward to using some of 
	these metrics for my work. One question I have is what do these metrics look like on different 
	versions of the same ontology? Do they change dramatically? Or do they not change much at all? It 
	might be something we can look at here. In case you're interested in my work on versioning, you can 
	check out my page: http://joseph-t-tennis.squarespace.com/research-streams/ Perhaps we can 
	collaborate?    (3LT5)
	[10:53] JosephTennis: ciao!    (3LT6)
	[10:52] AmandaVizedom: @Astrid (actually, this applies to @Samir's presentation as well): your 
	approach measures some numeric/topographic qualities of an ontology, such as depth of inheritance, 
	breadth of relationships, number of ancestors. I have seen many cases in which ontology teams, or 
	projects of which they are a part, are required to report such metrics upward to management, and are 
	held in some sense accountable for them, but it is very hard to see whether they are actually 
	indicative of quality (and if so how). It may be that they are meaningful given some interpretation, 
	given some requirements in place, or under some other conditions. What do they mean, in your view?    (3LT7)
	[10:54] TorstenHahmann: +1 to @Amanda's comment: that should be one of the goals of this track (in 
	my opinion)    (3LT8)
	[10:54] JesualdoTomasFernandezBreis: @Amanda: one of the next slides has some comments I think 
	related to yours    (3LT9)
	[10:54] SteveRay: @Amanda & @Astrid: Amanda, I think you are precisely raising the question about 
	intrinsic vs. extrinsic evaluation. Management often cares about the latter more than the former, 
	and sometimes at the expense of attention to the former.    (3LTA)
	[10:57] SamirTartir: @Steve, @Amanda & @Astrid: It's relevant to each user or scenario. I think 
	@Steve 's comment is right on target.    (3LTB)
	[10:58] LeoObrst: @Steve, Amanda, and all: Yes, because the current application is local and of 
	highest priority to management: extrinsic typically is more valued.    (3LTC)
	[10:59] AmandaVizedom: @SteveRay: Although I'm very interested in the intrinsic/extrinsic question, 
	I see *this* question a bit differently. Staying within the intrinsic evaluation topic, there is an 
	independent question about which, of the many intrinsic characteristics an ontology may be said to 
	have, are actually measurements of quality. That a quality exists and can be measured, or even that 
	it has some intuitive or aesthetic appeal, is not enough to establish that it is an aspect of 
	ontology *quality*. The question is: Are these? And if so, why?    (3LTD)
	[11:02] DougFoxvog: +1 @Amanda. Many of the mentioned characteristics are *features* of the 
	ontologies. Whether they are measures of *quality* may in some instances be context dependent.    (3LTE)
	[11:03] TorstenHahmann: I agree with @Doug: independent of whether intrinsic metrics are valued by 
	management, we have to figure out whether they are correlated with the intended qualities    (3LTF)
	[11:03] LeoObrst: @Amanda: perhaps more to your point, an approach such as OntoClean that uses 
	ontological analysis more clearly may have higher real value/quality, but is not necessarily 
	immediately understandable to management, though application value can be demonstrated.    (3LTG)
	[11:00] SteveRay: We could discuss (at a later time) whether ontology development environments could 
	hard-wire evaluation during the ontology development process.    (3LTH)
	[11:00] SteveRay: @Amanda: Need to think about your question in a few minutes.    (3LTI)
	[11:03] AmandaVizedom: @Steve, et al., it may also be that *extrinsically* relevant characteristics 
	are so consistently relevant in some domain / context of application that folks trained in that 
	context believe them to be *intrinsic*. I do not mean to pre-judge the question for the 
	characteristics I mentioned. Rather, I would be interested in the presenters' thoughts on those, 
	and whether they can offer particular reasons for considering those intrinsic measurements to be 
	measures of *quality*.    (3LTJ)
	[11:01] MatthewWest: An even better approach is to have a development method that avoids the quality 
	problems.    (3LTK)
	[11:01] PeterYim: == Q&A and Open Discussion ...    (3LTL)
	[11:01] PeterYim: question for all panelists - how do some of the more rigorously developed 
	ontologies (like BFO, DOLCE, PSL, SUMO, CYC, etc.) fare, when put through your 
	evaluation system/tool; anyone tried? observations & insights gained?    (3LTM)
	[11:15] MariCarmenSuarezFigueroa: We have already evaluated DOLCE with OOPS! (among other 
	established ontologies)    (3LTN)
	[11:17] TerryLongstreth: @MariCarmenSuarezFigueroa - is there a URL for the results of the DOLCE 
	evaluation?    (3LTO)
	[11:19] MariCarmenSuarezFigueroa: @terry, results are not available yet    (3LTP)
	[11:15] JesualdoTomasFernandezBreis: We are in the process of evaluating all the BioPortal 
	ontologies with OQuaRE, but we do not have the results yet    (3LTQ)
	[11:21] AmandaVizedom: Following up on Peter's question: For all panelists: What are the 
	expressivity constraints or expectations of these tools? Are they limited to DL ontologies? 
	OWL-Full? Has anyone applied their techniques to ontologies represented in FOL or higher languages?    (3LTR)
	[11:22] MariCarmenSuarezFigueroa: (re. Peter's follow-up question on whether they had tried how the 
	tools scale with larger ontologies like SUMO or CYC) We did not try with Cyc yet, for 
	example.    (3LTS)
	[11:03] anonymous1 morphed into HashemShmaisani    (3LTT)
	[11:03] DuaneNickull: (ref. the reverb/echo when DougFoxvog tried to patch in) Nice audio effects    (3LTU)
	[11:03] MariaPovedaVillalon: :)    (3LTV)
	[11:03] DuaneNickull: Very Dr. Who - ish    (3LTW)
	[11:04] DuaneNickull: Exterminate, exterminate, exterminate.....    (3LTX)
	[11:04] MariCarmenSuarezFigueroa: :O    (3LTY)
	[11:04] AmandaVizedom: Audio sounds like we have fallen down the rabbit hole!    (3LTZ)
	[11:04] BobbinTeegarden: @DougFoxvog: 'context dependent'... or, in the eye of the beholder?    (3LU0)
	[11:05] MatthewWest: Some very interesting presentations, but I'm afraid I have to go now.    (3LU1)
	[11:05] anonymous morphed into AsuncionGomezPerez    (3LU2)
	[11:08] JimDisbrow: Steve's first point of a measurement being "well-designed" is: "Proper use of 
	various relations found within an ontology". This has been an issue that has been sorely 
	underrepresented, but may now be breaking through - as demonstrated by the presentations. Insertion 
	of reflexivity in relationships, however, was not mentioned as a criterion. Is there any progress in 
	implementing these ontological concepts? (ref. below [11:28])    (3LU3)
	[11:11] LeoObrst: @Doug: I think Samir's analysis of both the ontology and the KB is very useful for 
	ontologists, even though KBs will potentially be different across applications, companies, etc.    (3LU4)
	[11:12] DougFoxvog: @Leo: I agree. But I'm suggesting that these distinctions should be identified.    (3LU5)
	[11:15] TerryLongstreth: @Leo - I'm not convinced that there's an objective procedure for separating 
	the two. Linnean classification requires a 'type instance' to fully describe a species type. Would 
	that be in the Ontology, or in the KB?    (3LU6)
	[11:15] SamirTartir: Thanks Leo. Doug: I agree that it might be useful.    (3LU7)
	[11:18] SamirTartir: (re. Amanda's positive verbal remark about the relationship diversity metric) 
	Thanks @Amanda    (3LU8)
	[11:19] LeoObrst: @Terry: to your point, that is an issue. E.g., usually classes are considered 
	universals, and instances particulars, but some ontologies (and metaphysics) don't make those 
	distinctions, e.g., identifying all "ontology" notions as particulars (e.g., tropes, etc.)    (3LU9)
	[11:20] DougFoxvog: @Terry: OWL DL does not allow meta-classes, so that the instances of species, 
	genus, bio-kingdom, etc. are classes, themselves. A system that merely defines these, their 
	hierarchy, and relations that may apply to them would be, imho, an ontology. However, if data is 
	provided about these taxons (geological range, endangerment, diet, etc.), then it would be a KB, 
	even though what is being described are themselves classes.    (3LUA)
	[11:22] TerryLongstreth: Then I think evaluation has to include the KB if it's required for full 
	interpretation of the ontology    (3LUB)
	[11:22] LeoObrst: @Terry: I agree. Both need to be evaluated.    (3LUC)
	[11:28] JimDisbrow: @Steve: In your first slide, your first point of a measurement for being 
	well-designed is: "Proper use of various relations found within an ontology". This has been an 
	issue that has been sorely underrepresented, but may now be breaking through - as demonstrated by 
	the presentations. Insertion of reflexivity in relationships, however, was not mentioned as a 
	criterion. Similarly, there was no mention of an active "not" verb (not just the English negation 
	term), concatenated into the middle term of the OWL "triple". A question for the presenters: Is 
	there any progress in implementing these ontological concepts?    (3LUD)
	[11:31] SteveRay: Anybody want to comment on Jim's question about addressing reflexivity?    (3LUE)
	[11:34] JimDisbrow: I would offer that an ontology without proper use of relationships cannot claim 
	"quality".    (3LUF)
	[11:22] LeoObrst: @Amanda: (re. Amanda's verbal remark questioning some of the metrics and how they 
	relate to "quality") Is your issue about the definition of "quality"? I think the notion of quality 
	will vary between an ontologist and an application user/manager.    (3LUG)
	[11:25] AmandaVizedom: @Leo, yes, I am asking whether -- and if so, why -- these characteristics are 
	intrinsic aspects of *quality*. It could be that they are intrinsic metrics, but the relevance to 
	quality depends on extrinsic factors.    (3LUH)
	[11:25] AmandaVizedom: @Samir, I think that nails it. Thank you. (re. Samir's verbal response.)    (3LUI)
	[11:25] SamirTartir: @Amanda: Thank you.    (3LUJ)
	[11:25] MariaPovedaVillalon: In addition, there is a temporal aspect to that: a class can have few 
	instances today but may be populated later    (3LUK)
	[11:28] AmandaVizedom: @Leo and @Samir: My reason for wanting the relationship addressed may also be 
	a reason that some reviewers objected: there is a lot of history of these being treated as quality 
	metrics, without any obvious reason. @Samir, if that's true, then making clear the relationship you 
	see, as you articulated it, might well satisfy those critics.    (3LUL)
	[11:29] DougFoxvog: @Maria & @Mari: if one has several local ontologies, one that includes the 
	other, can the combined ontologies be analyzed together?    (3LUM)
	[11:29] MariaPovedaVillalon: @Doug you can either make them available online so that the owl:imports 
	can be resolved, or gather them in one file and paste it into the OOPS! text box    (3LUN)
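
One generic way to gather several local OWL/RDF files into a single document before pasting it into (or uploading it to) a validation tool, along the lines Maria describes above, is sketched below using the rdflib library. The file names are placeholders, and this is not part of OOPS! itself:

    # Merge several local OWL/RDF files into one graph and write a single combined file.
    from rdflib import Graph

    combined = Graph()
    for path in ["core-ontology.owl", "extension-ontology.owl"]:   # placeholder file names
        combined.parse(path, format="xml")   # RDF/XML input; use format="turtle" for .ttl files

    combined.serialize(destination="combined-ontology.owl", format="xml")
    print("merged graph has %d triples" % len(combined))
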
	[11:31] LeoObrst: Behind some of this discussion is the presupposition that evaluation is only about 
	quality.    (3LUO)
	[11:31] PeterYim: @Leo - is that presupposition proper (in the context of this summit) or not?    (3LUP)
	[11:32] AmandaVizedom: Suggestion: So, we see the likelihood that there are (many, I'd say) 
	*intrinsic* characteristics of an ontology such that the relevance of each characteristic to quality / 
	suitability is *extrinsic* (at least partially).    (3LUQ)
	[11:33] SteveRay: @Amanda: I agree with you.    (3LUR)
	[11:34] LeoObrst: @Peter: I think comparison of ontologies is an important issue for ontology 
	evaluation, and one person's notion of "quality" may vary from another person's, so comparing 
	different metrics and allowing weighting of various metrics may be useful.    (3LUS)
	[11:34] SamirTartir: @Amanda: You mean intrinsic-extrinsic links? That's a good idea.    (3LUT)
	[11:37] AmandaVizedom: @Leo, that may be so. Or it may simply be that we want/need to clarify the 
	relationship. Some evaluations (or evaluation tools) may be designed to rank ontologies by quality 
	without further information. Those, IMHO, are misguided. What is more promising, IMHO, is a 
	framework/toolkit with the capability of evaluating many characteristics, perhaps neutrally to their 
	relevance to quality in specific cases. It could be up to the user to select which characteristics 
	they care about. Or, in my fantasy system (such as that which JoanneLuciano and others have 
	proposed), a tool in which the use case could be described and ontologies evaluated according to 
	relevant metrics.    (3LUU)
	[11:37] TerryLongstreth: @Leo - any metacharacteristic may be the basis for a quality judgement, 
	depending on what's important to the user community. Size, for example, may be impactful in 
	determining which system resources will need procurement actions.    (3LUV)
	[11:38] TorstenHahmann: To add to the example that Fabian used in his remark (depth may be 
	useful only in specific contexts): the context also determines how to measure depth. There are 
	dozens of ways one could measure depth, for example: average, shallowest, deepest, relative to 
	breadth, standard deviation, etc. Which of those metrics properly measures the intended quality (a 
	quality "specificity", for example)? Something to explore in the future.    (3LUW)
	[11:40] DougFoxvog: @Torsten: that would depend upon the task. It might be interesting to define 
	desired features of ontologies & KBs using some of the metrics that have been described.    (3LUX)
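
Torsten's point that "depth" can be measured in many different ways can be illustrated with a small sketch over a toy class hierarchy; the hierarchy and the particular statistics computed below are invented for the example:

    # Several ways to measure the "depth" of a toy class hierarchy, showing that
    # the choice of metric changes the resulting number.
    from statistics import mean, pstdev

    # parent class -> set of direct subclasses (invented example)
    children = {
        "Thing": {"Agent", "Event"},
        "Agent": {"Person", "Organization"},
        "Person": {"Student"},
        "Organization": set(),
        "Event": set(),
        "Student": set(),
    }

    def leaf_depths(root="Thing", depth=1):
        """Depth of every leaf class, counting the root as level 1."""
        subs = children.get(root, set())
        if not subs:
            return [depth]
        return [d for c in subs for d in leaf_depths(c, depth + 1)]

    depths = leaf_depths()
    breadth = len(children["Thing"])          # number of top-level classes
    print("deepest:       ", max(depths))
    print("shallowest:    ", min(depths))
    print("average depth: ", round(mean(depths), 2))
    print("std deviation: ", round(pstdev(depths), 2))
    print("depth/breadth: ", round(max(depths) / breadth, 2))
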
	[11:37] PeterYim: great session ... thanks everyone!    (3LUY)
	[11:37] SamirTartir: Thank you all. Very interesting, looking forward to more discussions.    (3LUZ)
	[11:37] JimDisbrow: thanks and bye    (3LV0)
	[11:37] MariCarmenSuarezFigueroa: Thank you very much for your comments and suggestions.    (3LV1)
	[11:37] MariCarmenSuarezFigueroa: Bye!!    (3LV2)
	[11:37] AsuncionGomezPerez: bye    (3LV3)
	[11:37] LeoObrst: Thanks all!    (3LV4)
	[11:37] MohammadAqtash: Thanks All Bye!    (3LV5)
	[11:37] MariaPovedaVillalon: Thank you for your comments :-) bye    (3LV6)
	[11:37] JesualdoTomasFernandezBreis: Thanks and bye!!    (3LV7)
	[11:38] DougFoxvog: This was a very good session! Bye!    (3LV8)
	[11:38] PeterYim: join us again, same time next week, for OntologySummit2013 session-04: "Building 
	Ontologies to Meet Evaluation Criteria - I" - Co-chairs: MatthewWest & MikeBennett - 
	http://ontolog.cim3.net/cgi-bin/wiki.pl?ConferenceCall_2013_02_07    (3LV9)
	[11:37] PeterYim: -- session ended: 11:37 am PST --    (3LVA)
	[11:38] List of attendees: AmandaVizedom, AnatolyLevenchuk, AstridDuqueRamos, AsuncionGomezPerez, 
	BobSchloss, BobbinTeegarden, BruceBray, IsmailcemBudakArpinar, CarmenChui, ClarePaul, DavidMakovoz, 
	DavidLeal, DennisWisnosky, DougFoxvog, DuaneNickull, FabianNeuhaus, FranLightsom, GeraldRadack, 
	HashemShmaisani, HensonGraves, JamesOdell, JesualdoTomasFernandezBreis, JimDisbrow, 
	JoaoPauloAlmeida, JoelBender, JosephTennis, KenBaclawski, LeoObrst, MariCarmenSuarezFigueroa, 
	MariaPovedaVillalon, MarkFox, MatthewWest, MeganKatsumi, MichaelGruninger, MikeDenny, MikeDean, 
	MohammadAqtash, NathalieAussenacGilles, PeterYim, RamSriram, SamirTartir, SteveRay, TerryLongstreth, 
	ToddSchneider, TorstenHahmann, TrishWhetzel, YuvaTarunVarmaDatla, vnc2    (3LVB)
 -- end of in-session chat-transcript --    (3LC3)

Additional Resources:    (3LCA)


How To Join (while the session is in progress)    (3L6P)

Conference Call Details    (3LA2)

Attendees    (3LAZ)