OntologySummit2013: Virtual Panel Session-02 - Thu 2013-01-24    (3K0Q)

Summit Theme: "Ontology Evaluation Across the Ontology Lifecycle"    (3K0R)

Summit Track Title: Track-B: Extrinsic Aspects of Ontology Evaluation    (3K0S)

Session Topic: Extrinsic Aspects of Ontology Evaluation: Finding the Scope    (3L0R)

Panelists / Briefings:    (3L0T)

Archives:    (3L0Z)

Abstract:    (3L2E)

OntologySummit2013 Session-02: "Extrinsic Aspects of Ontology Evaluation: Finding the Scope" - intro slides    (3L2F)

This is our 8th Ontology Summit, a joint initiative by NIST, Ontolog, NCOR, NCBO, IAOA & NCO_NITRD with the support of our co-sponsors. The theme adopted for this Ontology Summit is: "Ontology Evaluation Across the Ontology Lifecycle."    (3L2G)

Currently, there is no agreed methodology for development of ontologies, and there are no universally agreed metrics for ontology evaluation. At the same time, everybody agrees that there are a lot of badly engineered ontologies out there, thus people use -- at least implicitly -- some criteria for the evaluation of ontologies.    (3L2H)

During this OntologySummit, we seek to identify best practices for ontology development and evaluation. We will consider the entire lifecycle of an ontology -- from requirements gathering and analysis, through to design and implementation. In this endeavor, the Summit will seek collaboration with the software engineering and knowledge acquisition communities. Research in these fields has led to several mature models for the software lifecycle and the design of knowledge-based systems, and we expect that fruitful interaction among all participants will lead to a consensus for a methodology within ontological engineering. Following earlier Ontology Summit practice, the synthesized results of this season's discourse will be published as a Communiqué.    (3L2I)

At the Launch Event on 17 Jan 2013, the organizing team provided an overview of the program and of how we will be framing the discourse around the theme of this OntologySummit. Today's session is one of the events planned.    (3L3T)

As the area of ontology evaluation is still new, its boundaries and dimensions have yet to be defined. We propose to ask the community (panelists and participants alike) to provide input during this session on the dimensions of ontology evaluation and on the methodologies that can be applied.    (3L2J)

More details about this OntologySummit are available at: OntologySummit2013 (homepage for this summit)    (3L2K)

Briefings:    (3L2L)

Agenda:    (3L2W)

OntologySummit2013 - Panel Session-02    (3L2X)

Proceedings:    (3L33)

Please refer to the above    (3L34)

IM Chat Transcript captured during the session:    (3L35)

 see raw transcript here.    (3L36)
 (for better clarity, the version below is a re-organized and lightly edited chat-transcript.)
 Participants are welcome to make light edits to their own contributions as they see fit.    (3L37)
 -- begin in-session chat-transcript --    (3L38)
	[09:03] PeterYim:  Welcome to the    (3LI1)
	 = OntologySummit2013: Virtual Panel Session-02 - Thu 2013-01-24 =    (3LI2)
	Summit Theme: Ontology Evaluation Across the Ontology Lifecycle    (3LI3)
	* Summit Track Title: Track-B: Extrinsic Aspects of Ontology Evaluation    (3LI4)
	Session Topic: Extrinsic Aspects of Ontology Evaluation: Finding the Scope    (3LI5)
	* Session Co-chairs: Dr. ToddSchneider (Raytheon) and Mr. TerryLongstreth (Independent Consultant)    (3LI6)
	Panelists / Briefings:    (3LI7)
	* Dr. ToddSchneider (Raytheon) & Mr. TerryLongstreth (Independent Consultant) - "Evaluation Dimensions, A Few"
	* Mr. HansPolzer (Lockheed Martin Fellow (ret.)) - "Dimensionality of Evaluation Context for Ontologies"
	* Ms. MaryBalboni et al. (Raytheon) - "Black Box Testing Paradigm in the Lifecycle"
	* Ms. MeganKatsumi (University of Toronto) - "A Methodology for the Development and Verification of Expressive Ontologies"    (3LI8)
	Logistics:    (3LI9)
	* Refer to details on session page at: http://ontolog.cim3.net/cgi-bin/wiki.pl?ConferenceCall_2013_01_24    (3LIA)
	* (if you haven't already done so) please click on "settings" (top center) and morph from "anonymous" to your RealName (in WikiWord format)    (3LIB)
	* Mute control: *7 to un-mute ... *6 to mute    (3LIC)
	* Can't find Skype Dial pad?
	** for Windows Skype users: it's under the "Call" dropdown menu as "Show Dial pad"
	** for Linux Skype users: please note that the dial-pad is only available on v4.1 (or later) or the earlier Skype versions 2.x;
           if the dialpad button is not shown in the call window you need to press the "d" hotkey to enable it.    (3LID)
	Attendees: ToddSchneider (co-chair), TerryLongstreth (co-chair), AlanRector, AnatolyLevenchuk, 
	AngelaLocoro, BobSchloss, BobbinTeegarden, CarmenChui, DaliaVaranka, DonghuanTang, FabianNeuhaus, 
	FranLightsom, FrankOlken, GaryBergCross, HansPolzer, JackRing, JoelBender, JohnBilmanis, 
	KenBaclawski, LalehJalali, LeoObrst, MariCarmenSuarezFigueroa, MaryBalboni, MatthewWest, 
	MaxPetrenko, MeganKatsumi, MichaelGruninger, MikeDean, MikeRiben, OliverKutz, PavithraKenjige, 
	QaisAlKhazraji, PeterYim, RamSriram, RichardMartin, RosarioUcedaSosa, SteveRay, TillMossakowski, 
	TorstenHahmann, TrishWhetzel    (3LIE)
	 == Proceedings: ==
	[08:57] anonymous morphed into Donghuan    (3LIF)
	[09:13] Donghuan morphed into PennState:Qais    (3LIG)
	[09:14] PennState:Qais morphed into PennState    (3LIH)
	[09:17] anonymous1 morphed into MaxPetrenko    (3LII)
	[09:21] anonymous2 morphed into MaryBalboni    (3LIJ)
	[09:23] anonymous1 morphed into CarmenChui    (3LIK)
	[09:23] anonymous1 morphed into FabianNeuhaus    (3LIL)
	[09:24] PennState morphed into Donghuan    (3LIM)
	[09:24] Donghuan morphed into Qais    (3LIN)
	[09:24] Qais morphed into PennState    (3LIP)
	[09:26] PennState morphed into Qais_Donghuan    (3LIT)
	[09:24] anonymous morphed into Angela Locoro    (3LIO)
	[09:25] Angela Locoro morphed into AngelaLocoro    (3LIQ)
	[09:26] anonymous morphed into JohnBilmanis    (3LIR)
	[09:26] anonymous morphed into SteveRay    (3LIS)
	[09:27] MichaelGruninger morphed into MeganKatsumi    (3LIU)
	[09:29] MatthewWest: Just a note, but the Session page shows the conference starting at 1630 UTC 
	when it is actually 1730 UTC.    (3LIV)
	[09:55] PeterYim: @MatthewWest - thank you for the prompt ... sorry, everyone, the session 
	start-time should be: 9:30am PST / 12:30pm EST / 6:30pm CET / 17:30 GMT/UTC    (3LIW)
	[09:30] anonymous morphed into RosarioUcedaSosa    (3LIX)
	[09:31] anonymous morphed into RamSriram    (3LIY)
	[09:33] anonymous1 morphed into TorstenHahmann    (3LIZ)
	[09:55] anonymous morphed into laleh    (3LJ0)
	[09:59] PeterYim: @laleh - would you kindly provide your real name (in WikiWord format, if you 
	please) and morph into it with "Settings" (button at top center of window)    (3LJ1)
	[10:01] laleh morphed into LalehJalali    (3LJ2)
	[10:03] PeterYim: @LalehJalali - thank you, welcome to the session ... are you one of RameshJain's 
	students at UCI?    (3LJ3)
	[10:08] LalehJalali: Yes    (3LJ4)
	[09:34] PeterYim: == [0-Chair] ToddSchneider & TerryLongstreth (co-chairs) opening the session ...    (3LJ5)
	[09:37] anonymous morphed into FrankOlken    (3LJ6)
	[09:39] PeterYim: == [2-Polzer] HansPolzer presenting ...    (3LJ7)
	[09:40] List of members: AlanRector, AnatolyLevenchuk, AngelaLocoro, BobbinTeegarden, BobSchloss, 
	CarmenChui, DaliaVaranka, FabianNeuhaus, FrankOlken, FranLightsom, HansPolzer, JoelBender, 
	JohnBilmanis, KenBaclawski, LeoObrst, MariCarmenSuarezFigueroa, MaryBalboni, MatthewWest, 
	MaxPetrenko, MeganKatsumi, MichaelGruninger, MikeDean, MikeRiben, OliverKutz, PeterYim, 
	Qais_Donghuan, RamSriram, RichardMartin, RosarioUcedaSosa, SteveRay, TerryLongstreth, ToddSchneider, 
	TorstenHahmann, vnc2    (3LJ8)
	[09:42] anonymous morphed into TrishWhetzel    (3LJ9)
	[09:44] MikeRiben: are we on slide 5?    (3LJA)
	[09:47] JackRing: Pls stop using "Next Slide" and say number of slide    (3LJB)
	[09:47] anonymous morphed into GaryBergCross    (3LJC)
	[09:52] ToddSchneider: Jack, Hans is on slide 7.    (3LJD)
	[09:45] JackRing: Is your Evaluation Context different from Ontology Context?    (3LJE)
	[09:56] ToddSchneider: Qais, if you have a question would you type it in the chat box?    (3LJF)
	[09:56] PeterYim: @Qais_Donghuan - we will hold questions off till after the presentations are done, 
	please post your questions on the chat-space (as a placeholder/reminder) for now    (3LJG)
	[09:55] TerryLongstreth: On slide 8, Hans mentions reasoners as an aspect of the ontology, but as 
	Uschold has pointed out, the reasoner may be used as a test/evaluation tool    (3LJH)
	[09:57] ToddSchneider: Terry, the evaluation(s) may need to be redone if the reasoner is changed.    (3LJI)
	[10:03] TerryLongstreth: Sure. I was just pointing out that the reasoner may be a tool for extrinsic 
	evaluation.    (3LJJ)
	[10:04] ToddSchneider: Terry, yes a tool used in evaluation and the subject of evaluation itself 
	(e.g., performance).    (3LJK)
	[10:08] SteveRay: @Hans: It would help if you could provide some concrete examples that would bring 
	your observations into focus.    (3LJL)
	[10:10] MichaelGruninger: @Hans: In what sense is ontology compatibility considered to be a rating?    (3LJM)
	[10:09] PeterYim: == [1-Schneider] ToddSchneider presenting, and soliciting input on Ontology 
	Evaluation dimensions ...    (3LJN)
	[10:01] JackRing: (ref. ToddSchneider's solicitation for input on dimensions) Reusefulness of an 
	ontology or subset(s) thereof?    (3LJO)
	[10:08] JackRing: This is a good start toward an ontology of ontology evaluation but we have a 
	loooong way to go.    (3LJP)
	[10:10] anonymous morphed into PavithraKenjige    (3LJQ)
	[10:15] JackRing: In systems thinking the three basic dimensions are Quality, Parsimony, Beauty    (3LJR)
	[10:15] ToddSchneider: The URL for adding to the list of possible evaluation dimensions is 
	http://ontolog.cim3.net/cgi-bin/wiki.pl?OntologySummit2013_Extrinsic_Aspects_Of_Ontology_Evaluation_CommunityInput    (3LJS)
	[10:15] MariCarmenSuarezFigueroa: In the legal part, maybe we should also consider license (and not 
	only copyright)    (3LJT)
	[10:15] TerryLongstreth: Thanks Mari Carmen    (3LJU)
	[10:16] FabianNeuhaus: @Todd, we need more than a list. We need definitions of the terms on your 
	evaluation dimensions list, because they are not self-explanatory.    (3LJV)
	[10:16] MatthewWest: Relevance, Clarity, Consistency, Accessibility, Timeliness, Completeness, 
	Accuracy, Costs (development, maintenance), Benefits    (3LJW)
	[10:17] MatthewWest: Provenance    (3LJX)
	[10:17] ToddSchneider: Fabian, yes we will need definitions, context, and possibly intent. But first 
	I'd like to conduct a simple gathering exercise.    (3LJY)
	[10:18] MatthewWest: Modularity    (3LJZ)
	[10:17] FabianNeuhaus: @Todd: it seems that your "evaluation dimensions" are very different from 
	Hans' dimensions.    (3LK0)
	[10:20] ToddSchneider: Fabian, yes. Hans was talking about context. I'm thinking of things more 
	directly related to evaluation criteria. Both Hans and I like metaphors from physics.    (3LK1)
	[10:48] LeoObrst: @Todd: your second set of slides, re: slide 4: Precision, Recall, Coverage, 
	Correctness and perhaps others will also be important for Track A Intrinsic Aspects of Ontology 
	Evaluation. Perhaps your metrics will be: Precision With_Respect_To(domain D, requirement R), etc.? 
	Just a thought.    (3LK2)
	[10:21] PeterYim: == [3-Balboni] MaryBalboni presenting ...    (3LK3)
	[10:21] TerryLongstreth: Mary's term: CSCI - Computer Software Configuration Item - smallest unit of 
	testing at some level (varies by customer: sometimes a module, sometimes a capability ...)    (3LK4)
	[10:23] TerryLongstreth: Current speaker - MaryBalboni - slides 3-Balboni    (3LK5)
	[10:27] BobbinTeegarden: @Mary, slide 4 testing continuum -- may need to go one more step: 'critical 
	testing' is in actual usage (a step beyond beta), with the feedback loop that creates continual 
	improvement. Might want to extend the thinking to 'usage as a test' and ongoing criteria in field 
	usage?    (3LK6)
	[10:29] TerryLongstreth: @Bobbin - good point and note that in many cases, evaluation may not start 
	until (years?) after the ontology has been put into continuous usage    (3LK7)
	[10:29] TillMossakowski: how does it work that injection of bugs leads to finding more (real) bugs? 
	Just because there is more overall debugging effort?    (3LK8)
	[10:30] FabianNeuhaus: @Till: I think it allows you to evaluate the coverage of your tests.    (3LK9)
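	(Editor's note: Fabian's point -- that seeded defects let you estimate how thorough the testing is -- is the classic defect-seeding estimate from software testing. A minimal sketch, with invented numbers purely for illustration:)

```python
def estimate_total_defects(seeded, seeded_found, real_found):
    """Defect-seeding (fault-injection) estimate.

    Assumes testers detect seeded and real defects at the same rate:
    detection_rate = seeded_found / seeded, so the estimated total
    number of real defects is real_found / detection_rate.
    """
    if seeded_found == 0:
        raise ValueError("no seeded defects found; detection rate is unknown")
    detection_rate = seeded_found / seeded
    return real_found / detection_rate

# Example: 20 bugs injected, 15 of them found, 30 real bugs found.
# Estimated real-defect total: 30 / (15/20) = 40, i.e. ~10 still latent.
print(estimate_total_defects(seeded=20, seeded_found=15, real_found=30))  # -> 40.0
```

	(The same ratio also characterizes test coverage: finding 15 of 20 seeded bugs suggests the test suite exposes roughly 75% of defects of that kind.)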
	[10:33] JackRing: It seems that your testing is focused on finding bugs as contrasted to discovering 
	dynamic and integrity limits. Instead of "supports system conditions" it should be "discovers how 
	ontology limits system envelope"    (3LKA)
	[10:35] JackRing: Once we understand how to examine a model for progress properties and integrity 
	properties we no longer need to run a bunch of tests to determine ontology efficacy.    (3LKB)
	[10:29] SteveRay: @Mary: Some of your testing examples look more like what we would call intrinsic 
	evaluation. Specifically I'm thinking of your example of finding injected bugs.    (3LKC)
	[10:59] MaryBalboni: @SteveRay: Injected bugs - yes it is intrinsic to those that inject the 
	defects, but would be extrinsic to the testers that are discovering defects ...    (3LKD)
	[11:01] SteveRay: @Mary: I would agree with you provided that the testers are testing via blackbox 
	methods such as performance given certain inputs, and not by examining the code for logical or 
	structural bugs. Are we on the same page?    (3LKE)
	[11:03] MaryBalboni: @SteveRay - absolutely!    (3LKF)
	[10:49] BobbinTeegarden: @JackRing Would 'effectiveness' fall under beauty? What criteria?    (3LKG)
	[10:58] JackRing: @Bobbin, Effect-iveness is a Quality factor. Beauty is in the eye of the 
	beer-holder.    (3LKH)
	[10:37] TerryLongstreth: Example of business rule: ask bank for email when account drops below $200. 
	Evaluate by cashing checks until balance below threshold.    (3LKI)
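	(Editor's note: Terry's business-rule example can be sketched as a black-box check, where the evaluator only observes the notifications, not the implementation. All names and numbers below are invented for illustration:)

```python
LOW_BALANCE_THRESHOLD = 200  # dollars, per the rule in the transcript

def process_withdrawal(balance, amount, notifications):
    """Apply a withdrawal and fire the low-balance rule.

    Black-box style: the evaluator observes only the notifications
    list that the system emits, not the internal logic.
    """
    balance -= amount
    if balance < LOW_BALANCE_THRESHOLD:
        notifications.append(f"low balance: ${balance}")
    return balance

# Evaluate by "cashing checks" until the threshold is crossed.
notes = []
balance = 500
for check in (150, 100, 120):   # balance: 350 -> 250 -> 130
    balance = process_withdrawal(balance, check, notes)

print(balance, notes)  # -> 130 ['low balance: $130']
```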
	[10:36] ToddSchneider: Leo, have you cloned yourself?    (3LKJ)
	[10:37] LeoObrst: No, I had to reboot firebox and it had some fun.    (3LKK)
	[10:41] JackRing: No one has mentioned the dimension of complexness. Because ontologies quickly 
	become complex topologies then the response time becomes very important if implemented on a von 
	Neumann architecture. Therefore the structure of the ontology for efficiency of response becomes an 
	important dimension.    (3LKL)
	[10:42] BobbinTeegarden: At DEC, we used an overlay on all engineering for RAMPSS -- Reliability, 
	Availability, Maintainability, Performance, Scalability, and Security. Maybe these all apply for 
	black box here? Mary has cited some of them...    (3LKM)
	[10:56] MaryBalboni: @BobbinTeegarden: re ongoing criteria in field usage - yes, during what we call 
	sustainment after delivery, upgrades are sent out, acceptance tests are repeated, and depending on how 
	much is changed, the testing may only be regression of specific areas in the system.    (3LKN)
	[10:43] LeoObrst: @MaryBalboni: re: slide 14: back in the day, we would characterize 3 kinds of 
	integrity: 1) domain integrity (think value domains in a column, i.e., char, int, etc.), 2) 
	referential integrity (key relationships: primary/foreign), 3) semantic integrity (now called 
	business rules). Ontologies do have these issues. On the ontology side, they can be handled 
	slightly differently: e.g., referential integrity (really mostly structural integrity) will be 
	handled differently based on Open World Assumption (e.g., in OWL) or Closed World Assumption (e.g., 
	in Prolog), with the latter being enforced in general by integrity constraints.    (3LKO)
	[10:52] MaryBalboni: @LeoObrst - thanks for feedback - since I am not an expert in Ontology it is 
	very nice to see that these testing paradigms are reusable - and tailorable.    (3LKP)
	[10:44] PeterYim: == [4-Katsumi] MeganKatsumi presenting ...    (3LKQ)
	[10:53] LeoObrst: @Megan: NicolaGuarino for our upcoming (Mar. 7, 2013) Track A session will talk 
	along the lines of your slides 8, etc.    (3LKR)
	[10:52] TillMossakowski: Is it always clear what the intended models are? After all, initially you 
	will have only an informal understanding of the domain, which will be refined during the process of 
	formalisation. Only in this process, the class of intended models becomes clearer.    (3LKS)
	[10:54] MichaelGruninger: @Till: At any point in development, we are working with a specific set of 
	intended models, which is why we call this verification. Validation is addressing the question of 
	whether or not we have the right set of intended models.    (3LKT)
	[10:56] MichaelGruninger: We formalize the ontology's requirements as the set of intended models (or 
	indirectly as a set of competency questions). It might not always be clear what the intended models 
	are, but this is analogous to the case in software development when we are not clear as to what the 
	requirements are.    (3LKU)
	[10:56] TillMossakowski: @Michael: OK, that is similar as in software validation and verification. 
	But then validation should be mentioned, too.    (3LKV)
	[10:56] ToddSchneider: Michael, so there's a presumption that you have extensive explicit knowledge 
	of the intended model(s), correct?    (3LKW)
	[10:58] MichaelGruninger: @Todd: since intended models are the formalization of the requirements, 
	extensive explicit knowledge of intended models is equivalent to "extensive explicit knowledge 
        about the requirements"    (3LKX)
	[10:57] LeoObrst: @Till, Michael: one issue is the mapping of the "conceptualization" to the 
	intended models, right? I guess Michael's requirements are in effect statements/notions of the 
	conceptualization. Is that right?    (3LKZ)
	[10:59] MichaelGruninger: @LeoObrst: I suppose there could be the case where someone incorrectly 
	specified the intended models or competency questions that formalize a particular requirement (i.e. 
	the conceptualization is wrong)    (3LL0)
	[10:59] TillMossakowski: It seems that two axiomatisations (requirements and design) are compared 
	with each other. The requirements describe the intended models. Is this correct?    (3LL1)
	[11:00] MichaelGruninger: @Till: We would say that the intended models describe the requirements.    (3LL2)
	[11:01] MichaelGruninger: @Till: The notion of comparing axiomatizations arises primarily when we 
	use the models of some other ontology as a way of formalizing the intended models of the ontology we 
	are evaluating    (3LL3)
	[11:02] TillMossakowski: @Michael: but you cannot give the set of intended models to a prover, only 
	an axiomatisation of it. Hence it seems that you are testing two different axiomatisations against 
	each other.    (3LL4)
	[11:00] ToddSchneider: All, due to a changing schedule I need to leave this session early. Cheers.    (3LL5)
	[11:02] MariCarmenSuarezFigueroa: We could also consider the verification of requirements 
	(competency questions) using e.g. SPARQL queries.    (3LL6)
	[11:04] PeterYim: @MeganKatsumi - ref. your slide#4 ... would you see some "fine tuning" after the 
	ontology has been committed to "Application" - adjustment to the "Requirements" and "Design" 
	possibly?    (3LL7)
	[11:06] TerryLongstreth: Fabian suggests that Megan's characterization of semantic correctness is 
	too strong...    (3LL8)
	[11:09] MichaelGruninger: @Till: Yes, when we use theorem proving, we need to use the axiomatization 
	of another theory. However, there are also cases in which we verify an ontology directly in the 
	metatheory. In terms of COLORE, we need to use this latter approach for the core ontologies.    (3LL9)
	[11:10] TorstenHahmann: @Till: but you can give individual models to a theorem prover. It is a 
	question how to come up with a good set of models to evaluate the axiomatization.    (3LLA)
	[11:11] TillMossakowski: OK, but this probably means that you have a set of intended models that is 
	more exemplary than exhaustive.    (3LLB)
	[11:11] FabianNeuhaus: @Till, Michael. It seems to me that Till has a good point. Especially if the 
	ontology and the set of axioms that express the requirements both have exactly the same models, it 
	seems that you just have two equivalent axiom sets (ontologies)    (3LLC)
	[11:12] TorstenHahmann: Yes, of course, the same as with software verification.    (3LLD)
	[11:12] TillMossakowski: indeed, but sometimes it might just be an implication    (3LLE)
	[11:15] TillMossakowski: further dimensions: consistency; correctness w.r.t. intended models (as in 
	Megan's talk), completeness in the sense of having intended logical consequences    (3LLF)
	[11:16] MeganKatsumi: @Leo: I'm not sure that I understand your question, can you give an example?    (3LLG)
	[11:03] LeoObrst: @Megan: what if you have 2 or more requirements, e.g., going from a 2-D to a 3-D 
	or 4-D world?    (3LLH)
	[11:17] PeterYim: == Q&A and Open Discussion ... soliciting of additional thoughts on Evaluation 
	Dimensions    (3LLI)
	[11:17] BobbinTeegarden: It seems we have covered correctness, precision, meeting requirements, etc. 
	well, but have we really addressed the 'goodness' of an ontology? And we certainly haven't addressed an 
	'elegant' ontology, or do we care? Is this akin to Jack's 'beauty' assessment?    (3LLJ)
	[11:17] BobSchloss: Because of the analogy we heard with Database Security Blackbox Assessment, I 
	wonder if there is an analogy to "normalization" (nth normal form) for database schemas. Are some 
	evaluation criteria related to factoring, simplicity, minimalism, straightforwardness...?    (3LLK)
	[11:19] TorstenHahmann: another requirement that I think hasn't been mentioned yet: granularity 
	(level of detail)    (3LLL)
	[11:21] LeoObrst: @Torsten: yes, that was my question, i.e., granularity.    (3LLM)
	[11:22] TorstenHahmann: @Leo: I thought so.    (3LLN)
	[11:22] MariCarmenSuarezFigueroa: I also think granularity is a very important dimension....    (3LLO)
	[11:19] BobSchloss: I am also thinking about issues of granularity and regularity ... If a program 
	wants to remove one instance "entity" from a knowledge base, does this ontology make it very simple 
	to just do the remove/delete, or is it so interconnected that removal requires a much more 
	complicated syntax....    (3LLP)
	[11:24] BobSchloss: Although this is driven by the domain, some indication of an ontology's rate of 
	evolution, degree of stability, or expected rate of change may be important to the organizations 
	using it. If one of two ontologies, by being very simple and universal, doesn't have as many 
	specifics but will be stable for decades, whereas the other, because it is very detailed and uses 
	concepts tied to current technologies and current business practices, may need to be updated every 
	year or two... I'd like to know this.    (3LLQ)
	[11:29] MatthewWest: Yes, stability is an important criterion. For me that is about how much the 
	existing ontology needs to change when you need to make an addition.    (3LLR)
	[11:24] MariCarmenSuarezFigueroa: Sorry I have to go (due to another commitment). Thank you very 
	much for the interesting presentations. Best Regards    (3LLS)
	[11:28] BobSchloss: Another analogy to the world of blackbox testing... the software engineers have 
	ideas of Orthogonal Defect Classification and more generally, ways of estimating how many remaining 
	bugs there are in some software based on the rates and kinds of discovery of new bugs that have 
	happened over time up until the present moment. I wonder if there is something for an ontology... 
	one that has a constant level of utilization, but which is having a decrease in reporting of 
	errors.... can we guess how many other errors remain in the ontology? Again... this is an 
	analogy.... some way of estimating "quality"...    (3LLT)
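	(Editor's note: one standard way software engineers make the estimate Bob describes is capture-recapture, the Lincoln-Petersen estimator: two reviewers inspect independently, and the overlap in their findings bounds the total defect population. A minimal sketch with invented counts, offered only as an analogy for ontology error estimation:)

```python
def lincoln_petersen(found_a, found_b, found_both):
    """Capture-recapture estimate of the total defect count.

    If reviewer A finds n1 defects, reviewer B independently finds n2,
    and m defects are found by both, the estimated population size is
    n1 * n2 / m (undefined when the overlap m is zero).
    """
    if found_both == 0:
        raise ValueError("no overlap between reviewers; estimate undefined")
    return found_a * found_b / found_both

# Example: A finds 24 errors, B independently finds 18, 12 are common:
# estimated total = 24 * 18 / 12 = 36. The union of findings is
# 24 + 18 - 12 = 30, so roughly 6 errors remain undiscovered.
print(lincoln_petersen(24, 18, 12))  # -> 36.0
```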
	[11:27] MichaelGruninger: @Fabian: It would be great if we could also focus on criteria and 
	techniques that people are already using in practice with real ontologies and applications.    (3LLU)
	[11:27] SteveRay: @Michael: +1    (3LLV)
	[11:28] FabianNeuhaus: @michael +1    (3LLW)
	[11:29] LeoObrst: Perhaps the main difference between Intrinsic -> Extrinsic is that at least some 
	of the Intrinsic predicates are also Extrinsic predicates with additional arguments, e.g., Domain, 
	Requirement, etc.?    (3LLX)
	[11:30] LeoObrst: Must go, thanks, all!    (3LLY)
	[11:31] PeterYim: wonderful session ... really good talks ... thanks everyone!    (3LLZ)
	[11:31] PeterYim: -- session ended: 11:30 am PST --    (3LM0)
	[11:31] List of attendees: AlanRector, AnatolyLevenchuk, AngelaLocoro, BobSchloss, BobbinTeegarden, 
	CarmenChui, DaliaVaranka, DonghuanTang, FabianNeuhaus, FranLightsom, FrankOlken, GaryBergCross, 
	JackRing, JoelBender, JohnBilmanis, KenBaclawski, LalehJalali, LeoObrst, MariCarmenSuarezFigueroa, 
	MaryBalboni, MatthewWest, MaxPetrenko, MeganKatsumi, MichaelGruninger, MikeDean, MikeRiben, 
	OliverKutz, PavithraKenjige, QaisAlKhazraji, PeterYim, RamSriram, RichardMartin, RosarioUcedaSosa, 
	SteveRay, TerryLongstreth, TillMossakowski, ToddSchneider, TorstenHahmann, TrishWhetzel, vnc2    (3LM1)
 -- end of in-session chat-transcript --    (3L39)

Additional Resources:    (3L3G)

For the record ...    (3L3N)

How To Join (while the session is in progress)    (3L3O)

Conference Call Details    (3L18)

Attendees    (3L25)