ppy/chat-transcript_unedited_20110217a.txt
Chat transcript from room: ontolog_20110217
2011-02-17 GMT-08:00
[09:17] PeterYim: . Welcome to the OntologySummit2011: Panel Session-4 (Track-3) "Value Metrics, Value Models and the Value Proposition" - Thu 2011_02_17
Summit Theme: OntologySummit2011: Making the Case for Ontology
Session Title: Value Metrics, Value Models and the Value Proposition - Take I
Session Co-chairs: Dr. ToddSchneider (Raytheon) & Mr. RexBrooks (Starbourne)
Panelists:
* Mr. RexBrooks (Starbourne) - "Introduction"
* Mr. KurtConrad (Sagebrush) - "Business Value Alignment to Support Ontology Development"
* Mr. RexBrooks / Mr. ChristianFillies (Semtation GmbH) - "Ontology Integration"
* Ms. MaryBalboni (Raytheon) - "Ontology Performance"
* Dr. JohnYanosy (Rockwell Collins) - "Ontology and Business Value"
* Dr. ToddSchneider (Raytheon) - "Ontology Use Maintenance"
.
[09:22] PeterYim: please refer to session details at: http://ontolog.cim3.net/cgi-bin/wiki.pl?ConferenceCall_2011_02_17
[09:26] SteveRay: All Hail Watson!
[09:28] NicolaGuarino: Hi everybody
[09:29] RexBrooks: Hi All
[09:30] anonymous morphed into BruceBray
[09:30] SteveRay: @Rex: Your voice is often fuzzy and faint. Is there something you can improve on your end?
[09:32] Pat Barkman: hi
[09:34] LeoObrst: Hi, all. My old bearded collie was named Watson, and so I was rooting against the humans. ;)
[09:34] Bob Smith: Hello, and Rex, your voice is still fuzzy
[09:35] anonymous morphed into Alex Mirzaoff
[09:39] SteveRay: Cheat sheet: *3 to un-mute; *2 to mute
[09:40] vnc2: session starts
[09:40] ToddSchneider: Rex needs to speak up.
[09:41] vnc2: == RexBrooks - introduction ==
[09:42] LeoObrst: @Rex: we can barely hear you!
[09:46] anonymous morphed into PavithraKenjige
[09:48] anonymous1 morphed into KurtConrad
[09:48] KurtConrad: I'm here
[09:48] SteveRay: Cheat sheet: *3 to un-mute; *2 to mute
[09:49] SteveRay: Can't hear you, Kurt
[09:49] SteveRay: @Kurt: Very faint and fuzzy
[09:50] TerryLongstreth: @Kurt: please speak directly into the microphone
[09:51] anonymous morphed into RamGouripeddi
[09:51] SteveRay: @Kurt: Almost unintelligible
[09:52] AlanRector: louder please
[09:52] Pat Barkman: yes ... sound is muffled
[09:53] RexBrooks: Can't hear Kurt.
[09:54] MichaelGruninger: @Kurt: What are the agents in "agent-specific alignments"?
[09:55] YuLin: Peter's voice is perfect
[09:56] PavithraKenjige: Actually I could hear you, but the volume was a little low
[09:56] MichaelGruninger: thanks!
[09:56] anonymous morphed into aJohn Yanosy
[09:56] aJohn Yanosy morphed into John Yanosy
[09:56] SteveRay: We're on slide 4
[09:58] RexBrooks: Kurt is getting faint again.
[09:59] PeterYim: please speak up and slow down if you can, Kurt
[09:59] SteveRay: Maybe it's just me, but if I were a business person trying to decide whether to invest in an ontological approach, I would be very lost by now.
[10:01] John Yanosy: This slide is very informative with respect to understanding knowledge sources; not sure how to incorporate tacit knowledge into ontologies
[10:01] Pat Barkman: slide 8?
[10:02] SteveRay: I think he means slide 7
[10:03] Pat Barkman: OK ... I see he's on slide 7 content
[10:04] SteveRay: Really clear, Rex!
[10:04] SteveRay: Go ahead. We can use the downloaded version as well.
[10:06] anonymous morphed into Sarah Goldman
[10:06] LeoObrst: @Kurt: there is also a technical notion of "dynamic semantics" for natural language semantics that goes back to Kamp and Heim in the early 1980s and Groenendijk and Stokhof in 1990-1991, and that has subsequently been developed by others.
[10:08] Pat Barkman: novice here ... anyone want to state what "common tool" he's talking about? Semaphore/Smartlogic?
[10:09] SteveRay: Not sure why the name wasn't mentioned. It could have been a UML tool like Enterprise Architect, MagicDraw, or Rational Rose.
[10:09] Pat Barkman: ahhh ... thanks Steve
[10:13] SteveRay: Still anxious to start hearing suggested metrics ... cost? development time? capability? system performance?
[10:15] Pat Barkman: I have the same questions, Steve ... cost/benefit breakdowns
[10:15] SteveRay: By the way, Todd was very clear just now on the phone.
[10:17] RexBrooks: @Pat Sorry, SemTalk uses Visio but adds semantic capabilities to it. I happen to be involved with SemTalk USA and wanted to avoid the appearance of attempting to sell a product.
[10:17] Pat Barkman: thanks Rex
[10:19] SteveRay: We're on slide 4
[10:19] RexBrooks: @Steve, since I knew there was all that coming, I felt free to deal with expectations and to focus on the Value Proposition more than the Value Metrics and Models.
[10:19] RexBrooks: @Steve, don't worry, it's coming.
[10:19] SteveRay: OK
[10:24] RexBrooks: @Pat, as far as modeling tools go, SemTalk makes Visio into a real modeling tool, at least for BPMN and several other specific kinds of specialized analytics. It can output XMI, which allows me to import it into Enterprise Architect, which then gives me all the UML I need and then some. Both SemTalk and EA output images of diagrams, generate code, and have dedicated back-end databases. I'm still working on getting some of the conceptualizing into Compendium.
[10:24] ToddSchneider: Steve, are we getting closer?
[10:24] RexBrooks: hehe
[10:25] Pat Barkman: @Rex ... thanks, that helps.
[10:25] PeterYim: @MaryBalboni - ref. your slide#5, is there a maturity model for ontology that you are using currently?
[10:25] ToddSchneider: Peter, I don't think so.
[10:25] Pat Barkman: ... and yes, I'm luvin' the MA here - great job, Mary!
[10:27] RexBrooks: @Peter I am pretty sure there isn't one that has much support yet.
[10:29] PeterYim: ref. maturity models, it may be nice to compare what is out there ... e.g. ref. LeoObrst's OMM, which he posted to the [ontology-summit] list yesterday (http://ontolog.cim3.net/forum/ontology-summit/2011-02/msg00061.html) - which is at: http://ontolog.cim3.net/file/work/OntologySummit2011/reference/ontologyMaturityModel-obrst-2009.pdf
[10:29] SteveRay: @Todd: Yes, I'm breathing a bit easier. So far I'm hearing Performance and Cost, with some good expansion of each. The reason I'm asking is that I'm thinking of how this will integrate with the other tracks - Use Cases and Application Framework.
[10:29] ToddSchneider: Steve, understood.
[10:31] SteveRay: Not sure I would conflate risk and performance ...
[10:32] John Yanosy: great job
[10:32] SteveRay: Great talk by Mary - brings out some good metrics.
[10:33] Mary Balboni, Raytheon: thank you
[10:33] Pat Barkman: yes ... I think that's a jump as well, Steve, but given the maturity/prevalence of ontology implementations ... it's a fair stepping stone in MA
[10:33] Ramdsriram: Are there any case studies (with data) out there which describe how ontologies improved system performance?
[10:33] ToddSchneider: Steve, I'm back - I hit the wrong button.
[10:33] John Yanosy: Todd, I will have to be leaving soon for a customer meeting, in approximately 15 minutes
[10:33] ToddSchneider: All, we have to save questions. John Yanosy has a hard stop time.
[10:34] RexBrooks: It's almost time to introduce John.
[10:34] PeterYim: @Todd ... are you doing Q&A after all the panelists have finished their presentations?
[10:34] Pat Barkman: thanks Mary ... great job ... will be referencing your work in mine, so thanks
[10:35] PavithraKenjige: thank you Mary
[10:35] Mary Balboni, Raytheon: thanks everyone - glad to be part of this group
[10:37] ToddSchneider: Peter, yes.
[10:38] LeoObrst: Concerning ONTOCOM, which is basically an ontology cost model: it identifies as cost drivers Building, Reuse, Personnel, and Project, with sub-categories for each of these. Example - Building: Domain Analysis Complexity, Conceptualization Complexity, Implementation Complexity, Instantiation Complexity, Required Reusability, Documentation Needs, etc. The project has also developed a spreadsheet with these factors, and one can compute the estimated cost based on one's own data and the factor weights.
[10:38] SteveRay: I interpret John's presentation as addressing Capability as the metric in question.
[10:40] SteveRay: It's very hard to quantify Capability in the abstract, but in specific examples capabilities could be enumerated.
[10:40] ToddSchneider: John is providing an operational view.
[10:41] RexBrooks: John's work is particularly useful in the SOA context, especially the emerging ecosystem view.
[10:42] RexBrooks: But the evaluation metrics need to be fleshed out a bit.
[10:42] SteveRay: Slide 4 has some good raw material for metrics.
[10:43] Alex Mirzaoff asked for a victim, I choose... BruceBray
[10:43] RexBrooks: It's difficult to measure inferencing except for accuracy, e.g. internal consistency with its own definitions.
[10:44] ToddSchneider: Rex, the performance of inferencing should be measurable.
[10:44] Pat Barkman: Thanks John
[10:44] RexBrooks: We'll get there eventually, Todd.
[10:45] Bobbin Teegarden: Is part of the value of inferencing in code NOT written (and the associated 'costs')?
[10:46] John Yanosy: You're welcome, and sorry for not being able to provide more detail, but I will be posting more material on the wiki, and I think the previous metrics would be interesting to apply to these business areas. Great job, and I will be more active in the future. Goodbye and thanks
[10:46] Mary Balboni, Raytheon: Thanks Todd - :)
[10:47] PeterYim: == ToddSchneider presenting ==
[10:50] SteveRay: @Todd: Just because the consequence of error is different, it doesn't follow that the METRIC is different, just that the value and weighting of that metric may be different.
[10:52] RexBrooks: There are some real misunderstandings about the differences between qualitative and quantitative metrics and the relationship between them, but that's almost a topic of its own. I come from an advertising background, and we had to provide quantitative measures to satisfy the customer's need to rationalize, while appealing to unstated qualitative factors that are often the actual driving motivation to purchase or not in the marketplace.
[10:52] LeoObrst: As part of the DARPA HPKB (High Performance Knowledge Bases) and RKF (Rapid Knowledge Formation) programs - large ontology integration efforts to solve a command and control/situational awareness problem - the Program Manager Murray Burke (and before him, Dave Gunning) tried to capture ontology axiom reuse metrics.
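A minimal sketch of the kind of calculation Leo's ONTOCOM note above points to, assuming a COCOMO-style multiplicative cost model; the function name, size figure, driver names, and multiplier values are illustrative placeholders, not ONTOCOM's calibrated weights (those live in the project's own spreadsheet):

    # ONTOCOM-style parametric effort estimate (sketch only, uncalibrated).
    # Assumed form: effort = alpha * size**beta * product(cost-driver multipliers)
    from math import prod

    def estimate_effort_person_months(size_kilo_entities, cost_drivers, alpha=2.5, beta=1.0):
        """size_kilo_entities: ontology size in thousands of classes/properties/axioms.
        cost_drivers: dict of multipliers; 1.0 is nominal, >1 raises effort, <1 lowers it."""
        return alpha * (size_kilo_entities ** beta) * prod(cost_drivers.values())

    # Hypothetical ratings for a few of the driver categories Leo lists:
    drivers = {
        "domain_analysis_complexity": 1.2,    # Building
        "conceptualization_complexity": 1.1,  # Building
        "required_reusability": 1.15,         # Reuse
        "ontologist_capability": 0.85,        # Personnel
        "tool_support": 0.95,                 # Project
    }
    print(f"{estimate_effort_person_months(1.5, drivers):.1f} person-months")  # ~1500 ontology primitives

The point of the sketch is only that each driver enters as a multiplier, so a single unfavorable factor scales the whole estimate rather than adding a fixed cost.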
[10:54] Bobbin Teegarden: @Leo Results of that reuse work? Any references online?
[10:56] MikeBennett: Does this mean that it would be possible to define a quantitative difference between ontologies with lots of "equivalent class" links versus ontologies which make use of high-level patterns or archetypes? Could this make the case for better use of sharing and integration ontologies? Just a thought.
[10:56] SteveRay: OK. So far I have heard the following classes of metrics: Cost, Capability, Performance, Quality, System Complexity. Any others?
[10:58] SteveRay: People can un-mute themselves. *3 to un-mute; *2 to mute
[10:58] Mary Balboni, Raytheon: Maturity related to the depth of the model and the breadth of validation ...
[11:00] SteveRay: @Mary: Wouldn't maturity manifest itself to a customer in terms of capability? In other words, I would put the maturity of a model as a property that feeds into the capability metric.
[11:01] SteveRay: @Mary: Put another way, I would imagine that a customer isn't as interested in the maturity of an embedded ontology itself, but rather in how that might affect the performance of their system.
[11:01] Mary Balboni, Raytheon: @SteveRay - more capability may be part of maturity - or it could be more correctness while operational --- also had the thought that Complexity can perhaps be measured like the old Halstead measures ...
[11:02] SteveRay: @Mary: OK. I accept that model maturity can manifest itself through several classes of metrics, including correctness (which I think of as a specialization of the Quality metric).
[11:03] Mary Balboni, Raytheon: Operators and operands were counted, and an assumption was made about how complex code was based on those numbers - not sure if an ontology could be dissected in such a way
[11:03] PeterYim: unlike other maturity models - the Capability Maturity Model for Software Engineering (SEI CMM), for example - in ontology, "more mature" may correlate with "more sophisticated, and a better grasp of semantics (stronger semantics) in the system" and may not correlate directly with performance, much less effectiveness, or even whether it is appropriate. ... Therefore: (a) mapping metrics to the application framework may be necessary, and (b) different metrics should be expected at different levels of Ontology Maturity (as how the applications are implemented would be radically different).
[11:03] SteveRay: I'm trying to separate in my own mind the distinction between intrinsic measures of an ontology versus extrinsic metrics of business value, which will be the ones that a customer or decision maker will be evaluating when being pitched.
[11:04] RexBrooks: Just to let you know, I'm not hands-free if I want to be able to be heard when I need to respond to a question. It's like having one hand tied behind your back, and it definitely degrades my performance. ;)
[11:06] RexBrooks: I'd be interested to hear how people think we can measure inferencing.
[11:06] Alex Mirzaoff: by success of the inference?
[11:07] SteveRay: @Rex: Also by maximum compute time
[11:07] RexBrooks: What happens if the inference extends over different systems that may have different definitions in different cases?
[11:07] PavithraKenjige: How are these different from the system development life cycle?
[11:08] Bobbin Teegarden: Measure the value of inference in terms of the equivalent code it would take to do the same thing (as a 'savings' or negative cost)?
[11:09] Mary Balboni, Raytheon: cost avoidance
[11:09] Bobbin Teegarden: No, genuine 'savings'?
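Mary's Halstead suggestion above can be made concrete with a small sketch. The mapping is an assumption, not something stated in the session: treat logical constructors as the "operators" and named classes, properties, and individuals as the "operands", then apply the standard Halstead formulas:

    # Halstead-style counting applied, hypothetically, to ontology axioms.
    # Assumed mapping: constructors (subClassOf, equivalentClass, ...) = operators,
    # named classes/properties/individuals = operands.
    from collections import Counter
    from math import log2

    def halstead(operator_occurrences, operand_occurrences):
        """Both arguments are token lists, one entry per occurrence."""
        ops, opnds = Counter(operator_occurrences), Counter(operand_occurrences)
        n1, n2 = len(ops), len(opnds)                    # distinct operators / operands
        N1, N2 = sum(ops.values()), sum(opnds.values())  # total occurrences
        n, N = n1 + n2, N1 + N2
        volume = N * log2(n) if n > 1 else 0.0
        difficulty = (n1 / 2) * (N2 / n2) if n2 else 0.0
        return {"vocabulary": n, "length": N, "volume": volume,
                "difficulty": difficulty, "effort": difficulty * volume}

    # Toy axioms: Person subClassOf Agent; Driver equivalentTo (Person and drives some Vehicle)
    operators = ["subClassOf", "equivalentClass", "intersectionOf", "someValuesFrom"]
    operands = ["Person", "Agent", "Driver", "Person", "drives", "Vehicle"]
    print(halstead(operators, operands))

Whether such numbers track anything a customer cares about is exactly the open question Mary raises; the sketch only shows that the counting itself is mechanical.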
[11:09] RexBrooks: @Bobbin: Excellent. Never thought of that. Point is, we need lots of use cases to check against.
[11:11] MikeBennett: On CMM, there is also a "Data Management Maturity Model" in development by the EDM Council, in collaboration with Carnegie Mellon as owners of the CMM model. This is still in the very early stages of development, so there is potential to provide input to it with metrics for ontology maturity if/when these are defined.
[11:11] Pat Barkman: but I think there are intrinsic benefits in using CMMI (MA/PPQA) in a before/after comparison ... an existing implementation that gets an ontology added to the architecture
[11:11] SteveRay: I completely concur with the points just made by Michael Gruninger.
[11:12] PeterYim: +1
[11:13] RexBrooks: +2
[11:14] PeterYim: @MichaelGruninger - would you capture your point here on the chat for archival purposes, please
[11:14] RexBrooks: @Pat: Yes, seeing the difference in those before-and-after results would lead to new insights, I'm sure.
[11:15] AlanRector: I was just cut off. Is the line dead?
[11:15] LeoObrst: @Bobbin: yes, some of it is online. See the paper: Schrag, Robert, Mike Pool, Vinay Chaudhri, Robert C. Kahlert, Joshua Powers, Paul Cohen, Julie Fitzgerald, and Sunil Mishra, "Experimental Evaluation of Subject Matter Expert-oriented Knowledge Base Authoring Tools."
[11:15] SteveRay: @Alan: No, we're still live.
[11:15] PeterYim: no ... the rest of us are still in conference, Alan
[11:15] MichaelGruninger: Since we want to demonstrate the benefits of ontologies, let's first consider the benefits in a software application. We can leverage the existing approaches to software engineering by considering functional and nonfunctional requirements. For functional requirements, we need to demonstrate that ontologies can be used to deliver new functionalities. For nonfunctional requirements, we can use existing software metrics such as performance, cost, quality, and maintenance. In each case, we want to compare an application without an ontology and an application with an ontology, and show that there is an improvement.
[11:17] RexBrooks: Peter just put it up on the vnc!
[11:17] Bobbin Teegarden: There must be some way to capture the wider comprehension, collaborative common interactions, group understanding, ease of extension ... some of the things that make the use of ontologies truly a step forward.
[11:17] RexBrooks: Leo's maturity model.
[11:17] Mary Balboni, Raytheon: DoDAF is also expanding its framework to concentrate on data, such as the CMMI-DM - I have not explored whether the new DoDAF version addresses ontology/semantics in detail
[11:18] MichaelGruninger: On the other side, there are costs associated with using ontologies within an application, and perhaps these are not completely covered by the analogy to software engineering
[11:18] RexBrooks: @Mary: Slowly pulling teeth along the way -- I participate in the DoDAF Metamodel 2 WG.
[11:19] TerryLongstreth: Leo's is a very valuable start, but it doesn't address cost/value
[11:20] ToddSchneider: Michael, one of those out-of-band costs/risks is the availability of experienced people to do the work
[11:20] AlanRector: For us, the notions of sustainability and persistence are more relevant than "maturity" - or perhaps they are some of the relevant metrics of maturity for ontologies.
[11:20] Mary Balboni, Raytheon: @Rex DoDAF is a late bloomer in Data Modeling .. :)
[11:20] RexBrooks: @Mary: Yup!
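One concrete, repeatable number for Rex's earlier question about measuring inferencing, in the spirit of Steve's "maximum compute time" suggestion and Michael Gruninger's reuse of nonfunctional software metrics, is simply the wall-clock cost of a reasoning run. A minimal sketch, assuming the owlready2 package (which drives the HermiT reasoner and requires Java on the path); the ontology file name is a hypothetical placeholder:

    # Time repeated classification runs over an ontology (sketch; owlready2 + HermiT assumed).
    import time
    from owlready2 import get_ontology, sync_reasoner

    def classification_times(ontology_iri, runs=3):
        """Load the ontology once, then time each classification run."""
        onto = get_ontology(ontology_iri).load()
        timings = []
        for _ in range(runs):
            start = time.perf_counter()
            with onto:
                sync_reasoner()  # run the DL reasoner; later runs may be faster once inferences are asserted
            timings.append(time.perf_counter() - start)
        return timings

    if __name__ == "__main__":
        times = classification_times("file:///tmp/example-ontology.owl")  # hypothetical path
        print(f"best {min(times):.2f}s, worst {max(times):.2f}s over {len(times)} runs")

The worst case across runs corresponds to the "maximum compute time" figure; accuracy-style checks such as Rex's internal-consistency point would need a separate test harness.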
[11:21] MichaelGruninger: Did the software engineering community ever do an analysis of the benefits of using object-oriented approaches to software design? We are perhaps facing an analogous problem ...
[11:21] SteveRay: @Todd: Your point raises the additional metric of Risk (in this case, the risk of not being able to maintain the system over time, for example).
[11:22] Yefim (Jeff) Zhuk: Mary, did you start using ontologies, or are you in the development stage?
[11:23] AlanRector: One key step for our cases is when the primary identifiers in an ontology move from text identifiers, which inevitably change and are language specific, to "nonsemantic IDs", and when there is a sensible ID management scheme in place with the display names in annotations (usually rdfs:label or one of the skos:label family).
[11:24] Mary Balboni, Raytheon: @MichaelG There have been OO studies in SW Development - Rational may have study papers .. probably biased .. but the OO giants in industry are at Rational
[11:25] Mary Balboni, Raytheon: @Yefim Jeff Not at the moment implementing an Ontology, but interested in its usefulness in an IA domain
[11:26] AlanRector: Maturity of "ontology technology" rather than of a specific ontology?
[11:28] MikeBennett: Mills did mention one or two case studies where there were measured costs of doing it the hard way and then using ontologies. However, most of our Case Studies to date have not included explicit metrics.
[11:30] SteveRay: Probably about time to wrap things up ...
[11:31] LeoObrst: ONTOCOM: http://ontocom.sti-innsbruck.at/
[11:31] Yefim (Jeff) Zhuk: Thanks!
[11:31] Pat Barkman: Productive and informative ... great session, thanks all
[11:31] SteveRay: Thanks for a good session!
[11:31] Mary Balboni, Raytheon: thanks to all!
[11:33] PeterYim: folks from Canada did have some hard numbers - ref. http://ontolog.cim3.net/forum/ontology-summit/2011-02/msg00017.html & the 2nd presentation at: http://ontolog.cim3.net/forum/ontology-summit/2010-12/msg00002.html
[11:33] PeterYim: Great session ... thank you!
[11:33] PeterYim: -- session ended: 11:32am PST --