ontolog-forum

Re: [ontolog-forum] Next steps in using ontologies as standards

To: "'[ontolog-forum] '" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "Patrick Cassidy" <pat@xxxxxxxxx>
Date: Tue, 6 Jan 2009 23:45:37 -0500
Message-id: <024301c97082$c8e0b9c0$5aa22d40$@com>

A brief reply to two of Neil’s points:

>> [NC] It seems illogical to me to try to capture all knowledge in a single ontology, just as it is ridiculous to capture all facts about a domain in a single flat-file "database".

 

[PC] Agreed.  That’s why I am suggesting that the most productive way to think of a foundation ontology is as the set of primitives that can be used to specify the meanings of the ontology elements used in any application.  The applications can be divorced from each other, highly specialized, and highly diverse.  But when the applications need to share stored information, the description of that information using a common “defining vocabulary” (the foundation ontology) will allow the machines to automatically and accurately interpret the intended meaning of the information, and use it appropriately.  The set of primitives will be very much smaller than the number of datatypes or relations described using those primitives.
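The "defining vocabulary" idea above can be sketched in a few lines of code: a small shared set of primitives, with each application free to define its own specialized terms so long as every definition bottoms out in those primitives. All names here (the primitives, the terms, the two applications) are invented for illustration; no real foundation ontology is this small or this flat.

```python
# Hypothetical sketch: a foundation ontology as a small set of shared
# primitives, with each application's terms defined by composing them.

PRIMITIVES = {"PhysicalObject", "Process", "hasPart", "precedes", "Liquid"}

# Two independent, highly specialized applications define their own terms,
# but each definition uses only the shared primitives.
app_a_terms = {
    "Pump": {"PhysicalObject", "hasPart", "Process"},
    "Pipeline": {"PhysicalObject", "hasPart", "Liquid"},
}
app_b_terms = {
    "TransferStep": {"Process", "precedes", "Liquid"},
}

def grounded(term_defs, primitives):
    """True if every term's definition uses only the shared primitives."""
    return all(definition <= primitives for definition in term_defs.values())

assert grounded(app_a_terms, PRIMITIVES)
assert grounded(app_b_terms, PRIMITIVES)
```

The point of the sketch is only the asymmetry of scale: the primitive set stays fixed and small while the term sets built from it can grow without bound, which is what lets two divorced applications still interpret each other's data.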

 

>> [NC] - With so many viewpoints of an ontology's construction and purpose: Pick one benefit and push the construction methodology to the limit to further that particular benefit--perhaps some other natural benefits may fall out as side effects.

[PC] This relates to the need for some “killer app” to demonstrate an ontology’s utility.  I think that one such application would be a good interactive natural-language interpreter that can be used for various purposes, such as an NLP interface to data, help functions, and semantic search.  The number of potential applications for a common computer language may be comparable to the number of ways people use human languages.  But developing a natural-language understanding system may well take very much more effort than developing a common foundation ontology, so it is not likely to *precede* development of a common FO.

     Interestingly, having a common FO could greatly accelerate the development of a powerful natural-language interpreter, by serving as the common standard of meaning through which the different modules of an interpreter communicate the results of text processing by many different methods.  Statistical and rule-based components could exchange results, as could highly specific “experts” tuned to particular words, phrases, or grammatical structures.  With a common foundation ontology as the standard of meaning, multiple groups could collaboratively develop a system that exceeds the capabilities of any one group, evolving a language-understanding system from some promising starting point through incremental improvements to replaceable modules.  But for such a process to work, the foundation ontology has to come *first*; only then can the app be created.  The same process could permit collaborative, cumulative development of other knowledge-based systems.  The absence of such a standard impedes efforts to use the collaborative technique, because *some* standard for representing information needs to be used, and in the absence of an existing broadly used standard, local groups have to first coordinate on which standard to adopt for each particular project.  This occurred in the CALO project: the Porter ontology used there is competent, but idiosyncratic.  Who else uses it, other than the participants of that project?

 

    Another “killer app” is database integration.  Cycorp is making most of its money these days from such a project (commercial, not government-funded).  Ontology Works will do integration via an ontology for you, and so will Adam Pease, using SUMO.  The problem is that retrospectively mapping different databases to each other, even via an ontology, is very expensive, and is cost-justified only in some cases.  To properly understand the value of agreeing on a common foundation ontology *first*, one needs to take a longer view than worrying about retrofitting existing databases.  In twenty years, it is likely that the majority of databases then in use will have been built after 2009.  So proactively developing databases whose data elements are described ontologically by a common foundation ontology will allow all users of such DBs to share their data accurately with each other, at no more expense than the traditional method of creating new schemas.  At that point, legacy DBs created before the standard can still be integrated, but at a price that may not be justified.  So we can envision a “triage” situation: newly created databases interoperate from the start via the FO; important legacy databases are mapped to the FO; and legacy databases whose need for interoperability is too small to justify the cost will never interoperate – at least until they are refactored, at which point they may be integrated for little additional cost, once there is experience using the FO for DB integration.  We have been discussing interoperability issues for over fifteen years, so taking a 20-year perspective makes a lot of sense.  But again, for this scenario to play out, the foundation ontology will have to come *first*.
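The economics of FO-mediated integration can be made concrete with a toy sketch: each schema is mapped once to shared FO concepts, after which any two mapped databases can exchange records without a pairwise mapping. The schemas, column names, and concept names below are invented for illustration.

```python
# Hypothetical sketch of FO-mediated integration: each database maps its
# columns to shared FO concepts once; translation between any two mapped
# schemas then goes through the FO. All identifiers are invented.

db1_to_fo = {"cust_nm": "PersonName", "dob": "BirthDate"}
db2_to_fo = {"name": "PersonName", "birth_date": "BirthDate"}

def translate(record, source_map, target_map):
    """Rename a record's fields via the FO concepts both schemas map to."""
    fo_record = {source_map[col]: val for col, val in record.items()}
    inverse_target = {fo: col for col, fo in target_map.items()}
    return {inverse_target[fo]: val for fo, val in fo_record.items()}

row = {"cust_nm": "Ada", "dob": "1815-12-10"}
print(translate(row, db1_to_fo, db2_to_fo))
```

The payoff is the usual hub-and-spoke arithmetic: n schemas need n mappings to the FO, instead of n*(n-1)/2 pairwise mappings, and a mapping written when the database is built costs far less than a retrofit.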

We can solve the chicken-and-egg problem by noting that dinosaurs were laying eggs long before they evolved into chickens, so actually the egg came first.  And I believe that the foundation ontology will have to come *first*, before the killer app is developed.

 

  In any case, considering the enormous ongoing waste due to the lack of any common standard of meaning for information, spending $30 million on a serious effort to explore one possible solution seems like a very worthwhile investment.  And if other direct approaches (not waiting for some evolution over decades or centuries) are proposed, they should be funded too.

 

Pat

 

Patrick Cassidy

MICRA, Inc.

908-561-3416

cell: 908-565-4053

cassidy@xxxxxxxxx

 

From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Neil Custer
Sent: Tuesday, January 06, 2009 6:26 PM
To: [ontolog-forum]
Subject: Re: [ontolog-forum] Next steps in using ontologies as standards

 

Dr. Tolk, As an aside, I am typically very optimistic about the advance of technology in general.  I've worked for the federal government (military) for 28+ years in IT.  In my opinion, if you are waiting for "... strong leadership by managing organizations .... in particular government organizations when it comes to spending tax dollars to the maximal benefits of the people, and not just one project.", then you are a bigger optimist than anyone on the forum.  I agree that leadership and knowledge among PMs and IT leadership in the government have been questionable at best for a long, long time.  Perhaps someone here thinks Obama's new government CTO will be our messiah... but I doubt it ;-)

To the rest of the forum:  I have been standing on the sidelines in the forum long enough to see that even the most respected ontology experts such as yourselves can't agree on the right direction to find the holy grail of knowledge capture and reuse.  My humble opinions rarely get even a grumble (perhaps because I'm not a Philosophy PhD who has built ontologies for years, but don't fault me for that--I can still follow your discussions), but I think intelligent people with original ideas can spark a solution if those with the implementation intelligence will listen.  Having said that, I agree with Mr. Wheeler's supposition that there has been little in the "killer app" department to show the true benefits of a foundation ontology, and when even a community in a field as advanced as the life sciences has trouble agreeing on ontological formats, primitives, and what have you, then perhaps the approach needs to be rethought.

Two ideas I'll float to see if they make any sense whatsoever:

- With so many viewpoints of an ontology's construction and purpose: Pick one benefit and push the construction methodology to the limit to further that particular benefit--perhaps some other natural benefits may fall out as side effects.

- Determine a way to express the ontology construction aspect as an ontological type based on its purpose/benefit.  Then determine methods for these to interact (or more particularly, describe the relationships between them).  It seems illogical to me to try to capture all knowledge in a single ontology, just as it is ridiculous to capture all facts about a domain in a single flat-file "database".  My thinking is that when a single ontological discourse can be captured in something as basic as a table in a database and can be related to other tables in a knowledge domain as easily as building primary keys between tables in a database, then the ability to use the information contained in a set of domain ontologies will take off at an unbelievable pace.
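The second idea — domain ontologies captured as relatable tables, joined across domains the way primary keys join tables — can be sketched with an ordinary relational database. The two "domain" tables and their rows below are invented for illustration; this is only a toy analogy, not a proposed ontology format.

```python
import sqlite3

# Hypothetical sketch of the tables-with-keys idea: one table per
# ontological discourse, with cross-domain relationships expressed as
# foreign keys, so an ordinary join answers a cross-domain question.
# Schema and rows are invented for illustration.

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE anatomy (id INTEGER PRIMARY KEY, term TEXT);
CREATE TABLE disease (id INTEGER PRIMARY KEY, term TEXT,
                      site_id INTEGER REFERENCES anatomy(id));
INSERT INTO anatomy VALUES (1, 'liver');
INSERT INTO disease VALUES (10, 'hepatitis', 1);
""")

# Which anatomical site does each disease affect? A plain join answers it.
row = conn.execute("""
    SELECT d.term, a.term
    FROM disease d JOIN anatomy a ON d.site_id = a.id
""").fetchone()
print(row)  # ('hepatitis', 'liver')
```

Real ontology relations carry logical semantics (subsumption, part-of) that foreign keys do not, so this sketch captures the ease-of-linking point rather than the full expressivity question.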

I've been exposed to teams building enormous XML schemas intended to model all possible uses of all the data they may want to exchange in an enterprise, and the end result is so floppy that it is basically meaningless in terms of the descriptive capabilities of XML.  I perceive that a similar situation has arisen in this forum: the attempt to find an ontology approach that meets every knowledge engineer's needs is hitting the same conundrum.

On Tue, Jan 6, 2009 at 2:47 PM, Tolk, Andreas <atolk@xxxxxxx> wrote:

PRIMITIVES OF MEANING

I think that this question of "primitives of meaning" is very important, although I do not believe that we will be able to identify such "atomic expressions of meaning" easily. One of the problems is that such atomic expressions may not represent concepts but will only be properties of concepts that gain their meaning from the contexts in which they are used. This leads to the challenge of multi-scope, multi-resolution, and multi-structure models. While the challenges of differences in scope (different concepts are represented) and resolution (different levels of resolution are used) are self-explanatory, the challenge of structure is often not perceived.
Chuck Turnitsa and I introduced the example of number- and letter-world. Given, e.g., four properties A1, A2, B1, and B2, number-world uses numbers as the identifying category and uses A1 and B1 to identify concept "1" and A2 and B2 to identify concept "2". Letter-world identifies concept "A" using A1 and A2 and concept "B" using B1 and B2. While both worlds have different concepts, they use the same properties to characterize them. For number- and letter-world, the primitives of meanings are these properties.
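The number-world / letter-world example can be stated directly in code: both models are built from exactly the same four properties, yet no concept of one model coincides with any concept of the other.

```python
# The number-world / letter-world example: the same four primitive
# properties, grouped into different concepts by each model.

properties = {"A1", "A2", "B1", "B2"}

number_world = {"1": {"A1", "B1"}, "2": {"A2", "B2"}}
letter_world = {"A": {"A1", "A2"}, "B": {"B1", "B2"}}

# Both models are built from exactly the same primitives...
assert set().union(*number_world.values()) == properties
assert set().union(*letter_world.values()) == properties

# ...yet no concept of one world matches any concept of the other.
assert not any(c1 == c2
               for c1 in number_world.values()
               for c2 in letter_world.values())
```

This is the structural mismatch in miniature: agreement on the primitives does not by itself yield agreement on the concepts composed from them.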
The problem arises when we add more models that may introduce additional levels of resolution. What if in a third model the resolution is higher and the properties become concepts? These observations motivated the thesis that primitives of meaning are context-specific and comparable to the idea of the "highest common factor / greatest common divisor." If this thesis is true, the primitives of meaning are not easily standardisable in multi-resolution environments: they are valid in a federation of models only as long as no model with a higher resolution is introduced. We introduced the idea of a common reference model that is enhanced (increasing the resolution of properties) or extended (adding new properties) using engineering principles. I guess this comes very close to the idea of a "foundation ontology."

Long story short: I really believe that this is an interesting question and needs to be evaluated with rigor. We may find that there is no general solution, but many practically applicable special solutions, as pointed out by Pat. I agree.

KILLER APPS AND STANDARDS AND CONSTRAINTS

Working in the military Command and Control realm for several years - and in support of multiple nations - my perception may be blurred by some business-domain-specific constraints, but my experience is that without incentives supporting the use of a standard, or real disadvantages for not using it (or both), industry partners will always try to bring in their special and often proprietary solutions.
Real education is needed, in particular for project managers and their managers. As rightfully pointed out in this discussion before: if only the community benefits, but the contributing projects pay (without incentive), it is not going to happen.
I have become a pessimist regarding self-emerging standards ... I think that strong leadership by managing organizations is needed, which includes in particular government organizations when it comes to spending tax dollars to the maximal benefits of the people, and not just one project.

==================== ;-)
Andreas Tolk, Ph.D.
Associate Professor Engineering Management & Systems Engineering
Old Dominion University, Norfolk, VA 23529
Voice 757-683-4500 Fax 757-683-5640

 


_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx
