
Re: [ontolog-forum] Relating and Reconciling Ontologies

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Ali Hashemi <ali@xxxxxxxxx>
Date: Fri, 22 Apr 2011 13:34:09 -0400
Message-id: <BANLkTikqH4zF3Lo3p+a82Uw8cCR09PHUyw@xxxxxxxxxxxxxx>
Dear Barry, Alexander and all,

I'd like to suggest that we take a step back and reconsider an implicit assumption in what spawned this entire discussion. It began with Brand Niemann quoting a normative statement made by Barry Smith: "Too many ontologies are creating semantic silos; we need only one ontology per domain, and can only afford one."

This comment elicited a reaction from Alexander Garcia Castro, questioning the wisdom of such a mandate. Barry countered by suggesting that it would be technologically challenging, but above all too resource-draining ("afford"), to manage n-squared mappings for a set of n ontologies growing "without limit."

I would suggest that this is a problematic description of the problem space. Questioning the wisdom of a single, unique, mandated ontology is not the same as arguing for a set of ontologies growing "without limit" - this is simply a false choice. 

Before offering an alternate characterization of the problem space, I'd like to make explicit a few notable issues here.

Mandate Away Need?
The first has been discussed a bit already -- namely, the problems one might encounter in enforcing a mandated, unique ontology for a field. If there are groups operating within a field with irreconcilable differences in their core ontologies, do we really want to go down the route of mandating that one is necessarily privileged over the other? Jack, Ron and others have weighed in on this matter, so I will leave it be.

Let us consider more closely the rationale that underlies this value judgement -- namely, that it might be too technically difficult or costly to accommodate more than one ontology in a field. This seems to be judging the value of an ontology by the wrong (or incomplete) measure. To be fair, reconciling opposing views in legacy software or database systems is costly and cumbersome; however, those systems were not created with the explicit intent of semantic sharing and interoperability.

Taking the previous position to its logical extreme, we might find ourselves throwing out views or results that are inconsistent with the core assumptions of the field. It might make sense to do so (or to give up those premises) in well-developed, well-understood and mature fields, but it seems highly irresponsible otherwise. One can envision many scenarios where basic observations can be interpreted in multiple, mutually inconsistent ways, and where the field is too nascent, too unclear or what have you to confidently pick one over the other. We should be able to accommodate such scenarios, and mandating unique ontologies sells us short.

Rather, we should evaluate the competing interpretations (ontologies) by their actual fidelity -- their ability to correctly represent and model the part of the world at hand -- not primarily on the basis of an (unfounded, I would claim!) assertion that it might be too difficult to reconcile views across the inconsistencies.

Is Growth Limitless?
The second problem becomes clearer when we consider the presented false choice a bit more closely.

Why should we take for granted the assumption that the upper ontologies in a field would grow "without limit"? I offer as a premise that "camps" or "dominant schools of thought" in a given field correspond -- at worst -- roughly to distinct core ontologies. If this is reasonable, then a quick survey of many fields finds that n is not "without limit" but coalesces around a handful of competing views.

Seriously, pick any field of interest. Find and count the core research streams. Are they limitless? Are there many with even more than 10? If you find one, please let me know!

Ontologies Explicitly for Reuse
Moreover, in the context of developing computational ontologies for sharing and reuse across a domain or field, the normative imperative that comes with accepting such a goal itself curbs "limitless growth" and mitigates the n^2 problem. People in ontology are acutely aware of silos and of the problems that limitless growth introduces. Indeed, families of interlingua ontologies have been suggested as a middle ground between the hub-and-spoke / star method and the n^2 web. It need not be one hub or a full web: what about three to five hubs? (See the sketch below.)
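
To make the arithmetic behind that suggestion concrete, here is a minimal
sketch in Python (purely illustrative; the choice of k = 4 hubs is my own
for the example) counting the mappings each topology requires:

    # Count the mappings needed to connect n ontologies under the three
    # topologies discussed above.

    def pairwise_web(n):
        # Full n^2 "web": one mapping per unordered pair of ontologies.
        return n * (n - 1) // 2

    def single_hub(n):
        # Hub-and-spoke / star: each ontology maps once to one interlingua.
        return n

    def hub_family(n, k):
        # A small family of k interlingua hubs: each ontology maps to its
        # nearest hub, and the hubs are mapped pairwise among themselves.
        return n + k * (k - 1) // 2

    for n in (10, 100, 1000):
        print(n, pairwise_web(n), single_hub(n), hub_family(n, 4))
    # n = 100:  4950 pairwise vs. 100 (one hub) vs. 106 (four hubs)
    # n = 1000: 499500 pairwise vs. 1000 vs. 1006 -- a handful of hubs
    # keeps the mapping effort linear in n, not quadratic.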

So, I would suggest that the more accurate choice is between a single, mandated ontology (the rationale for which seems largely grounded in perceived, potential cost considerations -- please correct me if I'm wrong) and a limited handful that captures the (at least immediately) irreconcilable disagreements within "unsettled" fields. At worst, the number of core ontologies grows like log(n), leveling off in the single digits.

Moreover, borrowing terminology from belief revision theory, one can imagine that in such a context the introduction of a novel, conflicting core ontology should carry a high cost. One way to realize this might be to assign an ordinal entrenchment scale to the axioms comprising the core of the upper ontologies (a toy illustration follows). More generally, one should have to demonstrate a need persuasively, showing that there is a space in the problem domain/field where current efforts fail or perform poorly.
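
As a rough illustration of that ordinal-scale idea (again Python, and again
hypothetical: the axioms and ranks below are invented for the example,
loosely echoing the notion of epistemic entrenchment from the belief
revision literature):

    # Rank established core axioms by entrenchment, so that a candidate
    # ontology is "expensive" in proportion to how deeply entrenched the
    # axioms it conflicts with are. All axioms and ranks are made up.

    CORE_ENTRENCHMENT = {
        "part_of is transitive":      3,  # deeply entrenched
        "every process has a bearer": 2,
        "all sites are immaterial":   1,  # most easily revised
    }

    def revision_cost(conflicting_axioms):
        # Total entrenchment of the established axioms the newcomer denies.
        return sum(CORE_ENTRENCHMENT.get(a, 0) for a in conflicting_axioms)

    # A candidate that denies only a weakly entrenched axiom is cheap to
    # admit; one attacking the most entrenched axioms faces a high bar.
    print(revision_cost(["all sites are immaterial"]))        # -> 1
    print(revision_cost(["part_of is transitive",
                         "every process has a bearer"]))      # -> 5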

Then we are faced not with maintaining a limitlessly growing set of n^2 mappings among competing ontologies, but with developing a governance(?) framework and need/purpose-driven guidelines for when to branch, evolve and accommodate novel core ontologies.

In the context of employing computational ontologies in research, the discussion surrounding the accommodation of a limitlessly growing n thus strikes me as largely fanciful... (notwithstanding the techniques and approaches scalable to that level being developed by multiple research streams, as John Sowa has pointed out).

Yes, we should be aware of the problems in the database and software systems of the past. However, we are embarking with the explicit aim of making our ontologies interoperable and reusable. This already marks an important distinction from previous efforts, which did not recognize such problems, let alone attempt to tackle them. We can create guidelines to facilitate this process, and I am not convinced that mandating a unique upper ontology for each field is the best choice -- at least not as argued thus far.

Best,
Ali

-------- Original Message --------
Subject: Re: [ontology-summit] Official Communique Feedback Thread
Date: Wed, 20 Apr 2011 10:39:51 -0400
From: Barry Smith
To: Ontology Summit 2011 discussion <ontology-summit@xxxxxxxxxxxxxxxx>

On Wed, Apr 20, 2011 at 9:50 AM, John F. Sowa <sowa@xxxxxxxxxxx> wrote:
> AGC
>> ... having one single ontology does not solve the problem. actually
>> IMHO it does not solve anything. it could probably be a good idea to
>> address the issue of interoperability across ontologies rather than
>> pretending to have "one ontology per domain".
>
> Yes, indeed.
>
> There are already a huge number of implemented and proposed ontologies,
> and the largest number of potential ontologies comes from the trillions
> of dollars of legacy software.  The total number is finite, but it is
> sufficiently large that infinity is the only practical upper bound.
>
> BS
>> Who will keep the N-squared mappings up to date, for an N that is
>> increasing, if AGC gets his way, without limit? Who will pay for this
>> ever increasing mapping effort? Who will oversee the mapping effort?
>
> The only reasonable solution is to provide automated methods for
> discovering the mappings.  Adolf Lindenbaum showed how to do that
> over 80 years ago -- it's called the Lindenbaum lattice.
>
> For a brief survey, see Sections 6 and 7 of the following paper:
>
>     http://www.jfsowa.com/pubs/rolelog.pdf
>
> John

It would be nice, if it worked. But in practice, at least in the areas
with which I am familiar, it doesn't. The mappings I know of between
ontologies in practical use (for example between different anatomy
ontologies) involve very costly manual effort, and even then they are
still imperfect (and fragile as the mapped ontologies themselves
change). See e.g. the papers by Bodenreider (who does the best work in
this field) listed here:

http://mor.nlm.nih.gov:8000/pubs/offi.html

(and especially the items co-authored with Zhang).
Can John point to examples of practically useful mappings created and
updated automatically through appeal to some sort of Lindenbaum
lattice-based technology?

BS




--

(•`'·.¸(`'·.¸(•)¸.·'´)¸.·'´•) .,.,

