On 21/04/2011 1:36 PM, Azamat Abdoullaev wrote:
>> Interoperability is a critical idea needing depth and breadth
>> and common foundation framework.
>> At that level of detail, we agree. But the framework must support
>> existing systems, future systems, and people at all levels of education.
> We are in accord here.
> But for any thing, product, system, service, network, or technology to be
> interoperable, it must be compatible with the same standard, ideally a
> standard ontology.
> Let me recall some key points from my last message:
> 1. There are nation-level programs such as the EU Interoperability
> Framework, the US NIEM, and the UK e-GIF.
> 2. Interoperability in general implies common standards, specifications,
> formats, and categorizations, with integration through unifying models
> and schemas.
> 3. The General Interoperability Framework (GIF) is closely connected with
> a world/domain reference model serving as a common foundation ontology.
> Take what most closely concerns this Forum: the US National
> Information Exchange Model (http://www.niem.gov/). "It is designed to
> develop, disseminate and support enterprise-wide information exchange
> standards and processes that can enable jurisdictions to effectively share
> critical information in emergency situations, as well as support the
> day-to-day operations of agencies throughout the nation."
> Its syntactic interoperability is to be achieved by using the XML Schema
> data model, constructs, and methods, seemingly thus supporting existing
> "legacy systems" across all levels of government: federal, state, and local.
> However, the issue of issues is how to achieve computable Semantic
> Interoperability among any and all communicating entities, legacy ones or
> not. Seemingly, this is to be done by developing the GIF, implying a
> fundamental set of basic entities and relationships that provides the
> semantic basis (meaning exchange/interpretation standards and processes)
> for more specialized domains, fields, and applications.
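> To make the syntactic/semantic distinction concrete, here is a minimal
> sketch using the third-party Python xmlschema package, with a hypothetical
> mini-schema that is NOT an actual NIEM component: two messages can both
> validate syntactically against a shared schema while their meanings
> remain uninterpreted.
>
> import xmlschema  # third-party: pip install xmlschema
>
> # Hypothetical exchange schema -- not a real NIEM schema.
> XSD = """<?xml version="1.0"?>
> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
>   <xs:element name="Incident">
>     <xs:complexType>
>       <xs:sequence>
>         <xs:element name="Severity" type="xs:string"/>
>       </xs:sequence>
>     </xs:complexType>
>   </xs:element>
> </xs:schema>"""
>
> schema = xmlschema.XMLSchema(XSD)
> # Both messages are syntactically valid, but "high" and "code 3" may
> # mean different things to different agencies; the schema cannot check that.
> print(schema.is_valid("<Incident><Severity>high</Severity></Incident>"))    # True
> print(schema.is_valid("<Incident><Severity>code 3</Severity></Incident>"))  # True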
> Given that, I am convinced that obtaining a General Semantic
> Interoperability standard, for a problem costing hundreds of billions per
> year, means developing a single world reference model in the first place.
> Azamat Abdoullaev
Just not possible. Who gets to decide? There are too many stakeholders.
Each stakeholder will have trouble giving up a view of the universe that
has served its organization for years in order to fix someone else's
problem with that view.
We have survived an Imperial-versus-metric world for two centuries without
being able to agree on something so clear cut.
We just make the conversions when we need to; the rest of the time we
don't bother.
In Canada, we measure in metric, but the result is frequently something
that makes sense in inches (plywood comes in the metric equivalent of
4x8-foot sheets, and no one has any idea how big that is in metric).
I have no expectation that the US Justice Department and the US
Treasury are ever going to agree on common definitions of financial
terms. Their hierarchies of objects will probably never match, and that
will be a problem for the people who have to define the interoperability
rules for companies that need to keep their own internal view of the
universe while providing views that fit the external agencies' hierarchies.
Try telling the EU or the Chinese that they have to adopt the US Treasury's
view of the financial world.
> ----- Original Message -----
> From: "John F. Sowa"<sowa@xxxxxxxxxxx>
> To: "[ontolog-forum]"<ontolog-forum@xxxxxxxxxxxxxxxx>
> Cc: "Smith, Barry"<phismith@xxxxxxxxxxx>; "Azamat"<abdoul@xxxxxxxxxxxxxx>
> Sent: Thursday, April 21, 2011 4:50 PM
> Subject: Relating and Reconciling Ontologies
>> Barry, Azamat, et al.,
>> The note copied below was addressed to the Ontology Summit list.
>> But it addresses important issues that should be discussed in the
>> wider forum. I presented some related slides there on Tuesday:
>> The concluding slide 6 advocated automated methods for relating
>> ontologies to one another and for extracting ontologies from
>> legacy software and from natural language texts. This morning,
>> I added some pointers to suggested readings for further detail.
>>> The mappings I know of between ontologies in practical use
>>> (for example between different anatomy ontologies) involve very
>>> costly manual effort, and even then they are still imperfect
>>> (and fragile as the mapped ontologies themselves change).
>> I agree.
>> Even worse, inter-annotator agreement among professionals who use
>> the ontologies (and the related terminologies) is very poor. At
>> the Ontology Summit, I was discussing the issues with a physician
>> who cited a discouraging result: agreement between any two
>> ophthalmologists who assign SNOMED codes to a set of cases is
>> about 60%.
>> The annotators don't even agree with themselves. In the study,
>> the experimenters retested exactly the same ophthalmologists
>> a year later on a subset of exactly the same cases. For each
>> of the "experts", their new answers had about a 60% agreement
>> with their answers the year before.
>> This is the fatal flaw in any system that depends on human experts
>> to link real-world data to formal definitions. Unique identifiers
>> of formal definitions are hopelessly unreliable in any system that
>> depends on human annotators to select an option from a menu.
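>> For readers unfamiliar with how such agreement figures are computed,
>> here is a minimal sketch in Python. The code labels and case data are
>> hypothetical, not the actual study data: it shows raw percent agreement,
>> plus Cohen's kappa, which corrects for agreement expected by chance.
>>
>> from collections import Counter
>>
>> def percent_agreement(a, b):
>>     # Fraction of cases on which the two annotators chose the same code.
>>     return sum(x == y for x, y in zip(a, b)) / len(a)
>>
>> def cohens_kappa(a, b):
>>     # Chance-corrected agreement, computed from the marginal frequencies.
>>     n = len(a)
>>     p_o = percent_agreement(a, b)
>>     ca, cb = Counter(a), Counter(b)
>>     p_e = sum((ca[c] / n) * (cb[c] / n) for c in set(a) | set(b))
>>     return (p_o - p_e) / (1 - p_e)
>>
>> # Hypothetical code assignments by two annotators to the same 10 cases:
>> ann1 = ["C1", "C2", "C1", "C3", "C2", "C1", "C3", "C1", "C2", "C1"]
>> ann2 = ["C1", "C1", "C2", "C3", "C2", "C1", "C1", "C1", "C2", "C3"]
>> print(percent_agreement(ann1, ann2))  # 0.6, like the figure cited above
>> print(cohens_kappa(ann1, ann2))       # lower still, once chance is removed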
>>> Can John point to examples of practically useful mappings created
>>> and updated automatically through appeal to some sort of Lindenbaum
>>> lattice-based technology?
>> Yes, indeed. Every *correct* alignment of any two ontologies that
>> has ever been done by human or machine is a successful application
>> of the mappings shown in a Lindenbaum lattice.
>> The lattice is actually a very simple structure that can be
>> specified on one page. It is the formal foundation for every
>> method of theory revision or ontology alignment.
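>> As a minimal sketch of that one-page structure (propositional logic over
>> two atoms, a toy model rather than a tool for real ontologies): identify
>> each deductively closed theory with its set of models. Then entailment
>> is subset inclusion, and meet and join are set intersection and union.
>>
>> from itertools import product
>>
>> atoms = ("p", "q")
>> # A model is a truth assignment to the atoms; a theory is identified
>> # with the set of models that satisfy it.
>> models = frozenset(product((False, True), repeat=len(atoms)))
>>
>> def entails(t1, t2):
>>     # t1 is a specialization of t2 iff every model of t1 satisfies t2.
>>     return t1 <= t2
>>
>> def meet(t1, t2):
>>     # Most general common specialization (conjunction of the theories).
>>     return t1 & t2
>>
>> def join(t1, t2):
>>     # Most specific common generalization (disjunction of the theories).
>>     return t1 | t2
>>
>> p = frozenset(m for m in models if m[0])   # theory asserting "p"
>> q = frozenset(m for m in models if m[1])   # theory asserting "q"
>> print(meet(p, q) == frozenset(m for m in models if m[0] and m[1]))  # True
>> print(join(p, q) == frozenset(m for m in models if m[0] or m[1]))   # True
>>
>> Aligning two ontologies amounts to locating them in this lattice and
>> finding their join (common generalization) or meet (common specialization).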
>> The lattice is like arithmetic. People were counting on their
>> fingers long before Peano stated his axioms. The theory doesn't
>> say that counting on fingers is bad, but it can distinguish sound
>> methods from flaky ones. Furthermore, it can provide guidelines
>> for designing automated and semi-automated tools that can be
>> much faster and more reliable than finger exercises.
>>> Interoperability is a critical idea needing depth and breadth
>>> and common foundation framework.
>> At that level of detail, we agree. But the framework must support
>> existing systems, future systems, and people at all levels of
>> education. (And even experts in one field are novices in others.)
>> 1. There are trillions of dollars of legacy software that run the
>> world economy. It won't be replaced for a long, long time.
>> 2. Anything that replaces a legacy system has to interoperate with
>> it during a long period of transition. In fact, most systems
>> that replace a legacy system build on and extend the implicit
>> ontology in the old system.
>> 3. Anything that depends on people using unique identifiers must
>> address the problem that even experts in a subject can't agree
>> on what codes or categories to assign.
>> -------- Original Message --------
>> Subject: Re: [ontology-summit] Official Communique Feedback Thread
>> Date: Wed, 20 Apr 2011 10:39:51 -0400
>> From: Barry Smith
>> To: Ontology Summit 2011 discussion<ontology-summit@xxxxxxxxxxxxxxxx>
>> On Wed, Apr 20, 2011 at 9:50 AM, John F. Sowa<sowa@xxxxxxxxxxx> wrote:
>>>> ... having one single ontology does not solve the problem. actually
>>>> IMHO it does not solve anything. it could probably be a good idea to
>>>> address the issue of interoperability across ontologies rather than
>>>> pretending to have "one ontology per domain".
>>> Yes, indeed.
>>> There are already a huge number of implemented and proposed ontologies,
>>> and the largest number of potential ontologies comes from the trillions
>>> of dollars of legacy software. The total number is finite, but it is
>>> sufficiently large that infinity is the only practical upper bound.
>>>> Who will keep the N-squared mappings up to date, for an N that is
>>>> increasing, if AGC gets his way, without limit? Who will pay for this
>>>> ever increasing mapping effort? Who will oversee the mapping effort?
>>> The only reasonable solution is to provide automated methods for
>>> discovering the mappings. Adolf Lindenbaum showed how to do that
>>> over 80 years ago -- it's called the Lindenbaum lattice.
>>> For a brief survey, see Sections 6 and 7 of the following paper:
>> It would be nice, if it worked. But in practice, at least in the areas
>> with which I am familiar, it doesn't. The mappings I know of between
>> ontologies in practical use (for example between different anatomy
>> ontologies) involve very costly manual effort, and even then they are
>> still imperfect (and fragile as the mapped ontologies themselves
>> change). See e.g. the papers by Bodenreider (who does the best work in
>> this field) listed here:
>> (and especially the items co-authored with Zhang).
>> Can John point to examples of practically useful mappings created and
>> updated automatically through appeal to some sort of Lindenbaum
>> lattice-based technology?
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J