On 21/04/2011 9:50 AM, John F. Sowa wrote:
> Barry, Azamat, et al.,
> The note copied below was sent to the Ontology Summit list, but it
> raises important issues that should be discussed in the wider forum.
> I presented some related slides there on Tuesday:
> The concluding slide 6 advocated automated methods for relating
> ontologies to one another and for extracting ontologies from
> legacy software and from natural language texts. This morning,
> I added some pointers to suggested readings for further detail.
>> The mappings I know of between ontologies in practical use
>> (for example between different anatomy ontologies) involve very
>> costly manual effort, and even then they are still imperfect
>> (and fragile as the mapped ontologies themselves change).
> I agree.
> Even worse, inter-annotator agreement among professionals who use
> the ontologies (and the related terminologies) is very poor. At
> the Ontology Summit, I was discussing the issues with a physician
> who cited a discouraging result: agreement between any two
> ophthalmologists who assign SNOMED codes to a set of cases is
> about 60%.
> The annotators don't even agree with themselves. In the study,
> the experimenters retested the same ophthalmologists a year later
> on a subset of the very same cases. For each of the "experts",
> the new answers agreed with the previous year's answers only about
> 60% of the time.
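(As a rough sketch of how such raw-agreement figures are computed -- the
cases and codes below are invented placeholders, not real SNOMED codes:)

    # Percent agreement between two lists of code assignments for the
    # same cases; works for rater-vs-rater or rater-vs-self a year later.
    def percent_agreement(codes_a, codes_b):
        assert len(codes_a) == len(codes_b)
        matches = sum(a == b for a, b in zip(codes_a, codes_b))
        return matches / len(codes_a)

    year1 = ["C1", "C2", "C1", "C3", "C2"]  # one rater's assignments
    year2 = ["C1", "C3", "C1", "C3", "C4"]  # same rater, one year later
    print(percent_agreement(year1, year2))  # 0.6 -- the figure cited above

Note that serious studies usually also correct for chance agreement
(e.g. Cohen's kappa), which makes a raw 60% look even worse.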
This is why IT folks are big on standards.
We would constantly revise our own algorithms and data representations
if left on our own.
We complain that the standards are out of date, too inflexible, and not
nearly as good as our own methods and data structures, but we insist
that our vendors and fellow practitioners follow them.
Standards are not free, and they sometimes take a long time to evolve.
How many billions were spent on the network standards and implementations
that were replaced by TCP/IP, after 40 years of searching for consensus?
> This is the fatal flaw in any system that depends on human experts
> to link real-world data to formal definitions. Unique identifiers
> of formal definitions are hopelessly unreliable in any system that
> depends on human annotators to select an option from a menu.
>> Can John point to examples of practically useful mappings created
>> and updated automatically through appeal to some sort of Lindenbaum
>> lattice-based technology?
> Yes, indeed. Every *correct* alignment of any two ontologies that
> has ever been done by human or machine is a successful application
> of the mappings shown in a Lindenbaum lattice.
> The lattice is actually a very simple structure that can be
> specified on one page. It is the formal foundation for every
> method of theory revision or ontology alignment.
> The lattice is like arithmetic. People were counting on their
> fingers long before Peano stated his axioms. The theory doesn't
> say that counting on fingers is bad, but it can distinguish sound
> methods from flaky ones. Furthermore, it can provide guidelines
> for designing automated and semi-automated tools that can be
> much faster and more reliable than finger exercises.
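To make the lattice idea concrete, here is a toy sketch of my own (not
John's construction): idealize each theory as a finite set of axiom
strings. Ordering theories by set inclusion then gives a lattice, with
meet as the axioms both theories share (their common generalization) and
join as the combined axioms (their common specialization). A real tool
would compare theories by entailment, not string equality.

    # Two invented mini-ontologies, each a set of axiom strings.
    mouse_anatomy = frozenset({"Heart isa Organ", "Organ partof Body",
                               "Tail partof Body"})
    human_anatomy = frozenset({"Heart isa Organ", "Organ partof Body",
                               "Hand partof Body"})

    def meet(t1, t2):
        # Greatest common generalization: what both theories assert.
        return t1 & t2

    def join(t1, t2):
        # Least common specialization: everything either theory asserts
        # (the result may, of course, be inconsistent).
        return t1 | t2

    print(meet(mouse_anatomy, human_anatomy))
    # frozenset({'Heart isa Organ', 'Organ partof Body'})
    # -- a candidate common core for aligning the two ontologies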
>> Interoperability is a critical idea needing depth and breadth
>> and a common foundation framework.
> At that level of detail, we agree. But the framework must support
> existing systems, future systems, and people at all levels of
> education. (And even experts in one field are novices in others.)
> 1. There are trillions of dollars of legacy software that run the
> world economy. It won't be replaced for a long, long time.
> 2. Anything that replaces a legacy system has to interoperate with
> it during a long period of transition. In fact, most systems
> that replace a legacy system build on and extend the implicit
> ontology in the old system.
> 3. Anything that depends on people using unique identifiers must
> address the problem that even experts in a subject can't agree
> on what codes or categories to assign.
I agree that organizations will be forced to deal with multiple
overlapping ontologies imposed by various stakeholders.
I have very little faith that the US Treasury, the international banking
standards community, the bond rating services and the Justice Department
will ever agree on a formal ontology describing all possible financial
derivative instruments that can be sold in the US or purchased from
foreigners by US entities. Each will develop an ontology to satisfy
its own view of the universe, and traders will have to determine how
their business processes and their understanding of the financial
universe map into each of these views.
The tools that make this possible will be highly valued.
They will also have to be incredibly easy for a CFO, an accountant,
or a policy analyst to use.
The ones that are not will not get used, and hence will have no value.
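As a guess at the kind of core data such a tool would maintain, here is
a tiny crosswalk between two hypothetical classifications of the same
instrument; every name below is invented:

    # One instrument class as labeled under two stakeholders' ontologies.
    treasury_terms = {"single-name CDS": "Derivative/CreditSwap/SingleName"}
    rating_terms   = {"single-name CDS": "CreditProduct/CDS"}

    # The tool's job: keep this crosswalk current as each side's ontology
    # evolves, and flag instruments with no counterpart in the other view.
    shared = treasury_terms.keys() & rating_terms.keys()
    crosswalk = {treasury_terms[k]: rating_terms[k] for k in shared}
    print(crosswalk)
    # {'Derivative/CreditSwap/SingleName': 'CreditProduct/CDS'}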
> -------- Original Message --------
> Subject: Re: [ontology-summit] Official Communique Feedback Thread
> Date: Wed, 20 Apr 2011 10:39:51 -0400
> From: Barry Smith
> To: Ontology Summit 2011 discussion <ontology-summit@xxxxxxxxxxxxxxxx>
> On Wed, Apr 20, 2011 at 9:50 AM, John F. Sowa <sowa@xxxxxxxxxxx> wrote:
>>> ... having one single ontology does not solve the problem. Actually,
>>> IMHO it does not solve anything. It could probably be a good idea to
>>> address the issue of interoperability across ontologies rather than
>>> pretending to have "one ontology per domain".
>> Yes, indeed.
>> There are already a huge number of implemented and proposed ontologies,
>> and the largest number of potential ontologies comes from the trillions
>> of dollars of legacy software. The total number is finite, but it is
>> sufficiently large that infinity is the only practical upper bound.
>>> Who will keep the N-squared mappings up to date, for an N that is
>>> increasing, if AGC gets his way, without limit? Who will pay for this
>>> ever increasing mapping effort? Who will oversee the mapping effort?
>> The only reasonable solution is to provide automated methods for
>> discovering the mappings. Adolf Lindenbaum showed how to do that
>> over 80 years ago -- it's called the Lindenbaum lattice.
>> For a brief survey, see Sections 6 and 7 of the following paper:
> It would be nice if it worked. But in practice, at least in the areas
> with which I am familiar, it doesn't. The mappings I know of between
> ontologies in practical use (for example between different anatomy
> ontologies) involve very costly manual effort, and even then they are
> still imperfect (and fragile as the mapped ontologies themselves
> change). See e.g. the papers by Bodenreider (who does the best work in
> this field) listed here:
> (and especially the items co-authored with Zhang).
> Can John point to examples of practically useful mappings created and
> updated automatically through appeal to some sort of Lindenbaum
> lattice-based technology?
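A back-of-the-envelope note on the N-squared scaling Barry raises above:
maintaining a direct mapping between every pair of N ontologies takes
N*(N-1)/2 mappings, versus N if each ontology is mapped once into some
shared structure (such as a lattice of common theories):

    # Pairwise mappings vs. one mapping per ontology into a shared hub.
    for n in (10, 100, 1000):
        pairwise = n * (n - 1) // 2
        print(f"N={n}: pairwise={pairwise}, hub={n}")
    # N=10:   pairwise=45,     hub=10
    # N=100:  pairwise=4950,   hub=100
    # N=1000: pairwise=499500, hub=1000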