
[ontolog-forum] FW: Automated Ontology Mapping

To: "Ontolog-Forum-Bounces" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "Sean Barker" <sean.barker@xxxxxxxxxxxxx>
Date: Wed, 25 Mar 2009 10:47:26 -0000
Message-id: <OOEEJGAPCAJOKOFFPHLHIEBICAAA.sean.barker@xxxxxxxxxxxxx>

Bart,

        I am afraid that, from an industrial perspective, such work
is viewed with some scepticism. We do not use ontologies as
abstract specifications, but as decision mechanisms for concrete
processes. This poses two sorts of problem.

Firstly, the meaning of many critical terms is determined by the
process. Historical experience in data exchange has shown that even
where two companies use the same terminology to mean very similar
things, data exchange is dependent on process alignment. For example,
the term "issued" (as in "an issued drawing") refers to the key maturity
gate in a design process. In one organization this may simply mean that
the design is complete and the CAD model has been checked by the
relevant design departments; in another, that the manufacturing planning
has been completed, along with the manufacturing process plans, models
for condition of supply and condition of spare, etc. Such a mismatch in
process definition has a significant cost impact, and cannot be resolved
without realignment of the business processes.

Secondly, a significant number of terms are comprehension-level terms,
rather than perception-level terms. That is, the comprehension-level
term is the result of some weighted classification process - weighted in
the sense that some perception-level terms may be ignored. Again, such
classification procedures are organization dependent, and two
organizations may classify the same set of perceptions in markedly
different ways. For example, the emergency response codes for fire,
police and ambulance are markedly different, since the missions of the
organizations are quite distinct. The 7/7 London transport bombings were
a "major" incident for the fire and rescue service, but not for the
ambulance service, since they did not add significantly to its overall
workload. Incidentally, in the UK, for the police the term "major"
implies that they have a statutory duty to set up an incident response
centre to deal with phone calls from the public.

My view is that the industrial usage of an ontology will be based on a
risk assessment of its reliability. In particular, if a partial
misunderstanding of a term has a significant impact, then the ontology
will not be used unless there is an appropriate risk mitigation action -
typically a due diligence process that will confirm the understanding of
the terms. One wouldn't buy an aircraft part on the internet without
doing a whole series of quality checks.

The automatic mapping of ontologies seems to actually increase the risk.
Or rather, I can see a role for automatic mapping as a way of
identifying incompatibilities between ontologies, but I cannot see it as
a way of safely identifying equivalents.

It may be that the use cases you are considering can accept this level
of risk (you do have use cases?), in which case good luck. But it would
be a helpful measure of the usefulness of this research if you were also
to investigate the effect of such automatic methods on risk (or at least
on the probability of error). That is, from my viewpoint the question is
not whether one can automatically map between ontologies, but what
effect an automatic mapping has on the risk involved. This is an
ontology engineering question, rather than an ontology science question.

Sean Barker
Bristol, UK


Hello,

This is my first posting on this forum, so let me introduce myself.
My name is Bart Gajderowicz, and I'm a graduate student in the Computer
Science Department at Ryerson University in Toronto, Canada.

I am researching automated ontology mapping, and have compiled several
options which span the different categories currently being developed.
Let me formally define these as per (Choi et al. 2006). Based on this
work, I am currently looking at several fields/ideas for introducing
partial consistency to automate the mapping process.

I welcome any comments, corrections, suggestions, or criticisms from the
forum on the following analysis.

Some of the current techniques take a deductive approach, and
concentrate on the structures, axioms, and hierarchies of the ontology.
Others take an inductive approach, and look at instances in order to
derive what objects are being modelled.  Others still are a hybrid of
the two.  At this point I'm looking at both approaches, to see which
direction is more appropriate for my research.
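As a concrete illustration of the inductive approach, concept similarity can be estimated from instance overlap. The sketch below uses a Jaccard index over shared instances; the concept and instance names are invented for illustration, not taken from any real ontology.

```python
# Hypothetical sketch of the inductive approach: estimate how similar two
# concepts are by the overlap of the instances classified under them.

def jaccard(a, b):
    """Jaccard similarity of two instance sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Two ontologies classify the same individuals under different labels.
onto1 = {"IssuedDrawing": {"d1", "d2", "d3"}}
onto2 = {"ReleasedDrawing": {"d2", "d3", "d4"}}

score = jaccard(onto1["IssuedDrawing"], onto2["ReleasedDrawing"])
print(score)  # 0.5
```

A high overlap score only suggests a candidate correspondence; as noted above, it says nothing by itself about whether the two processes behind the labels actually agree.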

I'm currently concentrating my efforts on ontologies defined by first
order languages such as Common Logic. I'm doing this to take advantage
of provers and inference engines, but also to limit my domain to these
languages, so compatible representation becomes less of a headache.

At this point, I would also like to concentrate on structural and
taxonomic similarities, with a limited amount of lexical similarity
measures.  Preferably, no natural language processing would be performed
on terms at this time.  This may limit my ability to perform schema or
semantic mapping.  In FOL, however, I have the ability to apply
unification techniques on a set of axioms, to align free and bound
variables.
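The unification step mentioned above can be sketched as a minimal first-order unifier, with terms represented as nested tuples and variables written as strings beginning with "?". This is illustrative only (no occurs check, no prover integration), and the predicate and constant names are invented.

```python
# Minimal first-order unification sketch. Terms are nested tuples
# (functor, arg1, ...); variables are strings starting with "?".

def unify(x, y, subst=None):
    """Return a substitution dict unifying x and y, or False on failure."""
    if subst is None:
        subst = {}
    if subst is False:
        return False
    if x == y:
        return subst
    if isinstance(x, str) and x.startswith("?"):
        return unify_var(x, y, subst)
    if isinstance(y, str) and y.startswith("?"):
        return unify_var(y, x, subst)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            subst = unify(xi, yi, subst)
            if subst is False:
                return False
        return subst
    return False

def unify_var(var, term, subst):
    """Bind var to term, chasing any existing binding first."""
    if var in subst:
        return unify(subst[var], term, subst)
    return {**subst, var: term}

# Aligning partOf(?x, Wing) with partOf(Flap, ?y)
# yields the substitution {?x: Flap, ?y: Wing}.
print(unify(("partOf", "?x", "Wing"), ("partOf", "Flap", "?y")))
```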

Because ontologies may, and often will, define the same concepts
differently (depending on the context, a subject matter expert's
knowledge, their modelling approach, etc.), checking for consistency
will be necessary, but also tricky, which I realize is a serious
understatement.  Most systems currently in place have some level of
manual verification, but if a system's domain is to expand to a large
repository, with an active community contributing often, a fully
automated system would be desirable.  This can naturally be extended to
the web, and to the semantic classification of documents.  My task now
is to see how far an algorithm can go before some level of partial
consistency is introduced, and how this introduction is executed.

To that end, I am currently looking at the following fields/ideas to
introduce partial consistency:

1) A similarity measure can be evaluated by looking at properties such
as isomorphism, injection, surjection, associativity, commutativity, and
distributivity between some classes, but not others, in different
ontologies.

2) Perhaps Prototype Theory will allow me to formally define some key
terms, as a type of local upper ontology.  In OntoClean (Guarino and
Welty 2002), for example, the first step was to define a "backbone
taxonomy of terms" to which all other terms were grounded.

3) Fuzzy Logic may also be suitable: membership functions could be
created which include some sets of axioms, attributes, or properties,
but not others, and overlapping relations may be true to some degree.
As was pointed out to me, these would define a maximum-entropy model,
which is used in the data mining field rather than in logic.  This may
still be worth looking at, for inductive analysis.  Perhaps a
redefinition of inconsistent FOL clauses in Fuzzy Logic could be done in
such a way that the intended meaning of the axioms is preserved, and
theories are weakened to accommodate the discrepancies.  This approach
would greatly limit the decidability of theories, but if key axioms are
pointed out, with corresponding degrees, perhaps this limitation is
worth the gains.  GLUE, for example, uses a Naive Bayes learner to
analyse the statements of ontologies.

4) Perhaps adding Modal Logic into the mix, and saying that some set of
properties defines a concept now, but not at a later time, could be a
way of quantifying differences between concepts in different ontologies.
Think of a Wikipedia entry where the definition of a term may change
with popular opinion or when more information becomes available.

5) Contexts may be generated where some axioms are added or removed.
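As a rough sketch of idea (1), some of the listed structural properties of a candidate concept mapping can be tested directly. The mapping and class names below are invented for illustration; a real measure would of course combine many such checks.

```python
# Hypothetical sketch: test structural properties of a candidate mapping
# from source-ontology classes to target-ontology classes.

def is_injective(mapping):
    """Injection: no two source classes map to the same target class."""
    targets = list(mapping.values())
    return len(targets) == len(set(targets))

def is_surjective(mapping, target_classes):
    """Surjection: every target class is hit by some source class."""
    return set(mapping.values()) == set(target_classes)

# An invented candidate mapping between two small class sets.
candidate = {"Drawing": "Document", "Part": "Component",
             "Assembly": "Component"}
target = {"Document", "Component"}

print(is_injective(candidate))          # False: two classes share a target
print(is_surjective(candidate, target))  # True: every target is covered
```

Failing such a check need not rule a mapping out; it can instead lower a similarity score, or flag the correspondence for the kind of due-diligence review Sean describes.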

References:

Choi, N., Song, I.-Y., Han, H. (2006). A survey on ontology mapping.
SIGMOD Record, 35(3), pp. 34-41.

Guarino, N., Welty, C. (2002). Evaluating ontological decisions with
OntoClean. Communications of the ACM, 45(2), pp. 61-65.

Thank you.
Bart Gajderowicz
MSc Candidate, '10
Dept. of Computer Science
Ryerson University
http://www.scs.ryerson.ca/~bgajdero

