
Re: [ontolog-forum] FW: Automated Ontology Mapping

To: ontolog-forum@xxxxxxxxxxxxxxxx
From: Bart Gajderowicz <bgajdero@xxxxxxxxxx>
Date: Wed, 25 Mar 2009 12:00:31 -0400
Message-id: <6b20199d0903250900n619fcb5aub7af91b12fe048f8@xxxxxxxxxxxxxx>
Thanks Sean,    (01)

>        I am afraid that, from an industrial perspective, such work
> is viewed with some scepticism. We do not use ontologies as
> abstract specifications, but as decision mechanisms for concrete
> processes. This poses two sorts of problem.
....
>
> My view is that the industrial usage of an ontology will be based on a
> risk assessment of its reliability.    (02)

Some work, I think, will need to be done by hand, either partially or
entirely, especially where the required level of precision is high, as
it is in manufacturing.    (03)

> Firstly, the meaning of many critical terms is determined by the
> process. Historical experience in data exchange has shown that even
> where two companies use the same terminology to mean very similar things,
> data exchange is dependent on process alignment.    (04)

This is precisely why I would like to focus on the structural
qualities of classes rather than their lexical ones.    (05)
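
To make this concrete, here is a minimal sketch of the kind of
structural comparison I have in mind.  The feature encoding and the
example classes are my own illustration, not output from any existing
tool:

    # Compare two ontology classes by structure, ignoring their labels.
    def structural_similarity(c1, c2):
        """Jaccard overlap of structural features: property names plus
        bucketed parent and child counts."""
        f1 = set(c1["properties"]) | {("children", len(c1["children"])),
                                      ("parents", len(c1["parents"]))}
        f2 = set(c2["properties"]) | {("children", len(c2["children"])),
                                      ("parents", len(c2["parents"]))}
        return len(f1 & f2) / len(f1 | f2)

    # Two classes with different names but identical structure score 1.0.
    engine = {"properties": ["hasPart", "hasPower"],
              "children": ["V6"], "parents": ["Component"]}
    motor  = {"properties": ["hasPart", "hasPower"],
              "children": ["AC"], "parents": ["Component"]}
    print(structural_similarity(engine, motor))  # 1.0, despite the labels

A lexical matcher would score "Engine" against "Motor" poorly; the
structural signal is unaffected by the naming.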

> Such a mismatch in process definition has a
> significant cost impact, and cannot be resolved without realignment of
> the business processes.    (06)

If the processes were modeled in a common notation such as UML, could
the alignment of business processes be done based on that hierarchical
information?  Would alignment be simpler if the two systems were
modeled in the same language, for example the Process Specification
Language (PSL)?  If the language is the same, and if it is expressive
enough to describe the properties needed, perhaps some level of
agreement could be achieved.  I realize not all situations lend
themselves to this type of process.    (07)
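
As a toy illustration of what I mean (my own simplification, not PSL
syntax): once both processes are expressed in one shared vocabulary,
an agreement check can reduce to comparing their activity orderings
directly.

    # Report activity pairs ordered one way in proc_a and the opposite
    # way in proc_b; an empty result means no ordering conflict.
    def ordering_conflicts(proc_a, proc_b):
        pos_a = {act: i for i, act in enumerate(proc_a)}
        pos_b = {act: i for i, act in enumerate(proc_b)}
        shared = pos_a.keys() & pos_b.keys()
        return [(x, y) for x in shared for y in shared
                if pos_a[x] < pos_a[y] and pos_b[x] > pos_b[y]]

    a = ["design", "machine", "inspect", "ship"]
    b = ["design", "inspect", "machine", "ship"]
    print(ordering_conflicts(a, b))  # [('machine', 'inspect')]

Nothing here would resolve the mismatch automatically, of course, but
it would at least flag where realignment is needed.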



>
> Secondly, a significant number of terms are comprehension level terms,
> rather than perception level terms. That is, the comprehension level
> term is the result of some weighted classification process - weighted in
> the sense that some perception level terms may be ignored. Again, such
> classification procedures are organization dependent, and two
> organizations may classify the same set of perceptions in markedly
> different ways. For example, the emergency response codes for fire,
> police and ambulance are markedly different, since the missions of the
> organizations are quite distinct. The 7/7 London transport bombings were
> a "major" incident for the fire and rescue service, but not for the
> ambulance service, since they did not add significantly to its overall
> workload. Incidentally, in the UK, for the police the term "major"
> implies that they have a statutory duty to set up an incident response
> centre to deal with phone calls from the public.    (08)

I believe the departmental split would be captured by different
contexts, so most terms would differ, except perhaps for upper
ontology terms.  But as you point out, even a simple word like "major"
means different things.  Using an upper ontology, one might say that
"major" denotes a degree of magnitude at the higher end of some scale.
This could be modeled in fuzzy logic as a quantifier: the actions
taken, and the magnitudes assigned, by different departments would be
relative to their contexts.    (09)
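
A small sketch of how that might look; the membership curve and the
workload thresholds below are invented purely for illustration:

    # 'Major' as a fuzzy quantifier whose membership is relative to each
    # organization's normal workload range [ctx_low, ctx_high].
    def major_membership(magnitude, ctx_low, ctx_high):
        if magnitude <= ctx_low:
            return 0.0
        if magnitude >= ctx_high:
            return 1.0
        return (magnitude - ctx_low) / (ctx_high - ctx_low)

    incident = 70  # one event, one magnitude
    print(major_membership(incident, 20, 60))   # fire service: 1.0 (major)
    print(major_membership(incident, 50, 200))  # ambulance: ~0.13 (minor)

The same incident magnitude lands at opposite ends of the two
membership curves, which matches your 7/7 example.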


> The automatic mapping of ontologies seems to actually increase the risk.    (010)

This is a good point, and some domains simply don't lend themselves to
automated mapping.  Again, PSL might make the process easier, and
perhaps add a measure of certainty.    (011)

> Or rather, I can see a role for automatic
> mapping as a way of identifying incompatibilities between ontologies,
> but I cannot see it as a way of safely identifying equivalents.    (012)

One of the proposed applications of my research is just that: checking
whether two ontologies describe the same domain, or perhaps the same
topic but in different contexts; if they do, to what degree, and
through which axioms specifically.  In fact, this is how I envision
the approach working with a repository of ontologies.  When an
ontology is added, a preliminary check could be performed to align it
with an existing one, or at least to place it in the same domain
category.    (013)
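
A sketch of the repository idea (the names and the crude overlap
measure below are placeholders; the structural measure sketched
earlier would be the real alignment signal):

    def term_overlap(o1, o2):
        """Crude stand-in signal: shared class names over the union."""
        s1, s2 = set(o1["classes"]), set(o2["classes"])
        return len(s1 & s2) / len(s1 | s2)

    def categorize(new_onto, repository, threshold=0.3):
        """On insert, score the newcomer against each stored ontology
        and file it under the best-matching domain if any score clears
        the bar."""
        scored = [(term_overlap(new_onto, o), o["domain"])
                  for o in repository]
        score, domain = max(scored, default=(0.0, None))
        return domain if score >= threshold else "uncategorized"

    repo = [{"domain": "automotive",
             "classes": {"Engine", "Wheel", "Brake"}},
            {"domain": "medicine",
             "classes": {"Patient", "Dose", "Drug"}}]
    new = {"classes": {"Engine", "Brake", "Chassis"}}
    print(categorize(new, repo))  # automotive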

>
> It may be that the use cases you are considering can accept this level
> of risk (you do have use cases?), in which case good luck. But it would be a
> helpful measure of the usefulness of this research if you were also to
> investigate the effect on risk (or at least the probability of error) of
> such automatic methods.    (014)

I agree: results without some identified means of measurement are not
very reliable.  One of the tasks OntoClean was designed to address was
this very point.  Luckily, its meta-properties are expressible in FOL,
and implementing them would be part of the process.  As I mentioned,
testing the degree of uncertainty would be part of the
classification/alignment process.  The inductive approach lends itself
to this well, though it might not be as easy with a deductive
approach.    (015)
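
For the inductive case, the error estimate could be as direct as
running shared instances through both sides of a proposed mapping and
reporting the disagreement rate.  A sketch, with toy classifiers of my
own invention:

    def mapping_error(instances, classify_a, classify_b, mapping):
        """Fraction of instances where ontology B's classification
        disagrees with the mapped image of ontology A's."""
        errors = sum(1 for x in instances
                     if mapping.get(classify_a(x)) != classify_b(x))
        return errors / len(instances)

    classify_a = lambda x: "Large" if x > 10 else "Small"
    classify_b = lambda x: "Big" if x > 12 else "Little"
    mapping = {"Large": "Big", "Small": "Little"}
    print(mapping_error([5, 8, 11, 13, 20],
                        classify_a, classify_b, mapping))  # 0.2

The 0.2 here is exactly the probability of error you ask about,
estimated empirically rather than proven deductively.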

-- 
Bart Gajderowicz
MSc Candidate, '10
Dept. of Computer Science
Ryerson University
http://www.scs.ryerson.ca/~bgajdero    (016)

_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx    (017)
