I think the perspective of context needs to be considered when determining semantic equivalence of classes between ontologies. Subtle differences in class semantics may arise from the ontology's domain, from property restrictions, or from complex class definitions. Just as cells in a living body have similar structures and inherent capabilities yet become specialized by role and function depending on their location and neighborhood, I suspect similar effects can occur due to a class's neighbors in the different ontologies. So do we face the situation of expanding the comparison to the neighbor classes in each analyzed ontology, for the two classes under consideration? I think this always has to be considered.

Take a legal organization in one ontology versus a physical organization of parts in an inventory system in another. Clearly the neighbor classes may be quite semantically different, yet there is still some higher-level sense of the properties of an organization. If these could be abstracted into a small pattern, then mapping could occur, but there would still be some loss of meaning at the integrating-pattern level when compared to its use in each ontology. Context properties associated with the pattern for each ontology may help reasoning across them.
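To make the neighborhood idea concrete, here is a minimal sketch (my own toy illustration, not an established algorithm): score two classes from different ontologies by the overlap of their neighbor-class labels. The class names and neighborhoods are invented for the legal-vs-inventory example above.

```python
def neighborhood_similarity(neighbors_a, neighbors_b):
    """Jaccard overlap between the neighbor-class label sets of two classes."""
    a, b = set(neighbors_a), set(neighbors_b)
    if not (a | b):
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical neighborhoods for "Organization" in two ontologies:
legal_org = {"Agent", "Person", "Contract", "Jurisdiction"}
inventory_org = {"Part", "Assembly", "Warehouse", "Agent"}

score = neighborhood_similarity(legal_org, inventory_org)
# A low score signals that the two "Organization" classes sit in very
# different contexts, even if their names and intuitions coincide.
```

In practice the neighbor sets would come from the subclass, property-range, and property-domain links around each class, but the principle is the same: equivalence claims get weaker as neighborhood overlap shrinks.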
Best regards,
John A Yanosy Jr
Mobile: 214-336-9875
There is a serious problem with the suggested methodology:
> Two ontologies or vocabularies (for instance FOAF and
> Schema.org) include definitions for the same class (or
> kind) of entity e.g., an Organization, and as a consequence
> we end up with Web accessible documents comprised of RDF
> statements that describe Organizations as instances of
> foaf:Organization or schemaorg:Organization.
>
> Challenge: How do we get a merged view of all the
> organizations, irrespective of how they've been described
> across various RDF documents?
>
> Solution:
>
> 1. Make a mapping/bridge/meta ontology that uses
> owl:equivalentClass relations to indicate the fact
>
> 2. Load the mapping/bridge/meta ontology document into a
> data management system that's capable of applying reasoning
> and inference to the equivalence claim based on its
> comprehension of the relation semantics expressed
>
> 3. Access instances of the <http://xmlns.com/foaf/0.1/Organization>
> classes (e.g., by seeking a description
If you create an equivalence mapping between entities in
independently developed ontologies and try to “reason”
with it in any but the most highly restricted manner, you
will almost certainly find unintended inferences, likely
logical inconsistency, and potentially a great deal of
gibberish. When you look at the logical specifications of
entities of the same name in different ontologies, they are
often quite different, even though the intuition for the
meanings may be similar. For “organization”, for example,
some ontologies have that as a subtype of “group of
people”. In some legal jurisdictions, an Organization can
exist without any members – i.e., no people. That can lead
to logical contradictions if different definitions are
equated. I have never seen “process” defined the same way
in any two independent ontologies.
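To make the "Organization" contradiction concrete, here is a toy sketch (the instance data and constraint are invented for illustration): ontology A defines an Organization as a group of people, so every instance needs at least one member; ontology B permits memberless organizations. Equating the two classes forces B's instances to satisfy A's constraint, and some cannot.

```python
def check_min_members(instances, min_members=1):
    """Return instance names violating a 'has at least N members' restriction,
    such as ontology A's 'Organization is a group of people' definition."""
    return [name for name, members in instances.items()
            if len(members) < min_members]

# Instances described under ontology B's looser definition:
b_organizations = {
    "AcmeCorp": ["alice", "bob"],
    "ShellCompanyLtd": [],   # legally valid in B: an org with no members
}

# Treating B's instances as A-Organizations (via an equivalence claim)
# exposes the clash:
violations = check_min_members(b_organizations)
```

A real OWL reasoner would surface the same clash as a logical inconsistency rather than a list of violators, but the mechanism is the same: the equated definitions impose constraints the other ontology never intended.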
Yes, but nothing I've outlined contradicts the point you are making.
My point is that you can opt to apply reasoning and inference based
on classes (entity types/kinds) described across different
ontologies. There is no such thing as one ontology that rules all.
What you can do is opt to apply meta/bridge/mapping ontologies,
according to your situation.
In my example, I am demonstrating how understanding of FOAF and
Schema.org classes can be used to integrate instances of classes
from either ontology. Note, I specifically include examples with and
without inference & reasoning to drive home the point that this is
all loosely coupled, i.e., its application and use is a choice made
by the system user.
If one only wants to create equivalencies and use that to
perform probabilistic or pattern-matching reasoning, that
may lead to useful results that can be helpful for the
humans who evaluate the results. But don’t expect the kind
of accuracy that would be needed to allow the computers to
make mission-critical decisions without human
intervention.
Computers cannot be left alone to make mission-critical decisions for
humans. What they can do is perform a lot of the grunt work that
helps human beings make better decisions, more productively.
Or, if one only wants to extract one or two properties of
an entity (e.g. the director of a film), equating “film” in
two different ontologies may work as intended. But extreme
caution is advised.
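A narrow-extraction sketch of that film example (the second property IRI and the instance data are hypothetical; only the Schema.org IRI is real): instead of fully equating two "Film" classes, map only the one property we need and try each vocabulary in turn.

```python
# Property IRIs that may carry the "director" relation. The second one
# stands in for some independently developed ontology.
DIRECTOR_PROPS = [
    "https://schema.org/director",
    "http://example.org/movies#directedBy",  # hypothetical vocabulary
]

def extract_director(description):
    """Return the first director value found under any known property IRI."""
    for prop in DIRECTOR_PROPS:
        if prop in description:
            return description[prop]
    return None

# A film described with the hypothetical vocabulary:
film = {
    "http://example.org/movies#directedBy": "Denis Villeneuve",
    "https://schema.org/name": "Dune",
}

director = extract_director(film)
```

Because the mapping is confined to one property, the semantic mismatch between the two "Film" definitions never comes into play, which is exactly why this narrow use is safer than full class equivalence.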
Of course this has to be done with caution. I've never perceived
ontologies as a mechanism for making computers perform tasks for
which they are ill equipped.
Computers are tools, and when used properly, extremely powerful
productivity tools for humans.
Starting a new thread based on the theme above to make
what I am trying to demonstrate clearer.
Situation:
Schema.org [1] is a collaborative effort aimed at
simplifying structured data publication to the Web. As
part of this effort, a number of collaborators have
collectively produced a number of shared vocabularies
under the "schema.org"
namespace.
In addition to what's being produced by Schema.org,
there are thousands of shared ontologies and
vocabularies that have been constructed and published
to the Web from a plethora of sources. Many of these
have been aggregated by services such as LOV (Linked
Open Vocabularies) [2], which basically accentuates
the TBox and RBox aspects of the Linked Open Data
(LOD) Cloud.
Typical Integration Problem:
Two ontologies or vocabularies (for instance FOAF and
Schema.org) include definitions for the same class (or
kind) of entity e.g., an Organization, and as a
consequence we end up with Web accessible documents
comprised of RDF statements that describe
Organizations as instances of foaf:Organization or
schemaorg:Organization.
Challenge: How do we get a merged view of all the
organizations, irrespective of how they've been
described across various RDF documents?
Solution:

1. Make a mapping/bridge/meta ontology that uses
owl:equivalentClass relations to indicate the fact

2. Load the mapping/bridge/meta ontology document into
a data management system that's capable of applying
reasoning and inference to the equivalence claim based
on its comprehension of the relation semantics
expressed
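The mapping-and-reasoning steps can be sketched as follows (a toy in-memory triple store standing in for a full data management system; the instance IRIs are invented for illustration):

```python
RDF_TYPE = "rdf:type"
EQUIV = "owl:equivalentClass"

triples = {
    # The mapping/bridge/meta ontology: one equivalence claim.
    ("foaf:Organization", EQUIV, "schemaorg:Organization"),
    # Instance data harvested from different Web documents.
    ("ex:W3C", RDF_TYPE, "foaf:Organization"),
    ("ex:OpenLink", RDF_TYPE, "schemaorg:Organization"),
}

def apply_equivalence(store):
    """Propagate rdf:type across owl:equivalentClass in both directions,
    a small stand-in for a reasoner's handling of the equivalence claim."""
    inferred = set(store)
    pairs = {(s, o) for s, p, o in store if p == EQUIV}
    pairs |= {(o, s) for s, o in pairs}  # equivalence is symmetric
    for s, p, o in store:
        if p == RDF_TYPE:
            for a, b in pairs:
                if o == a:
                    inferred.add((s, RDF_TYPE, b))
    return inferred

def instances_of(store, cls):
    return {s for s, p, o in store if p == RDF_TYPE and o == cls}

# Without reasoning, the two descriptions stay disjoint; with the
# equivalence applied, a merged view of all organizations appears.
merged = instances_of(apply_equivalence(triples), "foaf:Organization")
```

The key point survives even in this toy form: whether the closure is applied is a choice made at query time, so the integration stays loosely coupled rather than baked into the source data.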