There are at least 2 problems, which often get conflated:
1) The level of expressiveness (representation) it takes to develop the ontology you need for your domain. This is development-time expressiveness.

2)

3) The level of expressiveness (representation) it takes to efficiently reason over the ontology at runtime. This is run-time expressiveness.
I put the empty (2) in the above because what you need at (2) is a transformation process, i.e., knowledge compilation, to transform the representation of (1) into (3). The transformation depends on the application, but also on a global analysis of (1) (think of a globally analyzing compiler as an analogy).
So the above should be:
1) The level of expressiveness (representation) it takes to develop the ontology you need for your domain. This is development-time expressiveness.

2) Transformation of the representation of (1) into (3), i.e., knowledge compilation.

3) The level of expressiveness (representation) it takes to efficiently reason over the ontology at runtime. This is run-time expressiveness.
Typically (2) is so-called “lossy”, i.e., you lose information. E.g., if your knowledge compilation goes from (1) FOL to (3) Horn Logic (logic programming), you lose information. Not surprising, since you are reducing expressivity.
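To make the loss concrete, a toy example of mine (not from any particular system): the FOL axiom

    forall x. Bird(x) -> (Flies(x) v Penguin(x))

has the clausal form ~Bird(x) v Flies(x) v Penguin(x), which contains two positive literals and so is not a Horn clause. A Horn compilation must approximate it, e.g. by keeping only

    flies(X) :- bird(X).

which overcommits (it now claims penguins fly), or by dropping the axiom, which undercommits. Either way, information is lost.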
The Description Logic (DL) thread in AI originated about 1985 with KL-ONE, probably the first DL and frame-based knowledge representation/ontology language, which was an attempt to formally specify the ad hoc “semantic networks” that existed previously. The DL thread itself rapidly became an attempt to solve both (1) and (3) using the same KR language. OWL is a partial descendant of DL.
There are systems/engines which try to do FOL or even higher-order logic reasoning using the (1) representation. Theorem provers and engines such as Cyc are examples. However, nearly all of these effectively do (2) in the background, transforming and indexing what they can to get efficiencies close to (3).
If you do mostly (1)-style automated reasoning, untransformed, you have to wait, wait, wait, and possibly interact with the theorem prover. FOL is semi-decidable, so if your conjecture is not in fact a theorem, the prover may run forever without an answer. (3)-style automated reasoning is typically much faster, but you suffer: transformation of logical negation into finite-failure negation, open-world to closed-world semantics, perhaps some non-declarative semantics, etc.
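A small logic-programming sketch of that negation trade-off (my example, with made-up predicates, in standard Prolog):

    % Closed-world knowledge base: only the flights we know about.
    flight(jfk, lhr).

    reachable(X, Y) :- flight(X, Y).

    % ?- \+ reachable(jfk, sfo).
    % Succeeds: reachable(jfk, sfo) finitely fails, and finite-failure
    % negation (\+) treats "not provable" as false. Under classical FOL
    % semantics the answer would be "unknown", since nothing in the
    % theory entails the negation of reachable(jfk, sfo).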
Also, in recent years, in addition to computational complexity, there has arisen descriptive complexity (http://people.cs.umass.edu/~immerman/descriptive_complexity.html), a branch of finite model theory that attempts to characterize the complexity of describing a property versus the complexity of solving the problem the description expresses.
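A standard illustration of that distinction (textbook material, not taken from that page): by Fagin’s theorem, existential second-order logic captures exactly NP. Graph 3-colorability, for instance, has a short description,

    exists R,G,B .
        (forall x: R(x) v G(x) v B(x)) &
        (forall x,y: E(x,y) -> ~(R(x) & R(y)) & ~(G(x) & G(y)) & ~(B(x) & B(y)))

yet deciding whether an arbitrary graph satisfies that description is NP-complete. Describing a property and computing it can sit at very different levels of difficulty.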
Thanks,
Leo
From: ontology-summit-bounces@xxxxxxxxxxxxxxxx [mailto:ontology-summit-bounces@xxxxxxxxxxxxxxxx]
On Behalf Of Wartik, Steven P "Steve"
Sent: Monday, April 02, 2012 4:40 PM
To: Ontology Summit 2011 discussion
Subject: Re: [ontology-summit] Clarification re Big Data Challenges Synthesis
Ali,
> <snip>
> What do others think?
> Best,
> Ali
I think you’re right about one thing: the challenge of finding the right level of expressivity will elicit lots of comments.
We’ve been working with a set of OWL ontologies in which there are many restrictions of the form:
property exactly 1 owl:Thing
If you ask the Pellet reasoner to reason over these ontologies, you wait; in our case, Pellet didn’t terminate even after running overnight. If you replace these assertions with:
property some owl:Thing
which of course does not express quite the same semantics, Pellet terminates, in our case in about 6 minutes. Not exactly real time, but acceptable for certain applications.
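To show the shape of the change, a hedged sketch in Manchester syntax (Widget and hasPart are invented names, not from our ontologies):

    Class: Widget
        SubClassOf: hasPart exactly 1 owl:Thing

becomes

    Class: Widget
        SubClassOf: hasPart some owl:Thing

“exactly 1” is the conjunction of “min 1” and “max 1”; “some” keeps only the “min 1” half. The max-cardinality half is typically the expensive part, since it can force a tableau reasoner to consider merging individuals.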
If you want real-time or near-real-time performance, you have to cut most of the restrictions. So far we have been working only with T-box assertions, and we anticipate very large numbers of A-box assertions. So I expect that, for the applications we have in mind, cutting is what we’ll do.
(I want to make it clear I’m not criticizing Pellet. Other reasoners I’ve tried have their own problems. I mention Pellet because it worked, ultimately, and I measured the time.)
But the problem with figuring out what semantics to include or cut is that you don’t necessarily know the intended applications when you design an ontology. In an ideal world, an ontology is reusable. That means two things. (More, actually, but never mind that now.) First, you include enough semantics to let other ontologists know how an element relates to their needs: whether a class is equivalent to, a superclass or subclass of, overlapping with, or disjoint with a concept they’re considering expressing as a class. Second, an ontologist doesn’t have to figure out the semantics on his own; he benefits from your effort.
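Those relationships are the ones OWL can record directly. A sketch, again in Manchester syntax with invented class names:

    Class: Automobile
        EquivalentTo: Car
        SubClassOf: Vehicle
        DisjointWith: Motorcycle

Equivalence, subclassing, and disjointness each have a dedicated axiom; overlap does not, and is usually conveyed indirectly, e.g. by asserting a common subclass or a shared individual.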
If you’re deliberately writing application-specific ontologies, you’re probably reducing the prospect of reuse. That would be a shame. At the very least, it has negative consequences for a semantic web. I hope our experience is atypical. It doesn’t bode well for reusing domain-specific ontologies, at least in the near term.
Regards,
Steve Wartik