OWL is the standard for representing ontologies. That is the reality.
We might think of OWL as a knowledge presentation layer standard. But to *do*
something with the knowledge, another set of languages (a knowledge application
layer?) and related engines, like Cyc, might be needed. (01)
This might depend heavily on the "what to do" specifics (a classification of
intended usages?). (02)
Thoughts? (03)
Jeff (Yefim)

From: John F. Sowa [sowa@xxxxxxxxxxx]
I'm back from my travels, and I'd like to address a few issues about Cyc, about
the many ways of using logic, about "robustness", and about how any or all of
these issues affect practical problems that people are desperately trying to
solve.
By the way, I'll make some strong positive statements about Cyc in the
following remarks, but I also have strong criticisms about many aspects of Cyc.
But on the particular issues in this thread, I have a great deal of sympathy
with the Cyc approach.
PH> The real difference here is one of academic style: the CycL
> developers are ruthlessly pragmatic and do not care a whit for
> theoretical analyses of completeness or for proving correctness.
One has to distinguish the many different "CycL developers". As the principal
stockholder, Lenat has the final say on what directions are supported. But the
complete Cyc system has over three dozen different inference engines, which are
based on a wide range of different principles.
Over the past 26 years, Lenat has hired (and fired) some of the best and
brightest logicians, linguists, computer scientists, and experts in various
fields, disciplines, and paradigms. Many of them have been very sensitive to
the theoretical issues, and they have designed inference engines that are as
technically respectable as any that have come from any academic department.
Some of the Cyc inference engines process subsets of logic that are eminently
decidable, others use an open-ended variety of heuristics, and others use very
"quick and dirty" methods of getting results for the "easy" cases.
IH> OWL's expressive power could, of course, be easily (indeed
> arbitrarily) extended if one were prepared to compromise on
> some or all of these design constraints.
I suspect that the word 'compromise' is being used as a criticism rather than
as a description of a desirable trait. Among the Cyc inference engines, some
are highly disciplined
tools that don't make any compromises with precision or decidability. But the
total Cyc system has to accept anything and everything that anybody might throw
at it.
In that sense, Cyc makes compromises in a positive way: it analyzes the problem
at hand in order to choose which of the many inference engines to use. In
cases of doubt, it can run more than one in parallel to see which one finishes
first.
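That "run several engines and take the first answer" idea can be sketched as a
small portfolio solver. This is only an illustration of the strategy, not Cyc's
actual architecture; all function names here are hypothetical.

```python
# Portfolio-style solving: run several inference procedures on the same
# problem in parallel and take whichever result comes back first.
# Both "engines" below are hypothetical stand-ins.
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def engine_fast_incomplete(problem):
    # Quick-and-dirty method: cheap, only reliable on "easy" cases.
    return f"fast:{problem}"

def engine_slow_complete(problem):
    # Disciplined method: slower but more thorough.
    return f"slow:{problem}"

def solve_portfolio(problem, engines):
    with ThreadPoolExecutor(max_workers=len(engines)) as pool:
        futures = [pool.submit(engine, problem) for engine in engines]
        done, not_done = wait(futures, return_when=FIRST_COMPLETED)
        for f in not_done:
            f.cancel()  # best effort; already-running work is abandoned
        return next(iter(done)).result()

answer = solve_portfolio("q1", [engine_fast_incomplete, engine_slow_complete])
print(answer)
```

Whichever engine finishes first determines the answer; the others are simply
discarded, which is the "compromise in a positive way" Sowa describes.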
PH> Ian is using "complete" here to mean "complete and decidable",
> which can be characterized as: if a sentence is a theorem, then
> the prover will tell you that (completeness), AND if it isn't a
> theorem, the prover will also tell you that it is not.
Yes, but I'll also add that a huge number of very practical problems are
handled by model checking rather than theorem proving.
The SQL WHERE clause, for example, uses the full power of FOL for queries,
constraints, and triggers. But it doesn't attempt to prove those statements.
Instead, it evaluates their truth value in terms of a model -- namely, the
current state of a relational DB. That evaluation is done in polynomial time
in the worst case, and in linear or logarithmic time in many of the most
common cases.
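The model-checking point can be made concrete: checking a universally
quantified constraint against the current database state amounts to searching
for a counterexample row, not constructing a proof. A minimal sketch (the
schema and data are invented for illustration):

```python
# Model checking with SQL: evaluate an FOL constraint against the
# current DB state by looking for a counterexample.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (name TEXT, dept TEXT, salary INT);
INSERT INTO emp VALUES ('ann','sales',50), ('bob','sales',40), ('eve','hr',60);
""")

# Constraint: for all x in emp with dept = 'sales', salary(x) < 55.
# SQL does not prove this; it checks the current model for a violation.
violations = conn.execute(
    "SELECT name FROM emp WHERE dept = 'sales' AND NOT (salary < 55)"
).fetchall()
constraint_holds = (violations == [])
print(constraint_holds)
```

The query scans (or uses an index on) the table, so its cost is bounded by the
size of the model, which is why this evaluation is so much cheaper than a
proof search.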
That is the advantage of having a single very expressive language such as CycL
(or Common Logic) and a method for determining which of the many inference
engines (or other tools) to use for any particular problem.
IH> In fact such reasoners are typically used in a way that is
> actually incorrect, in that failure to find an entailment is
> treated as a non-entailment, whereas it should be treated as
> "don't know".
PH> I don't think it is fair to say that they are *typically* used
> in this incorrect way. (?)
I agree with Pat. The difference between classical FOL and negation as failure
is very well understood by Lenat and the people he hired.
The people who don't understand that difference are congenitally incapable of
using *any* logic-based tool in a serious way -- that includes CycL, CL, OWL,
and even SQL.
(Please don't say that non-AI professionals can use OWL, because I've seen what
they've done and it proves my point. For the subset of OWL they actually use,
they'd be better off using simpler tools.
And yes, Adrian, your Executable English would be a good option.)
IH> I completely agree that being decidable is no guarantee that
> reasoning tools will always work well in practice. We can imagine
> a graph with complexity classes on the x-axis and robustness on
> the y-axis.
I think we all agree on that point.
IH> Increasing complexity inevitably means decreasing robustness.
> Robustness is a very important quality of reasoners from a user
> POV -- what it means is that small changes in the input (e.g.,
> the ontology) produce only small changes in the performance of
> the reasoner. We can think of undecidability as simply being
> a very high complexity class, i.e., one where we can expect
> relatively poor robustness.
Every sentence in that paragraph requires so many qualifications and caveats
that it is hopelessly misleading. First of all, every version of formal logic
from Aristotle's syllogisms to the present is notoriously brittle, and the
primary source of brittleness is *not* the complexity class. The major source
of brittleness is the definitions of terms.
Everybody from A to Z (Aristotle to Zadeh) has observed that words in ordinary
language don't have a one-to-one mapping to definitions in any formal logic.
URIs that link to formal definitions are irrelevant when the people who use the
terms don't know or understand the definitions. I'd give Cyc much higher marks
in addressing the issue of trying to determine (i.e., guess) the intended word
senses than systems (automated or manual) that map words to arbitrary URIs.
I would also question the term 'small changes', the definition of robustness in
terms of relating "small changes", and the assumption that OWL (or any formal
system from Aristotle to the present) contributes much, if anything, to the
solution.
And I repeat the point that decidability depends on what you do with the logic
rather than the expressive power of the logic.
For example, the SQL method of processing an FOL statement is more robust than
most theorem provers -- primarily because the SQL evaluation takes fewer steps
than most proofs.
IH> There are other users who have the opposite problem --
> they want/need a more expressive ontology language...
Nobody but a professional knowledge engineer knows what an ontology language
is, let alone what properties it should have.
Those people who have been exposed to talks about such things are hopelessly
confused.
IH> Bottom line: there is no "right choice" of ontology language.
I certainly agree with that statement. I'd add that only highly trained
experts (knowledge engineers, logicians, or computer
scientists) are capable of making such a choice.
I'll also claim that the Cyc software is far more capable of choosing an
appropriate subset of logic and an inference engine to process it than the
majority of people who have been exposed to lectures about OWL or any other
formal logic. (04)
From: Ian Horrocks [ian.horrocks@xxxxxxxxxxxxxxx] (05)
I want to respond to some of the minor technical issues raised by Pat, but as
these aren't very important I will come back to them later. The crucial point
is the one about decidability and complexity, as this is where there always
seems to be some miscommunication about the goals of and claims for OWL. I
completely agree that being decidable is no guarantee that reasoning tools will
always work well in practice. We can imagine a graph with complexity classes on
the x-axis and robustness on the y-axis. Increasing complexity inevitably means
decreasing robustness. Robustness is a very important quality of reasoners from
a user POV -- what it means is that small changes in the input (e.g., the
ontology) produce only small changes in the performance of the reasoner. We can
think of undecidability as simply being a very high complexity class, i.e., one
where we can expect relatively poor robustness. (06)
OWL also has very high complexity, although not as high as "undecidable". In
spite of this it has proven possible to develop OWL tools that work well in
typical cases -- this wasn't an accident, it was a design goal for the
language. This has been crucial to the adoption of such tools in practice --
most users expect/require the reasoner to be both fast and reliable (always
give an answer, and always give the right answer), and they are surprised, not
to mention indignant, if this is not the case. The high complexity inevitably
means, however, that this good behaviour is not always robust -- small changes
to an ontology can result in large changes in performance, and some ontologies
are still difficult or impossible to deal with. Developing new optimisations to
improve performance on such (classes of) ontologies is an ongoing (and
inevitably never-ending) battle. (07)
For some users, this lack of robustness is not acceptable, and this is why OWL
2 includes several profiles -- language subsets where reasoning has a lower
worst-case complexity. This is not just of theoretical interest -- the lower
complexity means that we can develop reasoners whose performance is much more
robust. (08)
There are other users who have the opposite problem -- they want/need a more
expressive ontology language. Of course they could move out of the FOL subset
that is OWL. The result will inevitably be that reasoner performance is even
less robust. Given the existing SOTA for FOL reasoning tools, the loss of
robustness w.r.t. typical OWL tools would be dramatic -- e.g., instead of the
reasoner occasionally failing to compute the answers to all subsumption
questions, it would invariably fail to do so. This may change with time, but it
is where we are now. Whether or not this is acceptable is a decision that only
users can make -- you pays your money and takes your choice. (09)
Bottom line: there is no "right choice" of ontology language. OWL is intended
to be a good compromise between expressive power and robust tool performance.
Perhaps more important than the choice itself was making *some* choice.
Standardisation has allowed for the development of a range of tools,
infrastructure and applications that could previously only have been dreamt of.
Hopefully all members of the ontology community can see this as a positive
development. (010)
 (011)
Response to some minor technical issues raised by Pat: (012)
When I said complete I really meant complete and terminating -- I was being
ruthlessly pragmatic and ignoring the fact that "false" answers could always be
returned if infinite resources were available. In fact most SOTA FO theorem
provers aren't even complete in this theoretical sense, because for efficiency
reasons they start discarding clauses when the clause set becomes
inconveniently large. In practice, this means that such reasoners are rarely
able to prove satisfiability (non-theorems) -- see, e.g., the experiments we
performed using Vampire (consistently the best performing FO theorem prover)
for OWL reasoning [1]. (013)
As for correctness, it simply isn't true that this is easy to prove. For
theoretical algorithms, soundness *is* typically easy to prove, where I use
soundness in the sense of returning only correct "true" answers to the problem
that the algorithm is "naturally" solving  which is satisfiability in the
case of model construction provers, and unsatisfiability (theorems) in the case
of deductive provers. It is much more difficult to prove that the reasoner is
correct if it says "false". In the case of model construction, for example,
this means proving that if there exists a model, then the algorithm will always
succeed in finding one. Proving termination, which I take to be essential for
correctness in case the logic is decidable, is even more difficult. Finally,
the algorithms used in practice are *much* more complex and include a wide
range of sophisticated optimisations; this makes it *much* more difficult to
prove that they are correct (or even sound). (014)
Regarding my claim that reasoners are typically used in a way that is actually
incorrect, to the best of my knowledge none of the incomplete reasoners in
widespread use in the ontology world even distinguish "false" from "don't know"
-- whatever question you ask, they will return an answer. Thus, in order to be
correct, applications would have to treat *every* "false" answer as "don't
know". I don't know of any application that does that. (015)
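The correction Horrocks describes -- downgrading an incomplete reasoner's
"false" to "don't know" -- is a one-line wrapper in principle. A toy sketch,
where the "reasoner" is a deliberately incomplete stand-in (it only finds
entailments that are literally asserted):

```python
# Wrapping an incomplete reasoner so only its "true" answers are trusted.
# entails_incomplete is a hypothetical stand-in, not a real reasoner API.
def entails_incomplete(kb, query):
    # Incomplete by construction: misses everything not explicitly stated.
    return query in kb

def correct_answer(kb, query):
    # A "true" from an incomplete-but-sound prover is reliable;
    # its "false" must be reported as "don't know".
    return "true" if entails_incomplete(kb, query) else "don't know"

kb = {"A subClassOf B"}
print(correct_answer(kb, "A subClassOf B"))   # "true"
print(correct_answer(kb, "B subClassOf A"))   # "don't know"
```

Horrocks's point is that deployed applications skip this wrapper and consume
the raw boolean, silently conflating "not proved" with "disproved".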
[1]
http://www.comlab.ox.ac.uk/people/ian.horrocks/Publications/download/2004/TRBH04a.pdf (016)
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontologforum/
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontologforum/
Unsubscribe: mailto:ontologforumleave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To join: http://ontolog.cim3.net/cgibin/wiki.pl?WikiHomePage#nid1J
To Post: mailto:ontologforum@xxxxxxxxxxxxxxxx (017)
