
Re: [ontolog-forum] Expressivity and Useability (was, Proceedings of ...

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Patrick Cassidy <pcassidy@xxxxxxxxxxxxxxxx>
Date: Sat, 21 Aug 2004 15:18:36 -0400
Message-id: <4127A00C.7060604@xxxxxxxxxxxxxxxx>
Concerning Adrian's comments:    (01)

> 
> 1.  If we make things very expressive, the known inference techniques 
> are either NP-complete or uncomputable.
> 
> 2.  People seem to have a hard time specifying all but the simplest 
> tasks error-free in full FOL.  Long chains of quantifiers are 
> particularly hard to write and read correctly.  Diagrammatic techniques 
> help with small examples, but we get lost in the spaghetti or in zooming 
> around on larger ones.
> 
> 3.  Even if a spec in some version of logic is correct, it may be hard 
> to follow the inferences that it makes, especially if they are expressed in a 
> machine-oriented notation.
> 
> 4.  The above points will likely place automatic inferences over the 
> future semantic web beyond the comprehension and control of computer 
> scientists, let alone business folks.
> 
> So, do we throw out RDF, OWL and logic, and start over?
> 
    I think it is important to distinguish clearly between the language
used to describe knowledge and the inferencing methods used to reason
about knowledge.  There is a relation, but the two are quite distinct.
    I believe that it is important for some applications -- especially
natural language understanding -- to use a maximally expressive
language to describe the concepts one is concerned about, so as
to restrict the interpretations of each specified concept to those
one intends and rule out others.  But using first-order logic to
specify meanings doesn't mean that one has to restrict the
reasoning to that supplied by a theorem prover.
     In order to do the kind of reasoning needed for NLU I think we
will need context-sensitive programs using the full power of some
standard programming language.  The control of inference required
to avoid non-terminating execution will have to be supplied
by the programs themselves, as with many other complex programming
tasks.  To the extent possible, the inferences that can be generated
will be restricted by the declarative logical specification, but
there will always be many possible inferences that are irrelevant to
the task at hand, and these will have to be avoided by heuristics
that can recognize relevance, at least probabilistically.
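
     To make that concrete, here is a minimal sketch in Python -- the
rule format, the relevance function, and all of the names are invented
for illustration, not taken from any existing system -- of the division
of labor I mean: the declarative rules fix what *can* be inferred, while
a step limit and a relevance score decide what actually *is* inferred.

RULES = [
    # "if x is a parent of y and y is a parent of z, x is a grandparent of z"
    ((("parent", "?x", "?y"), ("parent", "?y", "?z")),
     ("grandparent", "?x", "?z")),
]

def unify(pattern, fact, bindings):
    """Match one pattern against one ground fact, extending bindings."""
    if len(pattern) != len(fact):
        return None
    b = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):            # a variable
            if p in b and b[p] != f:
                return None
            b[p] = f
        elif p != f:                     # a constant that must match exactly
            return None
    return b

def match_all(premises, facts, bindings):
    """Yield every variable binding that satisfies all the premises."""
    if not premises:
        yield bindings
        return
    for fact in facts:
        b = unify(premises[0], fact, bindings)
        if b is not None:
            yield from match_all(premises[1:], facts, b)

def relevance(fact, goal_terms):
    """Crude stand-in for a real heuristic: favor facts mentioning a goal."""
    return 1.0 if any(t in fact for t in goal_terms) else 0.1

def forward_chain(facts, goal_terms, max_steps=100, threshold=0.5):
    facts = set(facts)
    for _ in range(max_steps):           # hard bound: never runs forever
        new = set()
        for premises, conclusion in RULES:
            for b in match_all(premises, facts, {}):
                c = tuple(b.get(t, t) for t in conclusion)
                if c not in facts and relevance(c, goal_terms) >= threshold:
                    new.add(c)
        if not new:                      # quiescence: nothing left to infer
            break
        facts |= new
    return facts

facts = {("parent", "abe", "homer"), ("parent", "homer", "bart")}
print(forward_chain(facts, {"bart"}))
# prints a set including ("grandparent", "abe", "bart"); with
# goal_terms={"marge"} the same inference would be scored irrelevant
# and skipped.

A real relevance heuristic would of course be far more elaborate, but
the shape of the control problem is the same.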
     For some tasks -- and business transactions may be one --
it may be possible to use knowledge expressed in the restricted
notation of a description logic.  When that is the case, there
may well be advantages to doing it that way, but the less
restricted alternatives should be available where needed.    (02)
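
     For instance (an invented business axiom -- the class and property
names are made up), a description-logic definition such as

     AcceptedOrder \sqsubseteq Order \sqcap (\exists paidBy . ValidCreditCard)

-- that is, an accepted order is an order paid by at least one valid
credit card -- stays within a decidable fragment, so a standard DL
classifier is guaranteed to terminate on it.  That guarantee is exactly
the advantage such restricted notations buy.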

   As to the question of how to make the representations easy
for users to work with:    (03)

> 
> 3.  we must tie machine oriented notations computationally to human 
> oriented notations
> 
> 4.  explanations, as close to English as we can make them, are going to 
> be essential if we are to have any idea what the future semantic web is 
> doing for (or against) us.
> 
> Here's a little example of a reasoning chain to try to illustrate the above.
> 
> . . .     (04)

     Protege is one tool that I think helps make ontologies comprehensible.
Other browsers and tools are also useful.  Some form of controlled English
(controlled natural language, generally) will also probably be very useful,
and I agree with Adrian that they should be available.  There may be more than
one variant of controlled English that will help casual ontology-builders
and domain experts enter or verify knowledge.  If done properly, such
a language should be easy enough that a newcomer can understand it after
less than an hour of familiarization and can enter simple assertions
after only a few hours of learning -- perhaps a little longer.
So several variants may all be useful and may harmlessly coexist, as they
will all translate into first-order logic.    (05)
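
     To illustrate (an invented sentence, not taken from any particular
controlled-English system), a statement like

     Every employee works for some company.

would translate mechanically into the first-order formula

     \forall x (Employee(x) \rightarrow \exists y (Company(y) \wedge worksFor(x, y)))

and any two controlled-English variants that yield the same formula
are, for reasoning purposes, interchangeable -- which is why their
coexistence is harmless.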

     Even so, specifying knowledge in any language -- by writing axioms,
using Protege, or creating database tables -- requires that the knowledge-enterer
have some familiarity with the domain and with the terms by which specific
concepts are labeled.  Knowing the unambiguous terms by which a concept is
referenced will always be a non-trivial task.  Simple bits
of knowledge may be entered by casual volunteers, or extracted by text
processing and verified by domain experts.  Interactive knowledge-entry
tools will help, but the complicated interrelations of complex concepts
and their components, and especially the specification of the meanings
of semantic relations, will probably always need
to be entered by people who have taken some time to learn how to
state them precisely -- in any formal language -- at least until
Doug Lenat or some alternative builds a machine that can do it on its
own.    (06)
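
     As one small illustration of what stating it precisely involves
(one hypothetical formulation): even so familiar a relation as partOf
needs axioms along these lines before a reasoner can use it --

     \forall x \forall y \forall z ((partOf(x, y) \wedge partOf(y, z)) \rightarrow partOf(x, z))
     \forall x \neg partOf(x, x)

-- and deciding whether transitivity and irreflexivity actually match
the sense one intends (is a doorknob part of the house, or only of the
door?) is precisely the judgment that takes training.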

     Pat    (07)

=============================================
Patrick Cassidy    (08)

MICRA, Inc.                      || (908) 561-3416
735 Belvidere Ave.               || (908) 668-5252 (if no answer above)
Plainfield, NJ 07062-2054    (09)

internet:   cassidy@xxxxxxxxx
=============================================    (010)

_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Subscribe/Unsubscribe/Config: 
http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx    (011)
