
Re: [ontolog-forum] mKR (was Thing and Class)

To: "[ontolog-forum] " <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Pat Hayes <phayes@xxxxxxx>
Date: Tue, 30 Sep 2008 12:48:53 -0500
Message-id: <6D7E1012-2A77-4167-9756-09349E02916C@xxxxxxx>

On Sep 30, 2008, at 11:55 AM, John F. Sowa wrote:

Pat and Chris,

Fundamental principle:  Any useful computational method for a
restricted language L can also be used when L is considered
a subset of a larger language, such as FOL.

True, but it needs to be qualified. The restricted method CAN be used, of course, but in practice it is only feasible if one knows that the logical problem one is tackling does in fact satisfy the constraints of the restricted language; and this may not be easy to compute. If there are many possible such restricted sublanguages (as there are of FOL: maybe several dozen by now, counting all the variants of the DLs), then it becomes infeasible to parse every piece of FOL to see whether it might fit into one or more of these syntactic restrictions. What is needed is that each piece of logic comes with a declaration of some kind which identifies which subset or fragment it falls into; and if one includes this declaration as part of the syntax, then the result is something more than FOL: it is a kind of FOL-plus-metadata hybrid.
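To make the point concrete, here is a toy sketch of what checking membership in one such restricted fragment, the Horn fragment, might look like. The clause encoding (strings for positive literals, ("not", p) tuples for negative ones) is invented for this sketch and is not any standard API:

```python
# Toy illustration: deciding whether a clause set falls into the Horn
# fragment, one of the restricted sublanguages of FOL mentioned above.
# Encoding (an assumption of this sketch): each clause is a set of
# literals; a bare string is a positive literal, ("not", p) is negative.

def is_horn(clauses):
    """A clause set is Horn if every clause has at most one positive literal."""
    for clause in clauses:
        positives = [lit for lit in clause
                     if not (isinstance(lit, tuple) and lit[0] == "not")]
        if len(positives) > 1:
            return False
    return True

# {p or not-q, not-p or not-r} is Horn; {p or q} is not.
print(is_horn([{"p", ("not", "q")}, {("not", "p"), ("not", "r")}]))  # True
print(is_horn([{"p", "q"}]))                                         # False
```

Even this check is cheap only because the fragment is syntactically simple; for dozens of candidate fragments, running every such test on every formula is exactly the infeasibility described above, which is why an explicit fragment declaration is preferable.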

That is a common technique used with Description Logics:  derive
a solution with a T-Box of DL statements, and use it with an A-Box
of statements in full FOL.  It is irrelevant whether the DL is
considered a separate language or a subset of the A-Box language.

It is relevant when these languages are transmitted across a network. If the A-Box language is the language of transmission, then the information that a statement belongs to the restricted T-Box fragment is lost.

In fact, the semantics of the full system presupposes the semantics
of both the T-Box and A-Box as consistent subsets.
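As a toy illustration of the T-Box/A-Box split under discussion (the two-concept hierarchy and the closure function below are invented for this sketch, not taken from any DL reasoner):

```python
# Sketch: terminological axioms (T-Box subsumptions, "C is-a D") are
# applied to assertional facts (A-Box instance assertions) to derive
# new instance assertions. Hypothetical example data.

tbox = {"Student": "Person", "Person": "Agent"}   # C is subsumed by D
abox = {("Student", "alice")}                     # alice is a Student

def classify(tbox, abox):
    """Close the A-Box under the T-Box subsumption hierarchy."""
    facts = set(abox)
    changed = True
    while changed:
        changed = False
        for concept, individual in list(facts):
            parent = tbox.get(concept)
            if parent and (parent, individual) not in facts:
                facts.add((parent, individual))
                changed = True
    return facts

print(classify(tbox, abox))
# includes ('Person', 'alice') and ('Agent', 'alice')
```

The point about transmission is visible even here: the derived facts alone do not record that the `tbox` entries came from a restricted terminological fragment.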

An even more common approach is to use a database (relational or OO)
with a knowledge base that has a more expressive logic.  Any result
derived by the DB is assumed to be true for the KB.  Again, that
method presupposes that the semantics of the DB is a subset of the
semantics of the full KB.

JFS>> The language in which a problem is stated has no effect on
  complexity.  Reducing the expressive power of a logic does
  not solve any problems faster; its only effect is to make
  some problems impossible to state.

PH> Well, no. It does both of these. It makes some problems
impossible to state, true. But it also can, and often does,
make it possible to solve the problems that it can state much
more quickly, because it reduces the size of the search space.
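A minimal sketch of the speed-up described here: entailment for definite (Horn) propositional clauses can be decided by simple forward chaining in time polynomial in the number of rules, whereas the corresponding question for unrestricted propositional logic is co-NP-hard. The rule encoding below is an invented toy, not any particular reasoner's API:

```python
# Forward chaining over definite clauses: rules are (body, head) pairs,
# where the body is a list of atoms; a fact is a rule with an empty body.

def forward_chain(rules, goal):
    """Return True if the definite-clause program entails `goal`."""
    known = set()
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return goal in known

rules = [([], "a"), (["a"], "b"), (["a", "b"], "c")]
print(forward_chain(rules, "c"))  # True
print(forward_chain(rules, "d"))  # False
```

The search space never branches: each pass either adds an atom or terminates, which is precisely the kind of restriction-bought efficiency at issue.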

I agree that for many kinds of problems, multiple solutions are
possible, and that the larger language may allow more solutions.
However, many computational methods are designed to search for
a solution that has a "minimal model" and ignore the others.

But that gives a different semantics. Minimal models change the meaning of many logical primitives, most notably negation; and they give non-monotonic inference behavior. So this is no longer FOL. 
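The non-monotonicity can be shown in a few lines. Under minimal-model (closed-world) semantics, "not p" succeeds whenever p is absent from the model, so adding a fact can retract a previously drawn conclusion; classical FOL never behaves this way. The predicate names below are the usual textbook toy, not drawn from any real knowledge base:

```python
# Negation as failure: 'not atom' holds iff atom is not provable
# from the current fact set (closed-world assumption).

def holds_naf(facts, atom):
    """True iff `not atom` succeeds under negation as failure."""
    return atom not in facts

facts = {"bird(tweety)"}
# Closed-world query: is tweety NOT a penguin? Yes, absent evidence.
print(holds_naf(facts, "penguin(tweety)"))  # True

facts.add("penguin(tweety)")
# Adding information withdraws the earlier conclusion: non-monotonic.
print(holds_naf(facts, "penguin(tweety)"))  # False
```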

If the solutions derived with the restricted language L are
acceptable, the same solutions will be derived by the same methods
with the same speed when L is considered a subset of the larger
language.

As pointed out, a restriction to minimal models is more than simply a search space restriction: it changes the language. 

CM> Moreover, even if it were true that the complexity of a problem
is unaltered by the language in which it is stated, there is still
an important advantage to working in a decidable framework (when
complexity matters), viz., obviously enough, you know that any problem
you can state in the framework is, at least, decidable.  You thus get
an upper bound on complexity, for free, that you don't get in general
working in an undecidable framework like full FOL.

I completely agree.  But note that the complexity result is based
on the *algorithm*, independent of whether the algorithm is applied
to a restricted language L or to L considered as a subset of FOL.

Of course, but that is disingenuous, because some algorithms are incorrect for some logics. Minimal-model semantics is a good example. Prolog is highly efficient, but it buys that efficiency by being invalid when considered as an FOL reasoner.
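One concrete instance of that invalidity (a deliberately simplified sketch, not real Prolog internals): standard Prolog unification omits the occurs check, so a variable unifies with a term containing that very variable, producing a cyclic binding that corresponds to no first-order term. The term encoding below is an assumption of this sketch:

```python
# Terms: a variable is a string; a compound term is a tuple
# (functor, *args). Hypothetical encoding for illustration only.

def occurs_in(var, term):
    """Does `var` occur anywhere inside `term`?"""
    if term == var:
        return True
    if isinstance(term, tuple):
        return any(occurs_in(var, arg) for arg in term[1:])
    return False

def unify_var(var, term, occurs_check=True):
    """Try to bind `var` to `term`; return the binding or None."""
    if occurs_check and occurs_in(var, term):
        return None          # sound: X = f(X) has no first-order unifier
    return {var: term}       # Prolog's default: accept a cyclic binding

print(unify_var("X", ("f", "X"), occurs_check=True))   # None
print(unify_var("X", ("f", "X"), occurs_check=False))  # {'X': ('f', 'X')}
```

Skipping the check makes each unification step cheaper, which is part of how Prolog buys its speed; the price is exactly the unsoundness, as an FOL reasoner, noted above.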

I strongly endorse the very common technique of using multiple
inference engines for different purposes.

So do I, but one needs to consider the consequences carefully :-)


As an example of the approach I recommend, I will cite, once more,
the technique that Bill Andersen and his group developed.  See
paragraph 3 of Section 6 from "Fads and Fallacies about Logic."
(And by the way, Jim Hendler, with whom I often disagree more than
I've ever disagreed with Pat and Chris, liked that paper very much
-- he was the editor of the journal in which it was published.)


Excerpt from http://www.jfsowa.com/pubs/fflogic.pdf

6. Using Logic in Practical Systems

The hardest task of knowledge representation is to analyze knowledge
about a domain and state it precisely in any language. Since the 1970s,
knowledge engineers and systems analysts have been eliciting knowledge
from domain experts and encoding it in computable forms. Unfortunately,
the tools for database design have been disjoint from expert-system
tools; they use different notations that require different skills and
often different specialists. If all the tools were based on a common,
readable notation for logic, the number of specialists required and the
amount of training they need could be reduced. Furthermore, the domain
experts would be able to read the knowledge representation, detect
errors, and even correct them.

The first step is to support the notations that people have used for
logic since the middle ages:  controlled natural languages supplemented
with type hierarchies and related diagrams. Although full natural
language with all its richness, flexibility, and vagueness is still a
major research area, the technology for supporting controlled NLs has
been available since the 1970s. Two major obstacles have prevented such
languages from becoming commercially successful:  the isolation of the
supporting tools from the mainstream of commercial software development,
and the challenge of defining a large vocabulary of words and phrases by
people who are not linguists or logicians. Fortunately, the second
challenge can be addressed with freely available resources, such as
WordNet, whose terms have been aligned to the major ontologies that are
being developed today. The challenge of integrating all the tools used
in software design and development with controlled NLs is not a
technical problem, but an even more daunting problem of fads, trends,
politics, and standards.

Although controlled NLs are easy to read, writing them requires training
for the authors and tools for helping them. Using the logic generated
from controlled NLs in practical systems also requires tools for mapping
logic to current software. Both of these tasks could benefit from
applied research:  the first in human factors, and the second in
compiler technology. An example of the second is a knowledge compiler
developed by Peterson et al. (1998), which extracted a subset of axioms
from the Cyc system to drive a deductive database. It translated Cyc
axioms, stated in a superset of FOL, to constraints for an SQL database
and to Horn-clause rules for an inference engine. Although the knowledge
engineers had used a very expressive dialect of logic, 84% of the axioms
they wrote could be translated directly to Horn-clause rules (4667 of
the 5532 axioms extracted from Cyc). The remaining 865 axioms were
translated to SQL constraints, which would ensure that all database
updates were consistent with the axioms.
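The split described in this paragraph, Horn-translatable axioms to an inference engine and the remainder to database constraints, might be sketched as follows. All names, rule syntax, and the SQL schema here are invented for illustration and are not taken from Peterson et al. (1998):

```python
# Hedged sketch of a knowledge-compiler split: a Horn-translatable
# axiom becomes a rule for the inference engine; a purely restrictive
# axiom is compiled to an SQL CHECK constraint for the database.

def compile_axiom(axiom):
    """Route an axiom to a rule string or an SQL constraint string."""
    kind, payload = axiom
    if kind == "horn":
        body, head = payload
        return f"{head} :- {', '.join(body)}."         # rule for the engine
    if kind == "constraint":
        table, column, condition = payload
        return (f"ALTER TABLE {table} ADD CONSTRAINT chk_{column} "
                f"CHECK ({condition});")               # SQL for the database
    raise ValueError(f"unknown axiom kind: {kind}")

print(compile_axiom(("horn",
                     (["parent(X,Y)", "parent(Y,Z)"], "grandparent(X,Z)"))))
print(compile_axiom(("constraint", ("employee", "age", "age >= 0"))))
```

The design choice mirrors the one in the excerpt: rules drive deduction at query time, while constraints are enforced eagerly on every database update.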

In summary, logic can be used with commercial systems by people who have
no formal training in logic. The fads and fallacies that block such use
are the disdain by logicians for readable notations, the fear of logic
by nonlogicians, and the lack of any coherent policy for integrating all
development tools. The logic-based languages of the Semantic Web are
useful, but they are not integrated with the SQL language of relational
databases, the UML diagrams for software design and development, or the
legacy systems that will not disappear for many decades to come. A
better integration is possible with tools based on logic at the core,
diagrams and controlled NLs at the human interfaces, and compiler
technology for mapping logic to both new and legacy software.


Peterson, Brian J., William A. Andersen, & Joshua Engel (1998)
"Knowledge bus: generating application-focused databases from large
ontologies," Proc. 5th KRDB Workshop, Seattle, WA.

Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx

IHMC                                     (850)434 8903 or (650)494 3973   
40 South Alcaniz St.           (850)202 4416   office
Pensacola                            (850)202 4440   fax
FL 32502                              (850)291 0667   mobile
phayesAT-SIGNihmc.us       http://www.ihmc.us/users/phayes

