
Re: [ontolog-forum] Quote for the day -- KR and KM

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Ed Barkmeyer <edbark@xxxxxxxx>
Date: Thu, 06 Jan 2011 13:12:02 -0500
Message-id: <4D2605F2.8050802@xxxxxxxx>

John F. Sowa wrote:
> EB:
>   
>> are you disagreeing with the assertion that the translation is
>> difficult/impossible?
>>     
>
> The question is translation from what to what.  To translate any
> of the popular ontology/modeling languages to Common Logic (or
> the IKL extension) is neither difficult nor impossible.  In fact,
> it's hard to find any common notation that *cannot* be translated.
>

Then we disagree.  It is my impression that capturing the semantics of a 
language like ORM in CLIF requires some significant effort to develop a 
set of enabling relations.  Terry Halpin's thesis makes explicit use of 
proposition nominalization (IKL 'that') with the appropriate constraints 
(Henkin) that allow it to have a first-order model.  But that doesn't 
make the first-order model easy to formulate or easy to comprehend.  
Further, all of the closed-world-assumption machinery that permeates the 
semantics of conceptual schema propositions strikes me as needing 
special support in a FOL rendition.
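(For illustration, the usual form of that "special support" is something 
like Clark completion: under the closed-world reading, the rules defining 
a predicate must be strengthened to a biconditional before a first-order 
reasoner can draw the intended negative conclusions.  A minimal sketch, 
with generic predicates not taken from ORM or any particular schema:

```latex
\textit{Rules:}\quad \forall x\,(q(x) \rightarrow p(x)), \qquad
                     \forall x\,(r(x) \rightarrow p(x))
```

```latex
\textit{Completion:}\quad \forall x\,\bigl(p(x) \leftrightarrow q(x) \lor r(x)\bigr)
```

Only after the completion step does $\neg p(a)$ follow from 
$\neg q(a) \wedge \neg r(a)$; the plain rules never license it.  That 
extra step is exactly the kind of enabling machinery a faithful FOL 
rendition has to carry along.)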

I find it much easier to agree that it is possible to construct a formal 
semantics for a conceptual schema language, and not to be deeply 
concerned about its relationship to a first-order semantics.  In a 
similar way, there are formal semantic models for logic programming 
languages, and few people really care how they relate to first-order 
semantics.  The purpose of these languages is to enable the use of 
particular kinds of reasoning engines to solve particular application 
problems.  In general, one would not use a first-order reasoner like 
Otter/Prover9 to determine database consistency/integrity, nor would one 
transform a Prolog program into first-order axioms.  The theoretical 
foundation may tell me that such a transformation is possible, but the 
result has no practical use.
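(The gap can be seen in miniature.  A database engine answers queries 
against a closed information base, where absence means falsity, while a 
first-order reasoner answers only what the axioms entail.  A toy sketch 
in Python, with all facts and predicate names invented for illustration 
-- this is not any production reasoner:

```python
# Toy illustration of the closed-world vs. open-world gap.
# All facts and names here are invented for the example.

facts = {("parent", "ann", "bob")}  # the entire recorded information base

def holds(fact):
    """True iff the fact is explicitly recorded."""
    return fact in facts

def cwa_query(fact):
    # Closed world (database/Prolog reading): absence means false.
    return holds(fact)

def owa_query(fact):
    # Open world (first-order reading): absence means unknown,
    # since nothing here entails the fact beyond the recorded base.
    return True if holds(fact) else "unknown"

q = ("parent", "ann", "carol")
print(cwa_query(q))   # False under the closed-world reading
print(owa_query(q))   # "unknown" under the open-world reading
```

The same query gets a definite "no" from one semantics and a shrug from 
the other, which is why a straight translation of the schema language 
does not carry the reasoning behavior with it.)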

My tenet in all of this discussion is that Sjir's desire to compare 
conceptual schema capabilities with first-order ontology capabilities 
per se is apples and oranges.  But Sjir does raise a valid question:  
Show us an explicit business application that a comprehensible 
first-order ontology, or an OWL/DL model, and its associated reasoning 
engines can solve and that a conceptual schema and its associated 
reasoning engines cannot solve.  That is an entirely different 
question, and it is one we must answer if we are to sell ontology as a 
true improvement on 'information modeling'.

What we are seeing in industry to some extent is that people who never 
understood conceptual modeling, and people who are just being introduced 
to the idea, are being taught that OWL is a means of capturing the 
concepts you meant in your XML schema, and it frees you from having to 
describe relationships by tree structures and untyped pointers.  That is 
a great step forward.  And the fact that they are actually rediscovering 
the state of knowledge engineering in 1990 should not be a cause for 
concern (except that education in 'computer science' allows this kind of 
ignorance to occur regularly).  It will gradually sell the concept 
'ontology' in the industrial marketplace, and that is good.  Yes, John 
will be disappointed/annoyed that this new generation of information 
modelers is hallowing OWL, but it is much better than the last 10 years, 
in which information modelers used XML Schema (thereby repeating the 
ASN.1 experience of the 1980s, which fortunately got less press).  The 
greater danger is that in 2012 some modernized version of a crude 
modeling practice from 1972 will become the latest-and-greatest W3C 
standard, and OWL will be denigrated.

The DAMA people discovered that conceptual modeling only ever influenced 
a small group of expert data analysts in the 80s and 90s.  So they 
renamed it "business modeling" and "business rules" to get management 
attention, thereby increasing the community by one order of magnitude, 
from insignificant to barely visible.  Consultants know about these 
technologies, and they use them in executing the services they provide, 
but they are selling their services, not the technology.  Sjir is 
indirectly asking us to avoid the same fate: to know what it is we 
can do for industry, and to sell that as the technological advantage.

> But I would agree that translating one special-purpose notation
> to another special-purpose notation may be difficult or impossible.
>

That is not the issue.

> But if you translate both to CL, you can state rules in CL for
> relating their common subsets and providing ways of accommodating
> those aspects that don't have one-to-one translations.

Yes, you can construct bizarre first-order ontologies to convey 
non-first-order concepts.  But it is nearly impossible to use a 
first-order reasoner to do anything useful with ontologies like that.  
It is usually better to lose the concepts you can't easily express.  The 
point is that we don't ever want to suggest that this theoretical 
possibility is helpful in solving real application problems -- it is 
easier to use the tools for which that kind of 'ontology' was designed.

> I discuss
> the issues about "knowledge compilers" in the following article:
>
>     http://www.jfsowa.com/pubs/fflogic.pdf
>     Fads and fallacies about logic
>
>   
>> The main difference, however, is that these conceptual schema
>> languages have a semantic model based on a closed world assumption
>> -- that the information base is a 'model', not just the rest of
>> the ontology -- and in most cases they have some intrinsic notions
>> that are non-monotonic.
>>     
>
> There are many different, but related ways of handling non-monotonic
> and the CW and OW assumptions.  My recommended version is one that
> Alan Robinson (and others) proposed many years ago:  treat them
> as theory revision methods.  See below for an explanation.
>

More of the same.  The semantics are importantly different, but we have 
theoretical work-arounds.  Yes, we do, but we don't apply reasoning 
engines to the theoretically transformed models to get results.  This is 
academic logician stuff, not practical knowledge engineering.

> EB:
>   
>> [The ANSI/SPARC conceptual schema] was a technical architecture
>> for database specification.  All the rest is window dressing.
>>     
>
> Some DB people had a very narrow view, but many others had a much
> broader view.  During the 1980s, I participated in an IFIP working
> group on databases chaired by Robert Meersman.  That group included
> some dedicated logicians and AI people, and we organized a series of
> conferences on Data Semantics that invited speakers like Dana Scott,
> John McCarthy, Ray Reiter (and even me).  Many people in that group
> participated in writing the 1987 ISO report on the conceptual schema.
>

Yes.  That is, a different group of people produced a different 
technical report that described conceptual modeling.

I stand by my statement:  the 3-schema architecture is about data 
systems engineering, not conceptual modeling.  In the 3-schema 
architecture, the term "conceptual schema" means the database schema 
that integrates all the application views and describes the logical 
(relational) structure of the actual database.
This contrasts with "external schemas", which are "views" of the 
database that are presented to specific applications, and the "internal 
schema", which is the technical data structures that implement the 
"conceptual schema".  To borrow from Jack Ring, these two uses of the 
string "conceptual schema" are homographs that have some overlap in meaning.

The ISO Reference Model for Data Management (ISO 10032:1993?) 
generalized the idea of "conceptual schema" to be what ISO TR9007 means 
by the term -- a form of ontology divorced from particular data 
architectures.  The RMDM idea is that the integration of viewpoint 
models is conceptual and results in a conceptual model that is 
unburdened by concerns of data representation, organization, and access 
efficiency.  The implementation of the "integrating conceptual model" 
can be a single "conceptual schema" of the ANSI kind, in which case, 
there is a derivation relationship that should be formally documented.   
But it could also be a set of databases, each having its own "conceptual 
schema", which is treated as a view of the integrating conceptual 
model.  In such a case, there is a formal mapping from the conceptual 
model to the database conceptual schemas, and that mapping supports the 
operations of distributed database tooling.  (NIST and U. Florida built 
one of those in the 1980s.)

So John is correct that the "conceptual schema" idea from the ANSI 
3-schema architecture did not satisfy the ISO folk and ISO altered the 
meaning of the term to a definition that was more acceptable to John and 
the ANSI/SPARC DBSSG.  And OBTW, the ISO TR9007 group was the successor 
to the ISO group that produced TR8002 (with the same title) in 1984. 
Sjir Nijssen was one of the (leading) members of the earlier ISO group.

(My business has been information technology standards for 30 years, and 
the meaning of a term in a given standard is important to us picky 
standards people.  One would think, however, that the meaning of a term 
in context would be very important to knowledge engineers in general.)

> JFS:
>   
>>> No matter how big the upper level may be, it can only contribute
>>> a small part of the total that is needed for any significant application.
>>>       
>
> EB:
>   
>> I don't know whether we agree here or not.  If the application ontology
>> is the "lower level", then it can be pretty small.  Most of those I have
>> had to build have had about 150 classes and 300 properties.  The problem
>> is that 120 of the 150 classes are primitive.
>>     
>
> That depends on how much of the semantics you intend to specify.
> I have always maintained that a great deal of the semantics needed
> for interoperability can be defined with a type hierarchy and very
> few axioms.  Two programs might use the same data in very different
> ways, and neither one knows anything about what the other one does
> with the data.
>

The problem is knowing whether it is the same data, and the taxonomic 
approach only works if the communicating applications agree on the 
taxonomy.  I agree that formal statements of the properties that A 
assigns to X and the properties B assigns to X may not have much 
overlap.  But there must be some overlap, else there is no reason for A 
and B to use the same objects.  So the two have to agree on a set of 
common properties and those properties have some axiomatic nature.  Now, 
it may be that each level of John's taxonomy associates a single 
property to the instances of the classifier it defines, but even then 
you have at least as many properties as there are classifiers.  As I 
said, in my experience, the ratio is about 2-to-1.  On the other hand, a 
full-blown supply-chain ontology, a la AIAG MOSS, has 500 classes and 
2000 properties -- the difference is that some of the classifiers 
(party, order, item, shipment) can play a lot of roles and have a lot of 
relevant properties.

> EB:
>> To make the ontology more useful, you need to produce definitions
>> of most of the classes and properties, and that requires a much
>> richer and clearly founded middle layer.  I see my ontology as
>> a hut with multiple levels of sub-basement.
>>     
>
> Yes.  And all those levels for every application can require
> a very large amount of specification that can overwhelm what
> is in the hut.
>

Yes.  We agree.

On a historical note, John wrote:

> My (personal) goal had always been to focus on the semantics in a form
> that could support any and every special-purpose notation.  When I
> first started attending the X3H4 meetings, I was working at IBM, and
> I avoided any statements that conflicted with official IBM policy.
>

ANSI X3H4 ("Data Dictionaries and Models") was one of the examples of 
owner interference with standardization of a conceptual schema 
language.  And the ISO activity that it "tagged" was a UK-originated 
proposal for representing a "conceptual schema" (of the ANSI kind) as an 
SQL model. Many members of that committee were later involved in the OMG 
Meta-Object Facility standard of 1998 -- the four MOF meta-levels come 
from the ANSI data dictionary standard X3.138.  I think that John and I 
would agree that that body, as a whole, aimed low.  Tammy Kirkendall was 
the NIST lead, and she and I (re)wrote the ANSI X3.138 data dictionary 
exchange file specification, because it used ASN.1 (an XML precursor), 
and I was the NIST expert on ASN.1.

> But when I participated in the conceptual schema subgroup, I was
> actively proposing a logic-based semantics in a form that could
> support any and every vendor's syntax.  In 1992, I persuaded
> Mike Genesereth and Richard Fikes (the authors of the KIF report)
> to attend the SC32 meeting in Florida to begin a project on
> developing parallel standards for KIF and conceptual graphs.

And I was the NIST principal in X3T2 ("Information exchange standards"), 
which is where Mike and the would-be KIF standard went. X3T2 in 1990 was 
the scene of the failure to standardize a form of NIAM.  X3T2 was later 
absorbed into INCITS L8 ("Metadata repositories"), which is now the U.S. 
"TAG" for the ISO committee that owns Common Logic, and other unrelated 
work.

This is as much as I am willing to do of the Maurice Smith "you can't 
tell the standards players without a program" talks.

And one other bit:  In 1984?, I took an actual course in information 
modeling from CDC, and it was my first real encounter with the 
technology.  The guy who taught it was Sjir Nijssen!  Suffice it to say 
that John, Sjir and I have been on collision courses for years.   :-)

-Ed


-- 
Edward J. Barkmeyer                        Email: edbark@xxxxxxxx
National Institute of Standards & Technology
Manufacturing Systems Integration Division
100 Bureau Drive, Stop 8263                Tel: +1 301-975-3528
Gaithersburg, MD 20899-8263                Cel: +1 240-672-5800

"The opinions expressed above do not reflect consensus of NIST, 
 and have not been reviewed by any Government authority."


_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx
