
Re: [ontolog-forum] Early use of the word 'ontology' in AI

To: "'[ontolog-forum] '" <ontolog-forum@xxxxxxxxxxxxxxxx>, "Barkmeyer, Edward J" <edward.barkmeyer@xxxxxxxx>
From: "Christopher Spottiswoode" <cms@xxxxxxxxxxxxx>
Date: Wed, 6 Nov 2013 12:27:31 +0200
Message-id: <001001cedada$cef98db0$6ceca910$@metaset.co.za>
Ed, many thanks for that "treatise" with its very necessary historical
perspectives.  Hopefully there'll be occasion to explore them further once I
have finally completed the brief exposé of the ontological commitments that I
believe are usefully explicit in what I shall be writing today on the subject
of the wider and still unexploited roles of "ontology".    (01)

Gratefully...
Christopher    (02)

-----Original Message-----
From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx
[mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Barkmeyer,
Edward J
Sent: 05 November 2013 20:01
To: [ontolog-forum] 
Subject: Re: [ontolog-forum] Early use of the word 'ontology' in AI    (03)

The mind reels.      (04)

The relationship between logical theories and the "real world" is based on
the phenomenon of "ontological commitment" -- the decision to take certain
metaphysical ideas to be true.  It follows that a good formal ontology (in
the knowledge engineering sense) is clear about what those commitments are.
Pat's papers were very carefully based on a specific set of such
commitments.      (05)

One of the problems with many "ontologgers" is that they have no idea when
they are even making such commitments.  They don't ask themselves whether
there are other possible interpretations of their observations; they just
write down what they think they see.    (06)

With respect to selling the idea of "AI ontology" to skeptics, I stated my
position at one of the last DAML meetings:  The development of "ontologies"
is just an outgrowth of "information modeling" which is itself an outgrowth
of "data modeling".  This is evolutionary in the usual way of engineering
disciplines -- first you figure out how to do something useful that works
reliably over many variants, then you develop the theory for why that works.    (07)


[Begin unnecessary treatise]    (08)

Data modelers of the late 1970s and early 1980s were trying to make programs
and databases that "worked the first time", by having thought through the
problem of how to represent the world of interest in data forms BEFORE
trying to write most of the code.  So they abstracted the thinking from the
code per se, but they ended up capturing that model of the world ONLY in some
implementation language form.    (09)
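
(To make that concrete, here is roughly what "capturing the model only in
implementation form" amounts to -- an invented toy schema, sketched with
Python's built-in sqlite3 rather than a DBMS of the period.  The decisions
about the world survive only as column types and constraints:)

import sqlite3

# Toy example: the domain assumptions live only in the DDL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE department (
    dept_id INTEGER PRIMARY KEY,
    name    VARCHAR(40) NOT NULL   -- 'names are short strings'
);
CREATE TABLE employee (
    emp_id  INTEGER PRIMARY KEY,
    name    VARCHAR(40) NOT NULL,
    -- 'every employee works in exactly one department':
    dept_id INTEGER NOT NULL REFERENCES department(dept_id)
);
""")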

Information modelers of the 1980s and 1990s were trying to represent the
world of interest in a way that was independent of the implementation
language, and thus unburdened by its noisy pragmatic requirements.  But they
did not isolate their models from what OMG calls the "platform class" -- the
kind of tooling they would use.  That is why people who design relational
databases and people who design XML data structures and people who design
C++/Java programs came up with different models of the same problem space.
The early and most successful group was designing databases, the later group
was designing object-oriented programs, and the final (and least educated)
group was designing XML schemas.  And every <expletive> one of them insisted
that their models, which were influenced, even transmogrified, by their
implementation paradigm, were "natural".   With the exception of ORM and
Grady Booch's work, none of them had a formal theoretical foundation, as
distinct from a so-called "formal semantics".  The model ultimately meant:
this (presumed) behavior of the world of interest will be captured in an
implementation in this way.  But these approaches were still a great leap
forward -- they had the advantage of eliminating a lot of implementation
noise in capturing and *presenting* an understanding of the domain.    (010)
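
(Here is the same invented toy example shaped by three different platform
classes, as a rough Python sketch rather than anyone's actual model.  All
three "say" that Ada works in Research, but each is bent to its
implementation paradigm:)

from dataclasses import dataclass

# Relational habit: flat rows, identity via keys, relationships via foreign keys.
employee_row   = {"emp_id": 17, "name": "Ada", "dept_id": 3}
department_row = {"dept_id": 3, "name": "Research"}

# Object-oriented habit: identity via object references, behaviour attached later.
@dataclass
class Department:
    name: str

@dataclass
class Employee:
    name: str
    department: Department    # a direct reference, no foreign key in sight

research = Department(name="Research")
ada = Employee(name="Ada", department=research)

# XML-schema habit: a containment hierarchy in which the department "owns" its employees.
department_doc = {
    "department": {
        "name": "Research",
        "employees": [{"name": "Ada"}],
    }
}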

In the same time frames, part of the AI world developed parallel modeling
approaches, based on different intended implementation mechanisms, notably
"frame logics" and "description logics".  The difference was that these
approaches had some formal theoretical foundation at their heart, because
the implementation mechanisms did.  "Frame logic" models are object models,
to which a reasoning approach, rather than just a collection of subroutines,
is attached.  Description logic models are information models firmly founded
in set theory and restricted by the requirements of a particular reasoning
algorithm.  These reasoning approaches are simply different implementation
paradigms.  BUT they are grounded in a formal theory, rather than just a
reliable engineering practice, and that theory is the basis for the
implementation paradigm.  That is the difference.      (011)
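
(A rough sketch of that set-theoretic grounding, using an invented toy
interpretation in Python: classes denote subsets of a domain, roles denote
sets of pairs, and subsumption and restrictions reduce to ordinary set
operations:)

# One toy interpretation; the individuals, classes and role are invented.
domain = {"rex", "felix", "tweety", "daisy"}

# Classes are interpreted as subsets of the domain.
Dog    = {"rex"}
Mammal = {"rex", "felix", "daisy"}

# A role (property) is interpreted as a set of pairs of individuals.
has_owner = {("rex", "daisy"), ("felix", "daisy")}

# In a given interpretation, "Dog is subsumed by Mammal" is just subset inclusion.
assert Dog <= Mammal

# The existential restriction (EXISTS has_owner.Mammal) denotes the set of
# individuals with at least one has_owner filler that is a Mammal.
restriction = {x for x in domain
               if any(s == x and o in Mammal for (s, o) in has_owner)}
assert Dog <= restriction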

OWL is not a great leap forward.  It is the next step in learning how to
model the world in a way that can be used in a number of reliable engineering
practices, without getting deeply into the nuances of those practices.  OWL
models are not "natural" either -- you can't say anything the extended
tableaux reasoners cannot handle.  But they are not formally interpreted by
engineering practice -- the formal interpretation is a mathematical logic
theory.  To distinguish this kind of approach from the assorted forms of
"information modeling" of the 1980s and 1990s, we use a new term "ontology".
But it is just the third stage of evolution in modeling a world of interest
for the purpose of automating some tasks in that world.    (012)

As Natasha Noy once observed, a great many OWL models are just UML models.
Most of the classes are primitive, which greatly limits the value of the
reasoning algorithms associated with description logic.  All that means is
that the would-be knowledge engineers are just learning how to do
information modeling, and using OWL for the purpose (probably because it is
a W3C standard, i.e. comes from the acknowledged source of all useful
references for software engineering in the 21st century, as every illiterate
software engineer knows).  In the last 5 years, the UML folk have come up
with formal logic models for UML interpretation as well (with the probable
effect of invalidating the engineering interpretations of many existing UML
models).  This gives us an object-modeling language that has a formal
foundation, much like the information modeling language ORM (in 1990).  The
difference here is that the formal foundation does not lead to the use of
the model for automated reasoning.  OWL makes it possible to do that, even
if many of its users are just learning to build information models, and thus
do not make reasoning models in OWL.  (Presumably their intended platform
class is different.)    (013)
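
(The primitive-vs-defined point can be made concrete with a few triples --
an invented example, written here with the rdflib library; an actual DL
reasoner such as Pellet or HermiT would be needed to see the resulting
inferences.  The primitive class gives such a reasoner nothing to work with;
the defined class does:)

from rdflib import Graph, Namespace, BNode
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.hasChild, RDF.type, OWL.ObjectProperty))

# Primitive class: only a necessary condition (rdfs:subClassOf).  A reasoner
# can never conclude that something IS an ex:Parent; membership must be asserted.
g.add((EX.Parent, RDF.type, OWL.Class))
g.add((EX.Parent, RDFS.subClassOf, EX.Person))

# Defined class: necessary AND sufficient conditions via owl:equivalentClass.
# Anything with at least one ex:hasChild that is a Person can now be
# classified as an ex:ParentDefined automatically.
restriction = BNode()
g.add((restriction, RDF.type, OWL.Restriction))
g.add((restriction, OWL.onProperty, EX.hasChild))
g.add((restriction, OWL.someValuesFrom, EX.Person))
g.add((EX.ParentDefined, RDF.type, OWL.Class))
g.add((EX.ParentDefined, OWL.equivalentClass, restriction))

print(g.serialize(format="turtle"))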

[end unnecessary treatise, which is probably inaccurate in several areas
that John Sowa will point out]    (014)

The gist of this treatise is that "ontologies" are just somewhere on the
upper end of an evolutionary curve in capturing the concepts in a world of
interest in a machine-readable form, for the purpose of communicating among
the stakeholders and automating some process.  It is a part of the
evolutionary process of developing a "computer science" -- a formal
theoretical basis for software engineering.  It is the current refinement of
"world modeling" technology after 50 years of development, and practitioners
use it with various levels of skill.    (015)

-Ed    (016)

P.S. Pat and Chris agreed on the principal reason why people are
legitimately suspicious of the latest IT buzzwords (and "ontology" is among
them).  As long as press and funding are based on buzzwords rather than
results, the whole spectrum of professionals, from experts to nincompoops,
will very quickly use them.  The first step in vetting a speaker is to ask
him to define the term.      (017)


> -----Original Message-----
> From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum- 
> bounces@xxxxxxxxxxxxxxxx] On Behalf Of Christopher Spottiswoode
> Sent: Tuesday, November 05, 2013 10:57 AM
> To: '[ontolog-forum] '
> Subject: Re: [ontolog-forum] Early use of the word 'ontology' in AI
> 
> John,
> 
> Thanks for the definitions of terms around the word "ontology".
> 
> But may I expand on my request in the final paragraph of my last email?
> 
> I had said:
> " I am busy promoting ontology in a different forum at the moment.  
> The word seems to have some attraction there but I have discovered 
> that, like "semantic web", "ontology" is a sure turn-off out there to 
> a significant degree.  So I am interested in both the history of the 
> word and the current experience of Ontolog folks in this labelling matter."
> 
> On the historical issue, though, Pat did say (in the quote in my 
> email) that "I wouldn't be [in favor of introducing a new term] if the 
> term really were new (and I resolutely ignored it when it was first 
> introduced) but its no longer new, [...]".  That certainly seems to be 
> an insistence that he was not the first to introduce the term.
> 
> Then, the quote you attribute to Pat in your email below was in fact from me.
> It was my own tentative exploration for a reading that rendered Pat's 
> statements consistent.  I could surely learn from any attempt by this 
> Forum to spell out definitions of "logical theory" and "AI" that might 
> clarify the apparently diverse origins of the use of the term "ontology".
> 
> Thus the main definitions my exploration invited were those to help 
> relevantly distinguish the "logical theory" domain from that of AI.
> 
> I must confess I haven't studied Pat's 35-year-old papers on liquids 
> and naïve physics (though I do recall having ignorantly read about them at
the time).
> Nor would I characterize  the present central themes of my own current 
> project as either 'logical theory' or AI.  Ontology in that project is 
> both most strongly philosophical and of great technical value in 
> IS/IT, and the word is most useful in capturing that synergetic link.  
> But I recognize that I do need to gear up towards the relevant experts'
> exploitation of the ever more emergent opportunities for both advanced 
> 'logical theory' and AI in that project.
> 
> Ha-ha, yes, everyone can surely agree with your observation that "For 
> anybody who doesn't [know what they're talking about], all bets are off."
> 
> And one reason why I am still using the word "ontology", more or less 
> in the sense that ontologgers use it, despite my own inclinations 
> (that I had expressed in the sequel to the 2008 thread I had 
> resurrected in my previous email), is that whenever I can have a good 
> conversation with sceptics, from whatever field, within or outside IT, 
> they usually end up understanding surprisingly well at their respective levels
> and asking to be kept up-to-date.
> 
> But that experience doesn't resolve the marketing or propagandistic 
> need that lies behind my query, in the final paragraph of my previous 
> post, as to how fellow-ontologgers experience the problem of the ontology-sceptic.
> Such a sceptic is usually ignorant, as you point out, and is adopting 
> a very superficial and usually self-serving position.  So the obstacle 
> to pre-empt somehow is the first-glance though ignorant dismissal 
> without the opportunity for that "good conversation".  For example, 
> which of the one- liners or elevator pitches of recent Summits seem to 
> work best or most often?
> 
> Any takers on such queries?
> 
> In the meanwhile I am busy taking my own approach to the whole issue, 
> and will in due course draw the Forum's attention to my project's 
> present deliverable.  I am sure there are ontologgers who will be able 
> to improve it (even if by total replacement...).  But I would much 
> appreciate any observations or suggestions in the very short term.
> 
> Christopher
> 
> -----Original Message-----
> From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx
> [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of John F 
> Sowa
> Sent: 05 November 2013 14:57
> To: ontolog-forum@xxxxxxxxxxxxxxxx
> Subject: Re: [ontolog-forum] Early use of the word 'ontology' in AI
> 
> Christopher,
> 
> In that note, Pat and Bill were discussing some issues about proper 
> usage, but not the question of first occurrence of the word 'ontology' 
> in the AI literature.  Note Pat's brief summary:
> 
> PH
> > But the above concerns the term's use as a synonym for "logical 
> > theory", whereas the present thread is addressing its use in AI.
> 
> In any case, there is as much confusion about the word 'theory' as 
> there is about 'ontology'.  And the term 'logical theory' is less 
> common (199,000 hits on Google) than 'formal theory' (300,000 hits).
> 
> I'd say that every ontology is a theory.  But to be an ontology, a 
> theory must have an additional claim about the existence of the 
> entities that its variables refer to.  I'd also add the distinction of 
> formal vs informal theories and
> ontologies:
> 
> Theory:  A systematic set of assumptions and their implications.
> 
> Ontology:  A theory about what exists in some domain.
> 
> Comment:  Note that a theory or an ontology can be stated in ordinary 
> language.  But it must be systematic.  An observation that there is a 
> robin in your backyard is not an ontology.  But a detailed theory 
> about birds and their characteristic properties and relations could be
called an ontology.
> 
> Formal theory:  The deductive closure of a set of axioms stated in 
> some version of logic.
> 
> Formal ontology:  A formal theory about what exists in some domain.
> 
> Comment:  The adjective 'formal' as used in both philosophy and 
> computer science implies the use of a precisely defined notation for 
> making statements and drawing inferences -- i.e., a logic.
> 
> I believe that these four definitions correspond quite closely to the 
> usage in both philosophy and computer science -- at least by people 
> who know what they're talking about.  For anybody who doesn't, all bets are off.
> 
> John
> 

