
Re: [ontolog-forum] English number of words/concepts that cannot be composed of others

To: "[ontolog-forum] " <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "Barkmeyer, Edward J" <edward.barkmeyer@xxxxxxxx>
Date: Tue, 6 May 2014 15:29:38 +0000
Message-id: <f7ba46503fff489391685356e9e84044@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
John,    (01)

This is getting out of hand, as usual.    (02)

Two points.  You wrote:
> Every logical restriction, constraint, or rule can be represented in 
>controlled NLs, such as STE or many others that can be freely downloaded.      (03)

The problem with controlled NLs is that they define the meanings of words, but 
they aren't very good at defining the meanings of constructs and sentences, 
unless they are extraordinarily restrictive.  That is where ontologies shine: 
ever since Tarski we have understood how to assign an interpretation to a 
formal sentence.  Formal logic has the converse weakness: we don't really 
assign meaning to the "words" -- the constants and predicates -- at all.  So, I 
would have said you were right the first time.  You need both:  human 
understanding of the primitive terms and formal representation of the 
utterances.    (04)

> UML tools are far more widely used than *any* tools that have come out of the 
>AI community -- *and* FUML specifies UML with the same degree of precision as 
>any linear logic.    (05)

Weeelll...  UML tools are used for many purposes, by an assortment of people 
with a wide range of modeling skills.  Most of those people, if the behavior of 
the UML tool vendors is any indication, use UML to draw parts of their 
implementations, and I would bet that more than half of those are actually 
inconsistent with fUML somewhere along the way.  A UML model by itself does not 
solve any application problem, and most users do not model their implementation 
in sufficient detail for the tool to generate a complete "solution".      (06)

By comparison, an OWL model can be fed directly to an OWL reasoning engine that 
can then solve some set of application problems.  OWL, like Java, is an 
implementation language.  But DL reasoners, like other AI tools and unlike the 
JVM, are not Turing-complete; they cannot implement arbitrary algorithms.  
So, their usage is naturally much narrower.  And, with
the possible exception of Prolog, AI tools typically exhibit this 
characteristic of a well-defined algorithm for solving only a certain class of 
problems.  So, UML will naturally be more widely used.    (07)
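
To illustrate what "a well-defined algorithm for solving only a certain class 
of problems" looks like, here is a toy Python sketch (not a real DL reasoner; 
the classes are made up): in a description logic restricted to conjunctions of 
atomic names, subsumption reduces to a simple, always-terminating superset 
test -- nothing like the arbitrary computation the JVM permits.

```python
# Toy structural subsumption for a description logic in which a class
# is just a conjunction (set) of atomic names.  Subsumption C <= D then
# reduces to a superset test: a complete, terminating decision procedure
# for this restricted problem class.  Names are hypothetical.

def subsumes(d: frozenset, c: frozenset) -> bool:
    """C is subsumed by D iff C's conjuncts include all of D's."""
    return d <= c

# Toy 'ontology': classes as conjunctions of atomic names.
Person = frozenset({"Person"})
Parent = frozenset({"Person", "HasChild"})
Mother = frozenset({"Person", "HasChild", "Female"})

print(subsumes(Person, Parent))  # True  -- every Parent is a Person
print(subsumes(Parent, Mother))  # True  -- every Mother is a Parent
print(subsumes(Mother, Parent))  # False -- not every Parent is a Mother
```

Real DL reasoners are far more sophisticated, but the shape is the same: a 
decision procedure over a restricted language, not a general-purpose machine.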

In addition, the skills required to produce an OWL model that solves a problem 
are different from the skills required to use Java to solve the problem.  Those 
Java guys are heirs to an approach to software development that has been able 
to solve real problems for 50 years, and those skills are therefore a part of 
the mainstream education in software engineering.  We have only recently 
acquired the computational power to make the AI algorithms practical for 
solutions to real problems, and we are only just learning how to use the 
knowledge base models that we develop for them.  In time, the AI technologies 
will eliminate a lot of applications of Java and its relatives, and ontology 
development skills will become part of the mainstream.     (08)

(When I was in school, electrical engineers rarely acquired any digital logic 
skills, because the transistor had only just been invented.  Now, digital logic 
is a prerequisite for many electrical engineering applications.  The lesson of
history is that a technology in its time changes the practice.)    (09)

There is already a community that models OWL ontologies in UML and turns a 
crank to spit out the OWL ontologies directly, in much the same way that a 
different UML tool crank turns out Java.  But at this time, it is still a very 
small community.  It will grow.    (010)
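
As a rough sketch of that crank (purely illustrative -- the class model and 
all names here are invented, and real tools work from actual UML metamodels 
rather than tuples), turning a class hierarchy into OWL can be as mechanical 
as walking the model and emitting Turtle:

```python
# Toy 'UML-ish' model: (class name, parent name or None).
# Stands in for a real UML metamodel; names are hypothetical.
model = [
    ("Vehicle", None),
    ("Car", "Vehicle"),
    ("Truck", "Vehicle"),
]

def to_turtle(classes):
    """Emit each class as an owl:Class, with rdfs:subClassOf for parents."""
    lines = [
        "@prefix ex: <http://example.org/> .",
        "@prefix owl: <http://www.w3.org/2002/07/owl#> .",
        "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .",
    ]
    for name, parent in classes:
        lines.append(f"ex:{name} a owl:Class .")
        if parent:
            lines.append(f"ex:{name} rdfs:subClassOf ex:{parent} .")
    return "\n".join(lines)

print(to_turtle(model))
```

The generation itself is trivial once the model is machine-readable; the hard 
part, as with UML-to-Java, is modeling in enough detail for the output to be 
worth anything.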

-Ed    (011)

> -----Original Message-----
> From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-
> bounces@xxxxxxxxxxxxxxxx] On Behalf Of John F Sowa
> Sent: Tuesday, May 06, 2014 2:10 AM
> To: ontolog-forum@xxxxxxxxxxxxxxxx
> Subject: Re: [ontolog-forum] English number of words/concepts that cannot
> be composed of others
> 
> Ed and Pat C,
> 
> We agree on many points.  I believe that there is a way to combine tools such
> as STE (Simplified Technical English) and COSMO with taxonomies such as
> Schema.org.  Diagrams such as UML and others have proved to be very
> helpful, and they can be specified with the same level of precision as any
> linear notation for logic.
> 
> The issues that keep us from reaching a consensus are theoretical points
> about the nature of language and the boundaries between a controlled
> vocabulary, a taxonomy, and a formal ontology.  We could continue arguing
> about them forever.  Or we could design something useful.
> 
> EJB
> > [Pat's] goal of "accurate semantic interoperability among databases
> > and applications" is in fact to be attained by having effective
> > communication among the human authors of the databases and
> applications.
> 
> That goal is a prerequisite for everything else.  Specifications at that level
> have been used for IT applications with punched card machines and
> computer systems for over a century.  Formal ontologies may help.
> But until we replace humans, we need humanly readable spec's.
> 
> PC
> > The understanding of the meanings of the ontology elements will be
> > better than that typically obtained from people sharing definitions in
> > a controlled vocabulary because the ontology also has many logical
> > restrictions, evident in the text or in a viewer such as Protégé
> 
> Every logical restriction, constraint, or rule can be represented in 
>controlled
> NLs, such as STE or many others that can be freely downloaded.  UML tools
> are far more widely used than *any* tools that have come out of the AI
> community -- *and* FUML specifies UML with the same degree of precision
> as any linear logic.
> 
> PC
> > because the ontology is in a logical form suitable for reasoning...
> 
> That's important.  But the humanly readable form is *more* important.
> People have been using NL spec's for over a century, and they require such
> versions.  The logic-based notations are useful, but the NL forms are
> *essential* -- diagrams are also important.
> 
> PC
> > by trying to focus on the necessary semantic primitives, one keeps the
> > ontology to the minimum size that will accomplish the task. This makes
> > it easier to learn and easier to use.
> 
> I have *never* objected to having a methodology based on a limited set of
> defining elements.  I have *never* objected to research such as Anna W's
> for finding common semantics among multiple languages.
> 
> What I do object to are claims that any set derived from NL research can be
> sufficiently precise -- without further refinement -- for a formal ontology.  
>I'll
> accept it as a starting point for a vague, underspecified, upper level
> *taxonomy*.  But the precise, detailed reasoning is *always* at the lower
> levels.
> 
> Doug Lenat has said that for years:  the upper level has very few axioms, and
> all the significant reasoning is done with the middle and lower levels (the
> microtheories).  I agree with him.
> 
> PC
> > thus far I haven't seen any suggestions for alternative means to
> > general semantic interoperability that appear any more likely to
> > achieve the goal of accuracy.
> 
> I have no objections to that methodology.  I believe that the COSMO terms
> are as good as any and better than most.  But I don't believe that *any* version
> is or can be the ultimate ideal.
> 
> PC
> > Even though learning a common foundation ontology takes some effort
> 
> I would *never* start teaching people the foundation.  It's much better to
> adopt the Schema.org strategy of starting with the terms that people can
> start using on day 1.  I have no objection to telling them that there are some
> useful defining terms -- if and when they are ready to sit down and study
> them -- but not in lesson #1, #2, or even #5.
> 
> In fact, Schema.org already has a large number of users.  So the *best* way
> to get them to use COSMO (or any other set of defining terms) is to show
> that the definitions of Schema.org stated in COSMO terms are
> (a) more readable, (b) more flexible, and (c) more precise, and *most* of all
> (d) easier to use than the current English paragraphs.
> 
> John
> 

_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J    (013)
