
Re: [ontolog-forum] Foundation Ontology Primitives

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>, "John F. Sowa" <sowa@xxxxxxxxxxx>
Cc: "ian@xxxxxxxxxxxxxxxx" <ian@xxxxxxxxxxxxxxxx>
From: Duane Nickull <dnickull@xxxxxxxxx>
Date: Tue, 2 Feb 2010 20:30:19 -0800
Message-id: <C78E3BDB.AD9E%dnickull@xxxxxxxxx>
Interestingly enough, the UN/CEFACT Core Components working group tried to create a similar crosswalk between various business languages. That work focused only on business information objects and did not include theory, mereotopology, or axioms, and it still proved very difficult. They initially ended up with about 17 primitives (date, address, business entity, etc.). I cannot remember the exact names, but they used a context matrix to map the primitives to more specific domains. For example, an “address” primitive would become ProcurementProcess.Invoice.BuyingParty.Address and take on additional properties based on the context in which the process was used. This included modifying many aspects, such as the syntax, which was kept cleanly separated from the concept.
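
As a rough sketch of the idea (the names below are invented for illustration, not the actual UN/CEFACT vocabulary), a context matrix is essentially a lookup from a generic primitive plus a business context to a qualified component with extra properties:

    # Hypothetical sketch of a UN/CEFACT-style context matrix.
    # All primitive, context, and property names are invented.

    CONTEXT_MATRIX = {
        # (primitive, business context) -> qualified component
        ("Address", "Procurement.Invoice.BuyingParty"): {
            "qualified_name": "ProcurementProcess.Invoice.BuyingParty.Address",
            "extra_properties": ["taxJurisdiction", "deliveryInstructions"],
        },
        ("Date", "Procurement.Invoice"): {
            "qualified_name": "ProcurementProcess.Invoice.IssueDate",
            "extra_properties": ["fiscalPeriod"],
        },
    }

    def specialize(primitive, context):
        """Map a generic core component to its context-specific form."""
        try:
            return CONTEXT_MATRIX[(primitive, context)]
        except KeyError:
            # No mapping yet: a candidate for defining a new core component.
            return {"qualified_name": context + "." + primitive,
                    "extra_properties": []}

    print(specialize("Address", "Procurement.Invoice.BuyingParty"))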

At the time we did this, I was too naïve to appreciate the pure genius of the work. What proved most relevant was the methodology for testing mappings and building new core components, along with the context matrix, which was essential for mapping generic primitive concepts to nodes in existing taxonomies. Companies like Procter & Gamble, SAP, and Sun worked on applying these concepts in real-world enterprises. As time goes on, I find myself appreciating the work more and more.

I would be happy to find the original work and bring it to this forum if anyone thinks the methodology may be relevant.

Duane


On 2/2/10 7:57 PM, "Rob Freeman" <lists@xxxxxxxxxxxxxxxxxxx> wrote:

John,

I had more general comments on your earlier post in "Foundation
ontology, CYC, and Mapping", but I think this is more important.

On Wed, Feb 3, 2010 at 6:20 AM, John F. Sowa <sowa@xxxxxxxxxxx> wrote:
>
> I realize that Rob has serious doubts about the value of formal
> theories for language understanding, and I share those doubts.
>
> But for implementing computer applications, precise definitions
> that can be stated in logic are essential.  I would use informal
> methods (in some ways similar to what Rob has been proposing) to
> analyze NLs and map them to and from the more formal notations.

I agree precise definitions which can be stated in logic are essential
for computer applications, but I don't think that logic can go much
above the level we discussed in the "new logic" thread some weeks
back:

On Wed, Jan 20, 2010 at 4:57 AM, John F. Sowa <sowa@xxxxxxxxxxx> wrote:

"For some pairs of (M,T), the predicate Will_Halt can be
determined by a proof in FOL.  But for others, the theorem
prover will loop forever.  But any (M,T) that is undecidable
in FOL will be just as undecidable in English or any other
language, formal or informal."
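
To make that concrete: the textbook diagonal argument shows why no total
Will_Halt predicate can exist. A minimal Python-flavoured sketch, with
will_halt as the assumed (and unimplementable) oracle:

    # Sketch of the classic diagonal argument. will_halt is assumed,
    # not implemented; the point is that no total version can exist.

    def will_halt(machine, tape):
        """Hypothetical oracle: True iff machine(tape) halts."""
        raise NotImplementedError("no such total predicate exists")

    def diagonal(machine):
        # Do the opposite of whatever the oracle predicts for a
        # machine run on its own description.
        if will_halt(machine, machine):
            while True:      # predicted to halt, so loop forever
                pass
        return               # predicted to loop, so halt at once

    # Feeding diagonal to itself is contradictory either way,
    # so will_halt cannot decide every (M, T) pair.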

You can have logic above that level, but it must be "logic" derived
from that process, and it will in general be "computationally
irreducible", "undecidable", or "uncertain".

Now, the thing is, that such processes exhibit "computationally
irreducible", "undecidable", or "uncertain" behaviour is actually
good: the processes explain that behaviour.

I'm glad you share my doubts about the value of formal theories for
language understanding, but I think language is just the most obvious
example of this. Our noses are rubbed in this with language. We see it
first with language. But I believe that is only because language is
the most obvious window into what is going on in our heads.

This fundamental indeterminacy was eventually shown to apply to mathematics too.

Can we doubt it applies generally to all knowledge representations?

Well, we can't. We're discussing what to do about it now.

The good thing is that, since we already have computational processes
for "learning" meaning, theories, etc., initially from text, we have an
obvious means of modelling all the indeterminacy in our theories: a way
to generate it, to select one theory over another, and to move between
theories. In general we expect "learning" processes to exhibit this
behaviour; they are a model for it.
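
Purely as an illustration of "selecting one theory over another" (Rob
does not specify a mechanism; this is an invented toy), one can treat
competing theories of a text as competing statistical models and prefer
whichever predicts new text better:

    # Toy illustration: two "theories" of language are two models
    # learned from different corpora; selection is by predictive fit.
    import math
    from collections import Counter

    def unigram_logprob(train, test):
        # Log-probability of the test words under a Laplace-smoothed
        # unigram model estimated from the training corpus.
        counts = Counter(train)
        vocab = len(set(train) | set(test))
        return sum(math.log((counts[w] + 1) / (len(train) + vocab))
                   for w in test)

    theory_a = "the cat sat on the mat".split()
    theory_b = "colourless green ideas sleep furiously".split()
    test = "the cat sat".split()

    # Prefer the theory that better explains the new evidence.
    best = max([theory_a, theory_b],
               key=lambda t: unigram_logprob(t, test))
    print(" ".join(best))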

-Rob




---
Adobe LiveCycle Enterprise Architecture - http://www.adobe.com/products/livecycle/
My TV Show - http://tv.adobe.com/show/duanes-world/
My Blog – http://technoracle.blogspot.com/
My Band – http://22ndcenturyofficial.com/
Twitter – http://twitter.com/duanechaos

_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx
