
Re: [ontolog-forum] Foundations for Ontology

To: Ali SH <asaegyn+out@xxxxxxxxx>, "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "John F. Sowa" <sowa@xxxxxxxxxxx>
Date: Wed, 28 Sep 2011 14:22:41 -0400
Message-id: <4E8365F1.9000600@xxxxxxxxxxx>
Ali,    (01)

ASH
> [Daoud Clarke] claims to provide a novel foundation for (computational)
> meaning, specifically in what he calls a "context-theoretic" framework
> (explicitly making the analogy to a model-theoretic framework):    (02)

I wouldn't consider his method an alternative to a model-theoretic
framework.  In my recent slides, I criticized the methods of formal
semantics as inadequate for dealing with the full complexity of NLs.
But I consider human mental models to be "model theoretic" -- just
more flexible than Tarski's or Montague's.    (03)

I also believe that statistical methods are important for many
purposes in AI and computational linguistics.  LSA, for example,
is useful for information retrieval and related applications.    (04)
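
To make that point concrete, here is a minimal sketch of the usual
LSA pipeline for retrieval:  build a term-document matrix, truncate
it with an SVD, and rank documents against a query by cosine
similarity in the latent space.  The corpus, the query, and the
choice of k = 2 dimensions are toy examples of my own, assuming
Python with scikit-learn:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "the cat sat on the mat",
        "dogs and cats are common pets",
        "stock prices fell sharply today",
    ]

    # Term-document matrix, truncated by SVD -- the core of LSA.
    tfidf = TfidfVectorizer()
    X = tfidf.fit_transform(docs)
    svd = TruncatedSVD(n_components=2)
    doc_vecs = svd.fit_transform(X)

    # Rank documents against a query by cosine similarity.
    query_vec = svd.transform(tfidf.transform(["house pets"]))
    print(cosine_similarity(query_vec, doc_vecs))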

But I strongly disagree with the following point by Clarke (from p. 2
of http://arxiv.org/PS_cache/arxiv/pdf/1101/1101.4479v1.pdf):    (05)

DC
> Current techniques such as latent semantic analysis work well at the
> word level...    (06)

Whether LSA "works well at the word level" depends on the kind of
work.  LSA vectors do *not* represent the meaning of a word in a way
that you can use to (a) explain the meaning to another human being or
to a computer system, (b) reason by induction, deduction, abduction,
or analogy, or (c) answer detailed questions about the content of
a document.    (07)

Whatever people have in their heads enables them to do (a), (b),
and (c).  But no program could use LSA vectors to do any of those
things.  Ergo, LSA is *not* a semantic representation that captures
what people mean by a word.    (08)
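
To see the point, consider what a program actually has when it has
an LSA vector:  an opaque array of numbers.  Similarity is the only
operation those numbers support.  The vectors below are made up for
illustration:

    import numpy as np

    # A stand-in for an LSA word vector:  no definition, no axioms,
    # no structure that a reasoner could operate on.
    word_vec = np.array([0.42, -1.13, 0.07, 0.88])
    other    = np.array([0.40, -1.00, 0.10, 0.90])

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Ranking by similarity is possible; (a), (b), and (c) are not.
    print(cosine(word_vec, other))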

DC
> If we knew how such vectors should compose then we would be able
> to extend the benefits of the vector based techniques to the many
> applications that require reasoning about the meaning of phrases
> and sentences.    (09)

No.  Those vectors do *not* represent the meaning of a word.  They
are rough approximations that lose essential aspects of meaning.
Composing them can't recapture the lost meaning.    (010)
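
A made-up illustration of why:  if composition is vector addition,
which is the most common proposal, then "dog bites man" and
"man bites dog" get identical representations.  The structure that
distinguishes them was never in the vectors to begin with:

    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in word vectors; in practice they would come from LSA.
    vecs = {w: rng.normal(size=4) for w in ["dog", "bites", "man"]}

    def compose(sentence):
        # Additive composition:  order-invariant by construction.
        return sum(vecs[w] for w in sentence.split())

    s1 = compose("dog bites man")
    s2 = compose("man bites dog")
    print(np.allclose(s1, s2))   # True -- indistinguishable sentences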

The greatest advantage of formal semantics is that it allows the
meanings of words to be combined in meaningful ways.  You can
translate the logical form to representations that can do things
along the lines of (a), (b), and (c) above.  My criticism was only
that current formal methods are inefficient and inflexible.    (011)

I want to contrast LSA with the methods used to encode chemical
graphs as numeric codes and with the VivoMind methods that translate
conceptual graphs to numeric codes.  Those methods encode *both* the
structure of a graph and the labels on its nodes.  The LSA methods
use only the labels and throw away the structure.    (012)
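
The following toy sketch -- emphatically *not* the VivoMind
algorithm, just an illustration of the idea -- encodes both node
labels and structure by iterated relabeling, in the spirit of the
Weisfeiler-Lehman canonical codes used for chemical graphs:

    import hashlib

    def graph_code(labels, edges, rounds=3):
        """labels: {node: label}; edges: {node: [neighbors]}."""
        cur = {n: str(lab) for n, lab in labels.items()}
        for _ in range(rounds):
            # Fold each node's neighbors' labels into its own label,
            # so the code reflects structure as well as labels.
            cur = {
                n: hashlib.sha1(
                    (cur[n] + "|" +
                     ",".join(sorted(cur[m] for m in edges.get(n, [])))
                     ).encode()
                ).hexdigest()[:8]
                for n in cur
            }
        # The sorted multiset of final labels is the graph's code.
        return hashlib.sha1(
            "".join(sorted(cur.values())).encode()).hexdigest()

    labels = {1: "C", 2: "O", 3: "H"}
    print(graph_code(labels, {1: [2], 2: [1, 3], 3: [2]}))  # C-O-H
    print(graph_code(labels, {1: [3], 3: [1, 2], 2: [3]}))  # C-H-O
    # Same labels, different structure, different codes -- unlike LSA.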

Furthermore, the VivoMind methods have both reversible encodings
and lossy encodings.  Reversible encodings can be translated back
to a graph that is logically equivalent to the original.  Lossy
encodings use the same algorithms but compress the information to
reduce the size of the indexes.  Both kinds of encoding incorporate
*structure* and *ontology*.    (013)
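
And a toy version of the reversible/lossy distinction (again, not
the actual VivoMind encoding):  a reversible code serializes a
labeled graph in a canonical form that decodes back to an equivalent
graph; a lossy code hashes the same canonical form down to a short
fixed-size digest for the index:

    import hashlib, json

    def reversible_encode(labels, edges):
        # Canonical, invertible serialization of a labeled graph.
        canon = {"labels": dict(sorted(labels.items())),
                 "edges": sorted(tuple(sorted(e)) for e in edges)}
        return json.dumps(canon, sort_keys=True)

    def reversible_decode(code):
        canon = json.loads(code)
        return canon["labels"], [tuple(e) for e in canon["edges"]]

    def lossy_encode(labels, edges, nbytes=8):
        # Same canonical form, compressed to a fixed-size digest.
        digest = hashlib.sha1(reversible_encode(labels, edges).encode())
        return digest.hexdigest()[:2 * nbytes]

    labels = {"1": "Cat", "2": "Sit", "3": "Mat"}
    edges = [("1", "2"), ("2", "3")]
    code = reversible_encode(labels, edges)
    assert reversible_decode(code) == (labels, edges)  # round-trips
    print(lossy_encode(labels, edges))  # short, but not invertible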

John    (014)

PS:  I revised http://www.jfsowa.com/talks/ontofound.pdf to clarify
the issues about formal semantics.  See the revised Slide 42 and
the new Slide 43.  Those slides contrast human mental models with
formal models.  But models of any kind -- human-like or formal --
are much closer to each other than either is to LSA.    (015)

