
Re: [ontolog-forum] What is "understanding" - was: Building on common ground

To: "'[ontolog-forum] '" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "Patrick Cassidy" <pat@xxxxxxxxx>
Date: Mon, 31 Mar 2008 19:59:05 -0400
Message-id: <05db01c8938b$33d4a410$9b7dec30$@com>
Just a clarification to answer some of John Sowa's concerns, with which I
mostly agree:
>  A static combination in "word
> experts" would imply that domain knowledge encoded in an English
> word expert would have to be rewritten for Russian or French.
   That is not the way I would view the structure or purpose of a Word
Expert.  All language-independent knowledge should be in the ontology.  A
Word Expert would be able to use the knowledge in the ontology and write
output meanings in the ontology language, but its primary function would be
to interpret the meaning of the word, using inputs from any syntactic or
semantic parser available, as well as word-specific information such as
usage patterns.  It would disambiguate among predefined senses where one
fits, and interpret nuances of meaning by analogical reasoning where a
variant or metaphoric meaning appears to be intended.
   So the Word Expert would encode the language-dependent interpreter, and,
yes, it would vary a lot across languages, but only to the extent necessary
- i.e., to the extent that the languages themselves vary in the way that
words are used and combined.    (01)
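   To make the division of labor concrete, here is a minimal sketch in
Python.  The class names, the cue-counting score, and the fallback string
are my own illustration, not a specification: the point is only that the
Sense objects hold ontology concepts (language-independent), while the
WordExpert holds the language-dependent mapping from a surface word, in
context, onto those concepts.

```python
# Hypothetical sketch: language-independent knowledge lives in the
# ontology (the concept names); the Word Expert only maps a surface
# word, given parser context, onto predefined ontology senses.

from dataclasses import dataclass, field


@dataclass
class Sense:
    """A predefined word sense, expressed in the ontology language."""
    concept: str                 # ontology concept this sense denotes
    usage_patterns: list         # word-specific cues, e.g. collocations


@dataclass
class WordExpert:
    """Language-dependent interpreter for one word."""
    word: str
    senses: list = field(default_factory=list)

    def interpret(self, context_words):
        """Pick the predefined sense whose usage patterns best match
        the context supplied by the parser; flag novel uses."""
        best, best_score = None, 0
        for sense in self.senses:
            score = sum(1 for cue in sense.usage_patterns
                        if cue in context_words)
            if score > best_score:
                best, best_score = sense, score
        if best is not None:
            return best.concept
        # No predefined sense fits: a real system would reason by
        # analogy over the ontology; here we just mark a novel use.
        return "UnresolvedSense:" + self.word


bridge = WordExpert("bridge", [
    Sense("CardGame", ["play", "trick", "trump"]),
    Sense("RiverCrossing", ["river", "cross", "span"]),
])
print(bridge.interpret(["we", "play", "bridge", "after", "dinner"]))
# prints "CardGame"
```

   A real Word Expert would, of course, replace the toy cue counting with
the analogical reasoning mentioned above; the sketch only shows where the
language-dependent and language-independent pieces sit.    (01a)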

Pat    (02)

Patrick Cassidy
cell: 908-565-4053
cassidy@xxxxxxxxx    (03)

> -----Original Message-----
> From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-
> bounces@xxxxxxxxxxxxxxxx] On Behalf Of John F. Sowa
> Sent: Monday, March 31, 2008 5:49 PM
> To: [ontolog-forum]
> Subject: Re: [ontolog-forum] What is "understanding" - was: Building on
> common ground
> Sean, Sergei, and Pat C.,
> SB> For me a more interesting question is what can ontologies
>  > do reliably? and how reliably?
> I agree that is a more interesting question.  I was trying to
> discourage Pat C. from using the word 'understanding' and to focus
> on the issues of how we should design the computer system.
> SN> First, our main problem is well beyond choosing the "right"
>  > knowledge representation schema: it is all about the content
>  > of knowledge, not the format.
> I agree that the content is more important than the format,
> but the kind of processing is also extremely important.
> Although I think logic is important, I also believe that
> induction, abduction, and analogy are at least as important
> and probably more important than deduction.
> SN> Second, the scope of work is so much broader than even
>  > the most sober among us had expected.
> I agree.
> SN> But the bad news is that I think our problem is more
>  > complex than either of those problems [the Manhattan
>  > Project or the Human Genome Project].
> Absolutely!  Those are almost trivial by comparison.
> In the 1960s, engineers thought that hardware design was
> more difficult/important than software design.  I would
> compare the genome to the hardware.  Although the amount
> of information in the genome is large, it's tiny compared
> to the amount of information in the brain.  Furthermore,
> the hardware/genome changes much more slowly than the
> information in the software.
> SN> This is one of the reasons why I think that no standards
>  > could really be enforced in this area and that it may be
>  > a noble but doomed task to try to come up with a single
>  > common syntax and semantics for the metalanguage for
>  > specifying knowledge about language and the world (whether
>  > these are different metalanguages or a single one).
> That depends on how you define "this area".  First-order
> logic has been around for over 125 years, and Common Logic
> is a minor extension of what Frege and Peirce independently
> discovered.  I don't think that's the solution to everything,
> but my major complaint about the Semantic Web is that they
> didn't build on the most successful application of FOL --
> namely, relational databases with SQL (or better, Datalog).
> As far as ontology goes, I have been arguing *against* any
> fixed upper level and in favor of standards that support ways
> of accommodating multiplicities of ontologies.  Alan Bundy,
> who has been working with logic for years, strongly agrees.
> JS>> Or would it be better to put all the knowledge about bridge
>  >> in a module that deals with bridge and all the knowledge about
>  >> tomatoes in a module that deals with tomatoes?
> SN> Any which way. Let it even be inefficient. But we need multiply
>  > cross-indexed descriptions of complex events with their subevents
>  > and participants, pre- and post-conditions and other properties.
> I agree.  But my main point was to avoid packaging the world
> knowledge too tightly with the lexicon -- partly because it
> would make it more difficult to reuse the knowledge with different
> languages (both natural and artificial).
> SN> BTW, there are many more kinds of ambiguity to deal with in
>  > addition to word sense or PP attachment...
> Certainly.  I just dragged out that example because it illustrated
> a few points related to the word-expert issues.  Word experts are
> irrelevant to resolving those other ambiguities.
> SN> The organization in our approach is by (ontological) elements
>  > of world knowledge but our lexicon expresses lexical meaning
>  > in terms of the ontological metalanguage...  So, in the example
>  > above, there will be in the ontology the event describing what
>  > happens when people play bridge, and there will be indications
>  > in the lexicon of any idiosyncratic word and phrase senses
>  > relating to bridge playing. Many meanings will still be derived
>  > in a compositional way, with the knowledge of the complex event
>  > of playing bridge serving as a (core) heuristic for making
>  > preferences during ambiguity resolution.
> As I interpret that paragraph, your ontological approach would
> put world knowledge into the language-independent representation
> and put language-dependent information in the lexicon.  I would
> endorse that approach.  And, I believe, it is very different from
> a word-expert parser that binds world knowledge with the words.
> It's important to use world knowledge during the parsing stage,
> but I would recommend a dynamic method of combining the three
> kinds of knowledge -- syntactic, lexical, and world knowledge
> -- during the analysis stage.  A static combination in "word
> experts" would imply that domain knowledge encoded in an English
> word expert would have to be rewritten for Russian or French.
> SN> However, the main issue is that we need to be able to make
>  > successful inferences against a knowledge base that is not
>  > sound and complete. That's reality. So, if logic can come up
>  > with methods that support such a task, great.  Otherwise, we
>  > scruffies will have to make do with whatever we can muster.
> I strongly agree.  But I view analogy as a "scruffy" way
> of using logic.  I consider formal deduction to be a special
> case of analogy, as in the article by Arun and me:
>     http://www.jfsowa.com/pubs/analog.htm
>     Analogical Reasoning
> PC> Those kinds of pragmatics are what would be included in the
>  > Word Expert, or in a 'topic expert' that would have a broader
>  > understanding of plants and gardening generally, not only tomatoes.
> I would be much happier with talk about topic experts than word
> experts.  A topic expert can be language independent and reusable
> in applications that use different natural *or* artificial languages.
> PC> As best I can tell, Cyc's language interpreter does not use
>  > Word Experts.  Big mistake.
> Topic experts would be great -- they could support Wittgenstein's
> notion of language games, which I like very much.  But word experts
> would be a disaster, because they're the opposite of modularity.
> There are three kinds of information to be represented:
>   1. Language-dependent but domain-independent information
>      about grammar.
>   2. Domain-dependent information about an open-ended
>      variety of subjects.
>   3. Lexical information about words and their connections
>      to the grammar (point #1) and the domains (point #2).
> If you combine #2 and #3 in word experts, you get a combinatorial
> explosion in the amount of *human* effort required.  And you make
> it impossible to reuse the knowledge in different languages.
> Modularity is extremely important, and that was the main point
> of my talk at FOIS'06:
>     http://www.jfsowa.com/pubs/dynonto.htm
>     A Dynamic Theory of Ontology
> John

Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx    (05)
