
[ontolog-forum] Issues about logic, reasoning, and knowledge representation

To: "'[ontolog-forum] '" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: John F Sowa <sowa@xxxxxxxxxxx>
Date: Mon, 26 Aug 2013 09:36:46 -0400
Message-id: <521B59EE.7050509@xxxxxxxxxxx>
I'd like to make some further comments about Leo's post about
Hector Levesque's recent paper at IJCAI:    (01)

    http://www.cs.toronto.edu/%7Ehector/Papers/ijcai-13-paper.pdf    (02)

Gary Marcus wrote an enthusiastic review of it:    (03)

http://www.newyorker.com/online/blogs/elements/2013/08/why-cant-my-computer-understand-me.html    (04)

After reading Hector's paper and Gary's review, I wrote some critical
comments in my previous post.  But then I followed a pointer in the
_New Yorker_ to another article by Gary Marcus:    (05)

http://www.newyorker.com/online/blogs/elements/2013/07/happy-birthday-morris-halle.html    (06)

In this one, Marcus wrote a tribute to the linguist Morris Halle
on his 90th birthday.  In the 1950s, Halle hired Chomsky at MIT, and
he later coauthored a book with Chomsky.  But the most intriguing
observation in the article was a remark by Chomsky:    (07)

GM
> “When I met Morris,” Chomsky wrote to me in an e-mail, “what struck
> me at once was his uncanny ability to see the right answer even if
> he didn’t have the arguments — and I often have found myself scurrying
> to try to discover the arguments.”    (08)

In short, the discovery comes before the proof.  That is the fundamental
principle that mathematicians and chess players emphasize.  They *see*
the solution to a problem *before* they work out the details of the
proof.  See the quotations by Einstein and Halmos in slide 9 of    (09)

    http://www.jfsowa.com/talks/goal.pdf    (010)

In the terminology of C. S. Peirce, the discovery is abduction.
The process of verifying its correctness is deduction.  Computers
today are far faster and more accurate than humans in deduction,
but they are extremely weak in abduction.    (011)

In Levesque's article, I agree with the following point:    (012)

HL
> There is a lot to be gained by recognizing more fully what our own
> research does not address, and being willing to admit that other
> ... approaches may be needed.    (013)

But the following suggestions are what Cyc has been doing for the
past 29 years -- and Levesque never mentioned Cyc:    (014)

HL
> What about those hurdles? Obviously, I have no solutions. However, I do
> have some suggestions for my colleagues in the Knowledge Representation area:
>
> 1. We need to return to our roots in Knowledge Representation and Reasoning
> for language and from language. We should not treat English text as a
> monolithic source of information. Instead, we should carefully study how
> simple knowledge bases might be used to make sense of the simple language
> needed to build slightly more complex knowledge bases, and so on.
>
> 2. It is not enough to build knowledge bases without paying closer attention
> to the demands arising from their use.  We should explore more thoroughly
> the space of computations between fact retrieval and full automated logical
> reasoning.  We should study in detail the effectiveness of linear modes of
> reasoning (like unit propagation, say) over constructs that logically seem
> to demand more.    (015)
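As an aside, the "unit propagation" that Levesque cites as a linear mode
of reasoning is easy to illustrate.  The sketch below (my own, not from
Levesque's paper) applies the unit-clause rule to a CNF formula until no
unit clauses remain, assuming clauses are lists of integer literals with
negation written as a negative number:

```python
# A minimal sketch of unit propagation over CNF clauses.
# Literals are nonzero integers; -v means "not v".
def unit_propagate(clauses):
    """Apply the unit-clause rule until fixpoint.

    Returns (simplified_clauses, assignment), where assignment maps
    variables to True/False, or (None, None) on a contradiction.
    """
    assignment = {}
    clauses = [list(c) for c in clauses]
    while True:
        # Find a unit clause: a clause with exactly one literal.
        unit = next((c[0] for c in clauses if len(c) == 1), None)
        if unit is None:
            return clauses, assignment
        assignment[abs(unit)] = unit > 0
        new_clauses = []
        for c in clauses:
            if unit in c:
                continue                    # clause satisfied; drop it
            reduced = [l for l in c if l != -unit]
            if not reduced:
                return None, None           # empty clause: contradiction
            new_clauses.append(reduced)
        clauses = new_clauses
```

For example, `unit_propagate([[1], [-1, 2], [-2, 3]])` settles the whole
formula by propagation alone, with no search; that linearity is exactly
why Levesque singles the rule out.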

To develop Cyc, Lenat and Co. devoted 29 years and over $100 million
to encoding the knowledge and methods of reasoning needed to address
the kinds of problems that Levesque discusses.    (016)

The current version of Cyc could handle each of those examples, if the
proper knowledge had been encoded in its KB. But that's a very big IF.
To illustrate the issues, consider one of Hector's examples:    (017)

    Could a crocodile run a steeplechase?    (018)

A search of the WWW wouldn't turn up any examples.  But it wouldn't
find any examples for gazelles, and a gazelle would be more likely
to run and even win (but not if it had to carry a human rider).    (019)

This is an example that Cyc could answer if it had sufficient
knowledge about steeplechases, crocodiles, and gazelles.  But
a child could answer that question immediately after seeing
three short video clips:  a horse running a steeplechase,
a crocodile climbing out of a river, and a gazelle leaping.    (020)
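The kind of encoded knowledge at issue can be shown with a toy sketch.
Everything below is hypothetical (invented predicates, not Cyc's actual
KB): the question is answered by checking the abilities a steeplechase
requires against what is known about each animal.

```python
# Toy knowledge base (all facts and predicates are illustrative only).
requires = {
    "steeplechase": {"run_fast", "jump_obstacles", "sustain_gallop"},
}

abilities = {
    "horse":     {"run_fast", "jump_obstacles", "sustain_gallop"},
    "gazelle":   {"run_fast", "jump_obstacles", "sustain_gallop"},
    "crocodile": {"swim", "short_burst_run"},  # no jumping, no gallop
}

def could_perform(animal, task):
    """The animal 'could' do the task if it has every required ability."""
    return requires[task] <= abilities[animal]
```

With these facts, `could_perform("crocodile", "steeplechase")` is false
while `could_perform("gazelle", "steeplechase")` is true.  The hard part,
of course, is not this trivial subset test but acquiring and encoding the
facts in the first place, which is the "very big IF" above.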

A linguist, a mathematician, a chess player, and a child "see"
the answers (abduction) long before they can verbalize the reasons
(deduction).  The area where more research is needed is abduction.    (021)

On the role of deduction, I'd like to quote Larry Wos, a "blind seer"
who is one of the pioneers in automated theorem proving:    (022)

LW
> Will [OTTER] or will any other automated reasoning program replace
> scientists and engineers?  Never.  Unquestionably, automated
> reasoning has made great strides in the past few years.  But at
> most, I expect automated reasoning programs to enable people to
> devote their energy and time to bigger pictures.    (023)

Source:  Wos, Larry (1998) Programs that offer fast, flawless,
logical reasoning, Communications of the ACM 41:6, 87-95.    (024)

It's significant that Wos used the metaphor "bigger pictures."  Even
though he is congenitally blind, he has the equivalent of good visual
imagery:  he is one of the best blind bowlers in the US.    (025)

Abduction depends on the ability to "see" the bigger picture.
An intelligent computer system does not need better eyesight than
Larry Wos.  But it does need the ability to construct the equivalent
of mental models.  Another point by Wos:    (026)

LW
> One of my most important contributions to the field is the
> introduction, in 1963, of the use of strategy by automated
> reasoning programs.  I am almost equally proud of having
> introduced the term 'automated reasoning' in 1980; it captures
> far better than the traditional term 'automated theorem proving'
> the remarkable diversity of these computer programs.    (027)

The discovery of a strategy is itself a kind of abduction.  By shifting
the focus from theorem proving to reasoning, Wos emphasized the need
for a broader range of methods.    (028)

In general, I agree with Levesque that AI research should return to its
roots in knowledge representation and reasoning instead of basing the
research on a bag of special-case tricks.  I recommended that approach
in a paper for a conference on information extraction in 1999:    (029)

    http://www.jfsowa.com/pubs/template.htm
    Relating templates to language and logic    (030)

More recently, our VivoMind company used this method in a competition
with a dozen groups, most of which used the so-called "mainstream"
methods of IE.  For the results, see slides 144 & 145 of goal.pdf.    (031)

John    (032)

_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J    (033)
