
Re: [ontolog-forum] History of AI and Commercial Data Processing

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "John F. Sowa" <sowa@xxxxxxxxxxx>
Date: Wed, 24 Jun 2009 01:51:38 -0400
Message-id: <4A41BEEA.7030307@xxxxxxxxxxx>
John B., Ed, Ron, and Randall,    (01)

JB> It seems to me that AI has long abandoned practical application
 > areas in favor of the theoretic work currently in vogue...
 >
 > Clearly the future of AI, cognitive science or semantic processing
 > must include tight coupling with real world problems.    (02)

EB> I think the scope of this statement is too grand.
 > Do we regard AI as
 >   - primarily a (natural) science?
 >   - primarily a formal or philosophical discipline?
 >   - primarily an engineering discipline?    (03)

All three of those areas have been developed in detail over the past
half century, and many AI researchers/developers have moved freely
from one to the other and back again.    (04)

However, the AI culture has been so disjoint from the mainstream
of computer applications that many AI applications haven't been
as commercially successful as they could have been.  The following
example is typical:    (05)

RW> http://www.youtube.com/watch?v=bYYonyqHIoc
 > Where does this fit into the AI progression?
 > Does anyone know the underlying technology base?    (06)

RRS> If I were to guess, I'd have to say the technology is the
 > venerable "rigged demo."    (07)

I believe that it's more than a totally rigged demo, but less than
what it seems -- a computer system that fully understands language
including the emotional aspects.  I suspect that it is closer to
ELIZA, the program from the 1960s that simulated a psychiatrist.    (08)

Instead of really understanding the patient, ELIZA responded to
cues from typed input, classified the patterns, and selected one
of several canned responses for each pattern.  Milo has far more
sophisticated graphics and pattern matching, but I suspect that
its "intelligence" is basically an upgraded ELIZA.    (09)
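For readers who never saw it, the following is a minimal sketch in
Python of the kind of keyword matching and canned-response selection
that ELIZA-style programs use.  The patterns and responses are
invented purely for illustration; this is not a reconstruction of
ELIZA itself, and certainly not of whatever drives Milo.

    # ELIZA-style responder: match keyword patterns in the typed input
    # and pick one of several canned responses for the matched pattern.
    # All patterns and responses below are invented for illustration.
    import random
    import re

    RULES = [
        (re.compile(r"\bI feel (.+)", re.IGNORECASE),
         ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (re.compile(r"\bmy (mother|father)\b", re.IGNORECASE),
         ["Tell me more about your {0}.",
          "How do you get along with your {0}?"]),
    ]
    DEFAULTS = ["Please go on.", "I see.  Can you say more about that?"]

    def respond(text):
        # First matching rule wins; otherwise fall back to a default.
        for pattern, responses in RULES:
            match = pattern.search(text)
            if match:
                return random.choice(responses).format(*match.groups())
        return random.choice(DEFAULTS)

    print(respond("I feel ignored"))   # e.g. "Why do you feel ignored?"

No understanding is involved:  the program never represents what the
user means, which is why such systems break down as soon as the input
strays from the anticipated patterns.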

JB> It will be interesting to see the outcome of the DARPA research
 > in the next few years, if that info is made public.    (010)

Despite my affiliation with AI, I have more hope for practical
applications coming from computer game technology -- largely
because the people who develop games are professionals who are
disciplined by the need to produce results that people are
willing to pay for.    (011)

EB> The development of AI technologies -- algorithms for reasoning
 > -- is an engineering discipline.  Its objective is to produce
 > useful tools for reasoning to effect about real-world problems.    (012)

I agree that the goal of engineering is to solve real-world problems.
But the AI culture has been so isolated from the mainstream of
application development that most AI researchers wouldn't recognize
a real-world problem if they stumbled over it.    (013)

A prime example is the insanity of designing the Semantic Web
without recognizing that every major web site is built around
a relational database.    (014)

JFS>> AI is dominated by brilliant people who are totally out of touch
 >> with anything and everything that goes on in the field of commercial
 >> data processing.    (015)

EB> My gut reaction is: and rightly so.  Most commercial data processing
 > is not very interesting.  The technologies needed to do it well were
 > devised over the 30 years 1965-1995 and they are heavily and reliably
 > used.    (016)

First of all, I would never say "rightly so."  Too many researchers in
every field are prima donnas who are afraid to get their hands dirty.
I have no sympathy for that attitude.    (017)

I'll agree that many commercial applications are not as exciting
as research, but many of them, especially the larger ones, are very
challenging.  For example, just look at some of the large projects
that the US gov't and various large corporations pay for, which
turn out to be very expensive failures.    (018)

EB> The really interesting commercial processing began to benefit
 > from AI and OR technologies 30 years ago...    (019)

AI technology was migrating to commercial applications 50 years ago.
Just look at the contributions from LISP:    (020)

    Recursive functions, list processing, the if-then-else statement
    including multiple elseif's, automatic storage management and
    garbage collection, lambda expressions, functional programming,
    metalevel programming, the ability to manipulate programs, etc.    (021)

Those are just from LISP.  The full range of all the technology
contributed by AI research is enormous.  However, AI didn't get
credit for the applications, because the AI researchers let
other people do the dirty work.    (022)
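To make the point concrete, here is a short Python sketch (Python is
used only because it is widely readable) of a few of those LISP
innovations as they appear in a mainstream language today:  recursive
list processing, the conditional expression, lambda expressions and
higher-order functions, all running on top of automatic storage
management.

    # A few LISP innovations as they now appear in a mainstream language.
    from functools import reduce

    # Recursive list processing, with storage managed automatically.
    def total(xs):
        # if-then-else as an expression, not just a statement
        return 0 if not xs else xs[0] + total(xs[1:])

    # Lambda expressions and higher-order (functional) programming.
    squares = list(map(lambda x: x * x, [1, 2, 3, 4]))
    product = reduce(lambda acc, x: acc * x, [1, 2, 3, 4], 1)

    print(total([1, 2, 3, 4]), squares, product)   # 10 [1, 4, 9, 16] 24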


EB> The cost of enabling the great ideas to be useful is very high.
 > And the putative revolution has to produce a return on that
 > investment.    (024)

For many years, I was telling Doug Lenat that he should devote more
attention to implementing applications.  But he said that he didn't
want to dilute the "pure" research by diverting resources to
applications.    (025)

I believe that attitude was counterproductive both for Cyc and
for the broader field of AI.  First of all, if Lenat had devoted,
say, 20% of his staff to developing applications, they could have
brought in enough money to support more than that number of
additional researchers.    (026)

Second, doing research without any clear idea of how it is going
to be used is a recipe for creating products with no clear use.    (027)

Third, pure mathematics has benefited enormously from applications,
ranging from surveying land in Egypt to modern physics.  Without
that stimulus, mathematics would be in a primitive state.    (028)

EB> An EU study ending in 2007 concluded that we now have a lot
 > of AI tooling, but we don't have much encoded knowledge.    (029)

I disagree with that conclusion.  At VivoMind, we have been getting
excellent results from automated and semiautomated methods
for *learning* the knowledge.  I believe that Cyc would have
discovered that point years ago if they had worked on real
applications.    (030)

John    (031)


