
Re: [ontolog-forum] History of AI and Commercial Data Processing

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: John Bottoms <john@xxxxxxxxxxxxxxxxxx>
Date: Thu, 25 Jun 2009 01:10:37 -0400
Message-id: <4A4306CD.2020406@xxxxxxxxxxxxxxxxxx>
Christopher Menzel wrote:
 > On Jun 23, 2009, at 3:00 PM, Randall R Schulz wrote:
 >
 >>On Tuesday June 23 2009, Ron Wheeler wrote:
 >>
 >>>http://www.youtube.com/watch?v=bYYonyqHIoc
 >>>Where does this fit into the AI progression?
 >>>Does anyone know the underlying technology base.
 >>
 >>If I were to guess, I'd have to say the technology is the
 >>venerable "rigged demo."
 >
 > Word.
* * * * * * * * * * * * * * * *
re: Project Natal: Milo
"Project Natal: Milo a Fraud!"
http://boredatworkgamer.wordpress.com/2009/06/08/project-natal-milo-a-fraud/    (01)

Still, Milo raises questions about what constitutes a valid,
cognitively compelling human-computer interface. I'm not
really comfortable with the Turing test, for a number of reasons.
First, it is nearly tautological: it only acknowledges that
we communicate intelligently with something that also
communicates intelligently.    (02)

Second, the base premise is that the machine is lying about its
supposed humanity. Now, one can argue that this is a proof by
reductio (pick your favorite logic), but it still leaves me with
the notion that when I meet a stranger, I should lie to him about
something in order to judge how intelligent he is. I don't buy
that. I would like to think I can detect some level of intelligence
based on other criteria.    (03)

That is not to say that humans are particularly good at discerning
a given machine's intelligence. We know that it doesn't take many
flashing lights on a slot machine to convince some people that it
has not only an IQ, but intent and empathy.    (04)

Eliza raises another interesting question. It certainly points out
the truism that as soon as we understand the algorithm, we deem it
not intelligent. Eliza is the case of a smarter slot machine. And
JohnS's view of Eliza is interesting in that it could also be
considered a partial view of some human conversations I have
experienced, so I regard Eliza as a step that needed to be taken.
(John's comment is copied below.)    (05)

"Instead of really understanding the patient, ELIZA responded to
cues from typed input, classified the patterns, and selected one
of several canned responses for each pattern.  Milo has far more
sophisticated graphics and pattern matching, but I suspect that
its "intelligence" is basically an upgraded ELIZA."    (06)

Finally, it seems to me that much of our discussion about
ontologies leaves us in an over-constrained situation. Without
a problem statement to work against, we have to fend for ourselves
in determining the structure and tool sets needed to create the
systems that will move us forward. I've looked at a few lists of
components, and what is missing is a legend that maps the uses
to the requirements for each tool. And without an a priori
architecture, it is not clear how each tool would be integrated
into a system, short of reproducing the documented test
environment. (I've got a bone to pick with JDerrida about
integration after deconstruction, but I'll wait on that.)
Work on such a legend would benefit all participants.    (07)

-John Bottoms
  First Star
  Concord, MA
  T: 978-505-9878    (08)

_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx    (09)
