
Re: [ontolog-forum] electric sheep

To: "Pat Hayes" <phayes@xxxxxxx>
Cc: "[ontolog-forum] " <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "Barker, Sean (UK)" <Sean.Barker@xxxxxxxxxxxxxx>
Date: Sat, 1 Sep 2007 08:31:54 +0100
Message-id: <E18F7C3C090D5D40A854F1D080A84CA44CD1F8@xxxxxxxxxxxxxxxxxxxxxx>

Pat,
        Of course I am not suggesting this as a practical
implementation of a robot, nor of natural-language programming - this is
a thought experiment to ask at what point we move from pragmatics to
semantics. Is this the same point as when we move from signal processing
to symbol processing? Or where in the semantic layer cake does syntax
give way to semantics and/or pragmatics? (Should we consider the tag <b>
from the point of view of semantics or pragmatics?)    (01)

        Yes, "Understanding" is a very loaded word. I particularly want
to avoid using it in the context of complex engineering organizations,
except when I am explicitly talking about the people in them
understanding what is going on. If you want, I could compose a very,
very long e-mail transposing these thought experiments to organizations,
but the robot example requires less explanation - unless you think I am
actually talking about robots.    (02)

        Soft machine = human (I am almost inclined to advise you to read
William Burroughs).    (03)

Sean Barker
Bristol, UK    (04)

This mail is publicly posted to a distribution list as part of a process
of public discussion, any automatically generated statements to the
contrary notwithstanding. It is the opinion of the author, and does not
represent an official company view.    (05)


> -----Original Message-----
> From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx 
> [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Pat Hayes
> Sent: 31 August 2007 18:18
> To: Barker, Sean (UK)
> Cc: [ontolog-forum]
> Subject: Re: [ontolog-forum] electric sheep
> 
> 
> >To continue on from (was cake)
> >
> >     I go into my local greengrocer, and ask the robot assistant for
> >"three green apples, please". It goes off to the drawer marked apples,
> >checks the colour against a colour chart, and counts out three (I
> >assume Decrement Accumulator, Jump on Zero is built into the machine
> >code). From the point of view of pragmatics, so far so good. In this
> >description, I have no need to ask about semantics, or the concepts
> >the robot is using.
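The count-out step imagined here (decrement, jump on zero) can be sketched in a few lines of Python. This is a hypothetical illustration only; the drawer representation and colour check are invented for the example:

```python
def count_out(drawer, target_colour, n):
    """Pick n items whose colour matches the chart,
    decrement-and-jump-on-zero style: the loop exits when
    the count reaches zero or the drawer is empty."""
    picked = []
    while n > 0 and drawer:
        item = drawer.pop()
        if item["colour"] == target_colour:
            picked.append(item)
            n -= 1  # decrement the accumulator
    return picked

drawer = [{"colour": "red"}, {"colour": "green"},
          {"colour": "green"}, {"colour": "green"}]
apples = count_out(drawer, "green", 3)
```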
> 
> You really need to find out more about actual robotics and AI 
> generally. These points you are trying to make belong in a 
> discussion from 40 years ago. There is absolutely no way that 
> such a robot could be made to do any of this without having a 
> huge internal system of knowledge represented in some 
> processable form, probably in fact in an ontology.
> 
> >     If I look at the question of "how did it understand me?", I
> >could propose a simple syntactic solution - the robot expects
> >sentences of the form <quantity> [<qualifier>] <product>.
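One minimal way to realize that pattern is a token matcher along these lines. This is a hypothetical sketch; the word lists and the noise filter are invented for the example, not taken from any actual system:

```python
import re

# Hypothetical word classes for the <quantity> [<qualifier>] <product> pattern.
QUANTITIES = {"a": 1, "one": 1, "two": 2, "three": 3}
QUALIFIERS = {"green", "red", "best"}
PRODUCTS = {"apples", "claret", "bottle"}
NOISE = {"please", "of", "your", "my", "good", "man"}

def parse_request(utterance):
    """Match <quantity> [<qualifier>] <product>, discarding noise words."""
    words = [w for w in re.findall(r"[a-z]+", utterance.lower())
             if w not in NOISE]
    if not words or words[0] not in QUANTITIES:
        return None
    qty = QUANTITIES[words[0]]
    rest = words[1:]
    qualifier = rest[0] if rest and rest[0] in QUALIFIERS else None
    if qualifier:
        rest = rest[1:]
    if not rest or rest[0] not in PRODUCTS:
        return None
    return (qty, qualifier, rest[0])

result = parse_request("three green apples, please")
```

Note how brittle this is: on "A bottle of your best claret" the scheme picks out "bottle" as the product and loses "claret" entirely, which is rather the point of the thought experiment.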
> 
> Not a hope in hell that this would work. I have colleagues 
> here who are in fact building systems which converse at just 
> about this kind of conversational level with naive humans 
> (see for example 
> http://www.cs.rochester.edu/research/cisd/projects/trips/architecture/
> for an overview, and the references there for more details). 
> How such systems understand is a very complicated question to 
> answer, involving perception of the social context, the 
> common task, grammar, phonology and discourse structure.
> 
> >
> >     I could do something a little more complex, and add a dictionary
> >which, among other things, includes the information that a word is
> >one of {quantifier | qualifier | product* | noise}. (Is this an
> >ontology?)
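The dictionary move amounts to tagging each word with one of the four classes before matching. A hypothetical sketch, with the lexicon entries invented for the example:

```python
import re

# Hypothetical word-class dictionary of the kind described:
# each word maps to one of {quantifier | qualifier | product | noise}.
LEXICON = {
    "a": "quantifier", "three": "quantifier",
    "green": "qualifier", "best": "qualifier",
    "apples": "product", "claret": "product", "bottle": "product",
    "of": "noise", "your": "noise", "my": "noise",
    "good": "noise", "man": "noise", "please": "noise",
}

def classify(utterance):
    """Tag each word with its class; unknown words default to noise."""
    return [(w, LEXICON.get(w, "noise"))
            for w in re.findall(r"[a-z]+", utterance.lower())]

tags = classify("A bottle of your best claret, my good man")
```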
> 
> No.
> 
> >This probably allows the robot to be more flexible, for example, to
> >deal with requests such as "A bottle of your best claret, my good
> >man". I could adduce more complex approaches, and perhaps replace the
> >robot with a self-programming soft machine, which will happily argue
> >semantics with the next man.
> 
> Self programming soft machine?? I'm sorry, but you really do 
> not seem to have any idea what you are talking about. There 
> is no point in continuing this discussion thread. Learn 
> something about AI and NLP before giving us all such bland, 
> shallow advice about how to design ontologies, please.
> 
> >
> >     At what level of complexity do I need to start concerning
> >myself with Semantics rather than just Pragmatics?
> 
> At about anything much past a single chip: and if that chip 
> is itself a RISC processor, for example, then inside the chip 
> itself. Programs have semantics as well, you see.
> 
> >At what point would one say
> >the robot "understands concepts", rather than behaves according to 
> >particular pragmatics?
> 
> "Understands" is a very loaded word. I try to avoid it. But 
> if we change this to, at what point would one say that the 
> operation of the robot can only be understood or explained by 
> considering in part the semantics of the formalisms it uses 
> internally, then the answer is, at a very early point indeed. 
> One cannot even do recognition of voice phonemes without 
> getting involved with ontological/Krep issues, let alone 
> understand such English as "my good man" (which is well 
> beyond the current state of the AI art: nobody knows how to 
> process irony, humor and indirect allusions.)
> 
> >
> >     I should add that as we develop increasingly complex autonomous
> >systems, we need to create architectures that provide proper
> >separation of concerns, so this is primarily a question about
> >engineering rather than philosophy.
> 
> Yes, we are aware of this. Thank you for your advice, however.
> 
> Pat
> 
> 
> --
> ---------------------------------------------------------------------
> IHMC          (850)434 8903 or (650)494 3973   home
> 40 South Alcaniz St.  (850)202 4416   office
> Pensacola                     (850)202 4440   fax
> FL 32502                      (850)291 0667    cell
> phayesAT-SIGNihmc.us       http://www.ihmc.us/users/phayes
> 
>  
> 
>     (06)



_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx    (08)
