Re: [ontolog-forum] Child architecture

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Philip Jackson <philipcjacksonjr@xxxxxxxxxxx>
Date: Sun, 21 Dec 2014 11:45:33 -0500
Message-id: <SNT147-W67CC09CD83D2F9C4214BDDC1690@xxxxxxx>
Rich,

You (RC) wrote:
> I would think it might be easier
> to call that “self-aware” instead of “conscious”.
> That could take the extraneous parts back into the
> woodwork while we talk about representations of the
> Child’s Self, and how it looks, what contents are
> there, etc.
 
"Self-aware" is a good alternative, and could be used in many contexts: it is concise and somewhat self-explaining. However, it has two problems in relation to artificial consciousness (AC): the first is that AC requires awareness of the environment, and of the self's relation to the environment; the second is that 'awareness' is vague from a computational perspective, hence the thesis used 'observation', and gave a computational definition of observation. (p.136). So, overall I prefer the term 'artificial consciousness', rather than 'self-aware'. 
 
> > PJ:
> > > Consciousness is a dynamic property of a system observing itself and
> > > its relation to its environment. This property is evidenced by the
> > > creation of conceptual structures that represent a system's observations
> > > and thoughts about itself and its environment.
  
RC:
> It’s that “creation of conceptual structures” that I
> find insufficiently detailed.  In particular, we
> deduced that the Child needs to communicate with
> other Child objects, and presumably with people
> playing the role of “user” who get involved in the
> various conversations as needed to train the Child
> objects.
>
> In particular, the names of those conceptual
> structures should be communicable to other Child
> objects and users.  How can the Child convey meaning
> unless that meaning is produced by the users?  The
> automatically generated concepts would also have to
> have communicable names to participate in
> communications.  What matters is the mutual learning
> of new concepts, and their subsequent naming so that
> the concepts can be discussed among the Child objects
> and users.  That naming process is what I would like
> to investigate for the automatically generated
> concepts and relationships.
 
I've discussed these topics in previous posts to this thread, about as well as I can without specific questions about the statements made in those posts. So, I have nothing further to say at this point. Good luck with your investigations.
 
Phil
 

From: rich@xxxxxxxxxxxxxxxxxxxxxx
To: ontolog-forum@xxxxxxxxxxxxxxxx
Date: Sun, 21 Dec 2014 07:59:06 -0800
Subject: Re: [ontolog-forum] Child architecture

Philip, John and Steven,

 

You wrote:

JS: 

> But it's important to distinguish formal terms in a theory, data
> derived by experimental procedures and observations, and informal
> words whose meanings evolved through thousands of years of usage.
> They are related, but not in any one-to-one correspondence.

Agreed.

 

Remember that we are discussing a Child architecture, which we have deduced would need a plurality of Child objects that would each have a representation of Self in some vocabulary.  I would think it might be easier to call that “self-aware” instead of “conscious”.  That could take the extraneous parts back into the woodwork while we talk about representations of the Child’s Self, and how it looks, what contents are there, etc. 
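
For concreteness, here is one way a plurality of Child objects, each carrying a representation of Self in a shared vocabulary, might be sketched in Python. The classes and slots are purely illustrative assumptions on my part, not a commitment about the actual architecture:

    # Illustrative sketch only: each Child holds a self-representation
    # expressed in a vocabulary shared with its peers.

    class Child:
        def __init__(self, child_id, vocabulary):
            self.child_id = child_id
            self.vocabulary = vocabulary          # terms the Child may use
            # The representation of Self: statements about this Child,
            # phrased only in the shared vocabulary.
            self.self_model = {"id": child_id, "knows": set(), "peers": set()}

        def meet(self, other):
            """Record awareness of another Child object."""
            self.self_model["peers"].add(other.child_id)

        def learn(self, term):
            """Note a vocabulary term this Child now knows."""
            if term in self.vocabulary:
                self.self_model["knows"].add(term)

    shared_vocab = {"red", "block", "near"}
    children = [Child(f"Child-{i}", shared_vocab) for i in range(3)]
    children[0].meet(children[1])
    children[0].learn("block")
    print(children[0].self_model)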

 
> PJ:
> > Consciousness is a dynamic property of a system observing itself and
> > its relation to its environment. This property is evidenced by the
> > creation of conceptual structures that represent a system's observations
> > and thoughts about itself and its environment.

 

It’s that “creation of conceptual structures” that I find insufficiently detailed.  In particular, we deduced that the Child needs to communicate with other Child objects, and presumably with people playing the role of “user” who get involved in the various conversations as needed to train the Child objects.

 

In particular, the names of those conceptual structures should be communicable to other Child objects and users.  How can the Child convey meaning unless that meaning is produced by the users?  The automatically generated concepts would also have to have communicable names to participate in communications.  What matters is the mutual learning of new concepts, and their subsequent naming so that the concepts can be discussed among the Child objects and users.  That naming process is what I would like to investigate for the automatically generated concepts and relationships.
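
To make the naming question concrete, here is a small Python sketch of one conceivable mechanism: a shared registry in which automatically generated concepts receive communicable names that users or other Child objects can later refine. Everything in it is hypothetical; it is meant to pose the question, not to answer it:

    # Illustrative sketch (names hypothetical) of giving automatically
    # generated concepts communicable names, so that Child objects and
    # users can discuss them.

    import itertools

    class ConceptRegistry:
        """Shared registry mapping agreed names to generated concepts."""
        def __init__(self):
            self._counter = itertools.count(1)
            self._name_to_concept = {}

        def register(self, concept, proposed_name=None):
            """Give a concept a communicable name.  A user (or another
            Child) may propose a name; otherwise one is generated."""
            name = proposed_name or f"concept-{next(self._counter)}"
            self._name_to_concept[name] = concept
            return name

        def lookup(self, name):
            """Resolve a name received in a conversation back to a concept."""
            return self._name_to_concept.get(name)

    registry = ConceptRegistry()

    # A Child induces a new concept from its observations...
    new_concept = {"kind": "cluster", "members": ["block-7", "block-9"]}
    auto_name = registry.register(new_concept)             # e.g. "concept-1"

    # ...and a user later gives it a more meaningful name in conversation.
    user_name = registry.register(new_concept, "red-blocks")

    print(auto_name, user_name, registry.lookup("red-blocks"))

A real mechanism would also need some way for the Child objects and users to negotiate and converge on names, rather than simply overwriting one another.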

 

JS:
> This is an attempt to define a common word by a technical definition.
> It's misleading on both sides: (1) attempts to legislate how people
> use words have little or no effect on practice; (2) using informal
> words in a technical sense is confusing for the reader, who can't
> avoid mixing preconceived ideas with the formal presentation.

 

To clarify, I was not stating this as a definition of consciousness. It was just part of an answer to Steven E-Z's question as to whether I was claiming consciousness resides in bit patterns on hard disks (which I am not).

 

Self-awareness, in the sense of representations of the Child’s structure and contents, would certainly be stored on the hard drive, though a flash drive is much better for performance and power consumption.

 

-Rich

 

Sincerely,

Rich Cooper

EnglishLogicKernel.com

Rich AT EnglishLogicKernel DOT com

9 4 9 \ 5 2 5 - 5 7 1 2

From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Philip Jackson
Sent: Sunday, December 21, 2014 7:41 AM
To: [ontolog-forum]
Subject: Re: [ontolog-forum] Child architecture

 

John,

Thanks for your comments:
 
JS: 

> But it's important to distinguish formal terms in a theory, data
> derived by experimental procedures and observations, and informal
> words whose meanings evolved through thousands of years of usage.
> They are related, but not in any one-to-one correspondence.

Agreed.

 
> PJ:
> > Consciousness is a dynamic property of a system observing itself and
> > its relation to its environment. This property is evidenced by the
> > creation of conceptual structures that represent a system's observations
> > and thoughts about itself and its environment.

JS:
> This is an attempt to define a common word by a technical definition.
> It's misleading on both sides: (1) attempts to legislate how people
> use words have little or no effect on practice; (2) using informal
> words in a technical sense is confusing for the reader, who can't
> avoid mixing preconceived ideas with the formal presentation.

 

To clarify, I was not stating this as a definition of consciousness. It was just part of an answer to Steven E-Z's question as to whether I was claiming consciousness resides in bit patterns on hard disks (which I am not).

 

The problem of defining consciousness is similar to the problem of defining intelligence, which has challenged AI since its inception. Section 2.1 presents the thesis approach to defining 'human-level AI', and explains why the thesis needs to consider consciousness in discussing human-level AI. Section 2.3.4 discusses previous research on artificial consciousness. It introduces Aleksander & Morton's 'axioms of being conscious', which section 3.7.6 adapts for the thesis definition of artificial consciousness, i.e., what is required to say a system has (artificial) consciousness.

 

JS:
> As an example, I would cite Dehaene's term 'signature of consciousness'.
> That's a technical term with 'signature' as the head word (focus).
> The common word 'consciousness' is in a qualifier that serves as a
> reminder that this technical term is related to the informal term. *
>
> In my writing, I use the term 'conceptual graph' as a technical term,
> in which the head word is 'graph'. I also use the word 'concept',
> but I emphasize that it has no formal meaning other than "a node
> in a graph".
>
> If anybody asks what a concept means in the theory, I just repeat:
> "Formally, a concept is a node in a graph. Its only meaning comes
> from the operations that are defined on the graphs."
>
> * Dehaene, Stanislas (2014) _Consciousness and the Brain_,
> New York: Viking.
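
For anyone who has not worked with conceptual graphs, a toy Python sketch may make the point vivid: a concept is just a node, and whatever meaning it has comes only from the operations defined over the graphs. This is not Sowa's actual notation, and the 'join' below is a drastically simplified stand-in for the real conceptual-graph operation:

    # Toy illustration: a "concept" is nothing but a node label; its
    # meaning comes only from the operations defined on the graphs.

    class ConceptualGraph:
        def __init__(self):
            self.nodes = set()        # concepts: bare labels
            self.edges = set()        # (concept, relation, concept) triples

        def add_concept(self, node):
            self.nodes.add(node)

        def relate(self, a, relation, b):
            self.add_concept(a)
            self.add_concept(b)
            self.edges.add((a, relation, b))

        def join(self, other):
            """One operation that gives the nodes their meaning: merge
            two graphs on their shared concept labels (simplified)."""
            merged = ConceptualGraph()
            merged.nodes = self.nodes | other.nodes
            merged.edges = self.edges | other.edges
            return merged

        def neighbors(self, node):
            """Another such operation: what a concept is connected to."""
            return {(r, b) for (a, r, b) in self.edges if a == node}

    g1, g2 = ConceptualGraph(), ConceptualGraph()
    g1.relate("Cat", "agent-of", "Sit")
    g2.relate("Sit", "location", "Mat")
    g = g1.join(g2)
    print(g.neighbors("Sit"))    # {('location', 'Mat')}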
 

I take your point. In retrospect, perhaps the thesis should have made more use of the term 'artificial consciousness' throughout, to avoid confusion with the common word 'consciousness', and to avoid philosophical debates about whether an AI system really is or is not conscious.

 

Phil



_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
