
Re: [ontolog-forum] Child architecture

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Philip Jackson <philipcjacksonjr@xxxxxxxxxxx>
Date: Sat, 20 Dec 2014 19:10:27 -0500
Message-id: <SNT147-W32D5D730169CCAC7F7E985C1690@xxxxxxx>
Steven Ericsson-Zenith (SEZ) wrote:
> As I noted earlier, this account of "consciousness" is inadequate
> and contributes nothing to the thesis.
>
 
Well, what you stated earlier indicated you did not know what I meant by the term "consciousness". The thesis discusses consciousness in multiple sections, and you did not mention a specific one. So I referenced the section which states what the thesis means by the term. At least it contributes a definition of consciousness, even if you consider the definition inadequate. Let's go on to discuss your specific issues with this account of consciousness.
 
SEZ:
> What, exactly, is an observation?
 
On page 136, in section 3.7.6, which gives the thesis's definition of consciousness, footnote 47 says: "Observation may be considered as a physical process that occurs when one physical system changes state or performs an action, as a result of gaining information about the state of another physical system. Observation is intrinsic to computation, because each step of a computation involves observation of symbols."
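To make the definition concrete, here is a rough Python sketch of my own; the class and method names are hypothetical illustrations, not anything from the thesis. It shows one system changing state as a result of gaining information about the state of another.

class PhysicalSystem:
    def __init__(self, state):
        self.state = state

class Observer(PhysicalSystem):
    def observe(self, other):
        info = other.state               # gaining information about another system...
        self.state = ("observed", info)  # ...changes the observer's own state
        return info

# Each step of a computation involves observation of symbols, e.g. a
# read/write head observing a tape cell:
cell = PhysicalSystem(state="1")
head = Observer(state="idle")
symbol = head.observe(cell)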
 
This may be extended to observation as a process of gaining information about the environment through sensory input, or as a process in which a system gains information about itself; e.g., a human brain gains information about the state of its body from sensory signals, and one part of the brain may observe information about another part of the brain. It appears that, in principle, an artificial system could perform the same kinds of observations (cf. page 151).
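The same kind of sketch extends to self-observation, where one part of a system gains information about another part. Again, the names below are my own hypothetical illustrations, not the thesis's design.

class SelfObservingSystem:
    def __init__(self):
        self.body_state = {"temperature": 37.0}   # analogous to the body
        self.model_of_self = {}                   # information held about itself

    def observe_self(self):
        # One part of the system gains information about another part,
        # as a brain gains information about the state of its body.
        self.model_of_self["temperature"] = self.body_state["temperature"]

agent = SelfObservingSystem()
agent.observe_self()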
 
SEZ:
> Even informally, from
> where does a machine running a Tala agent perform such
> an observation?
 
Informally, a machine running a Tala agent performs an observation from wherever its sensory mechanisms are located.
 
SEZ:
> Where is there the combination of the
> collected observations such that the agent may
> differentiate one from another?
 
Observations result in data that exists in the physical system hosting the Tala agent, and the agent can differentiate one observation from another by its content, its source, and the time it was gained. The specific locations and media depend on the architecture of the system.
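As a rough illustration of how such data could be differentiated, suppose (hypothetically; the thesis does not prescribe this format) that each observation is recorded with its source and time of arrival:

from dataclasses import dataclass, field
import time

@dataclass
class Observation:
    source: str      # which sensory mechanism produced it
    content: object  # the information gained
    timestamp: float = field(default_factory=time.time)

collected = [
    Observation(source="camera", content="red cube"),
    Observation(source="microphone", content="loud noise"),
]

# The agent can differentiate one observation from another by source,
# content, or time:
visual = [o for o in collected if o.source == "camera"]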
 
SEZ:
> Assuming that the data
> structures are stored on a hard disk or similar ("in
> it's development of conceptual structures") your claim
> for "understanding" appears to be something in the form
> of bits upon one or more disk platters. By implication
> this suggests that "consciousness" (again, however you
> mean it) is a property of the existence of hard disks
> or, at least, bit patterns.
>
> You see the problem, I trust.
 
I understand your concern, but I do not agree that consciousness is a property of bit patterns per se. Consciousness is a dynamic property of a system observing itself and its relation to its environment. This property is evidenced by the creation of conceptual structures that represent the system's observations and thoughts about itself and its environment. In a digital system these conceptual structures are, at some level, bit patterns, but for a conscious system the patterns are not static: they change over time.
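To illustrate that the patterns are dynamic rather than static, here is a minimal sketch; the structures and names are my own hypothetical illustration, not the TalaMind design.

# Conceptual structures representing the system's observations of itself
# and its environment:
concepts = {"self": {}, "environment": {}}

def integrate(key, value):
    # Each new observation revises the stored conceptual structures,
    # and hence the underlying bit patterns, over time.
    concepts["environment"][key] = value
    concepts["self"]["last_observation"] = key

integrate("light_level", "dim")
integrate("light_level", "bright")   # the same structure, revised later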
 
Of course, this does not mean that a Tala agent would have the sensory richness and continuity of human consciousness, at least with current technology. It would only have a limited form of "artificial consciousness". For the benefit of other readers, section 4.2.7 discusses Chalmers' "Hard Problem" of consciousness, from the TalaMind perspective.
 
SEZ:
> If you agreed with Peirce (I don't think you really do, BTW, and I
> doubt anyone could speak to Wittgenstein's thoughts on this matter)
> your work would command some greater overall rigor. I fail to see
> the benefit of adherence to an informal and ancient meaning of the
> term "semantics" over the greater rigor imposed upon the term
> subsequently by Carnap.
 
The thesis is as rigorous as I could make it in the time available, but it only proposes a direction toward achieving human-level AI. The door is open for others to explore different directions toward making the approach more rigorous.
  
SEZ:
> You make the same mistakes as the many AI and "GAI" authors that
> have come before you, and you do it spectacularly well. :-)
 
I expect you meant "AGI" (?) -- I am glad I do something well, even if it is a failure in your eyes. Hopefully others will see value in the TalaMind approach, and pursue developing it. I think it is the best direction for achieving human-level AI, for the reasons discussed in section 7.8. You seem to be in the camp arguing that human-level AI is theoretically impossible(?).
 
Phil
 

