
Re: [ontolog-forum] TalaMind Child architecture

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Philip Jackson <philipcjacksonjr@xxxxxxxxxxx>
Date: Sat, 13 Dec 2014 11:27:28 -0500
Message-id: <SNT147-W60F52017A62B4823C799B3C1610@xxxxxxx>
Rich,
 
Thanks very much for your questions, which I've re-ordered to make them easier to answer.
 
> Looking at your overview slides, I came across [slide 23] which seems to
> support Child machine semantic construction concepts:
 
Yes, the thesis proposes an approach toward the creation of baby/child machines. This is also stated in slide 13, which presents Hypothesis I: "Intelligent systems can be designed as intelligence kernels, i.e. systems of concepts that can create and modify concepts to behave intelligently within an environment." (See thesis sections 1.4.1, 2.3.5, 4.2.6)
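
To make Hypothesis I a bit more concrete, here is a rough sketch in Python. This is not code from the thesis; the class and method names are my own invention for this note, and Tala expressions are simplified to plain strings. The point is only that an intelligence kernel is an initial collection of concepts, some of whose processing creates and modifies other concepts.

    # Illustrative sketch only; the names here are invented for this example.
    class Concept:
        def __init__(self, name, content):
            self.name = name        # e.g. "make-bread"
            self.content = content  # a Tala expression, simplified to a string

    class IntelligenceKernel:
        def __init__(self, initial_concepts):
            # The initial concepts are written by people, at least initially.
            self.concepts = {c.name: c for c in initial_concepts}

        def create_concept(self, name, content):
            # New concepts arise from the kernel's own processing of percepts
            # and of existing concepts.
            self.concepts[name] = Concept(name, content)

        def modify_concept(self, name, new_content):
            # Existing concepts can be revised as the agent learns.
            self.concepts[name].content = new_content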
 
> Philip you also will need an explanation system in TalaMind, IMHO....
> where will the rules come from in English text which will be stitched
> together to write (or speak) the explanation?
 
An advantage of the TalaMind approach is that much of a Tala agent's reasoning at the linguistic level is conducted using natural language sentences represented in Tala. So, much of an agent's reasoning would not be opaque, but could be open to inspection and review by people. (See pages 254-255)
 
Also, a Tala agent's intelligence kernel could have rules written in Tala that would allow it to say "I thought X because Y", e.g. enabling it to produce explanations like "I thought the glass was broken, because Andrea said Floyd broke it."
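
As a very rough sketch, and not the thesis's notation, such a rule only needs the agent to record a justification alongside each belief. The dictionary layout and the explain function below are purely hypothetical illustrations:

    # Hypothetical illustration: a belief stored with its justification can
    # be rendered as an "I thought X because Y" explanation.
    belief = {
        "thought": "the glass was broken",
        "because": "Andrea said Floyd broke it",
    }

    def explain(belief):
        return f"I thought {belief['thought']}, because {belief['because']}."

    print(explain(belief))
    # I thought the glass was broken, because Andrea said Floyd broke it.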
 
These initial rules would come from the intelligence kernel of the Tala agent. Other rules for giving explanations could be developed by the operation of the intelligence kernel as a Tala agent acts in an environment. People would need to write the intelligence kernel, at least initially. People could also give a Tala agent advice in English about how to give explanations in different contexts, e.g. "Please explain in 25 words or less why you thought X".
 
That said, the thesis presents theoretical discussions in favor of its approach, but there are many areas of research it does not discuss in detail, per section 1.6. Section 7.7 gives an initial list of areas for future research. Spatial reasoning is another area for future research, mentioned on p.16. Explanation systems could be added to the list, especially for concepts developed by processing at the archetype and associative levels of a TalaMind architecture.
 
> I am very interested in the AIXItl algorithm, which is theoretically
> sufficient as a Child machine, but the model is based on a discrete
> sampled system ...That makes it opaque how logic encodes knowledge and is fitted
> into the matrices of the linear system.  It would be nice to have more
> meaningful ways of generating new concepts for the Child every time a
> new meaningful experience is encountered by said child.
>
> ...Can you enlarge on the thesis ideas to cover how new concepts are
> created meaningfully in the Child machine version of TalaMind please?
 
Although one can argue mathematically that AIXI and AIXItl could achieve human-level AI, they have problems such as efficiency and opaqueness, and other theoretical issues, e.g. issues mentioned at:
 
From my perspective, AIXI and AIXItl are bottom-up approaches to achieving human-level AI, because they are driven by environmental input, including positive and negative reward signals. TalaMind is primarily a top-down approach, because it starts with an initial set of high-level concepts in an intelligence kernel, which guide its interactions with the environment and its development of new concepts. Yet TalaMind is also open to bottom-up development of concepts, from interactions of the associative level with the environment.
 
So, in general, TalaMind is focused on creating higher-level concepts meaningfully, along the lines you describe. Perhaps the best discussion of this is given in Chapter 6, which describes how two Tala agents could discover how to make bread, in a situation where they do not know that grain can be edible for humans.
 
> AIXItl, even with its opaque semantics, could still be explained by
> theories which the Child can form independently of AIXItl.  That seems
> to be what humans do; we justify our actions by stating our most
> fundamental beliefs – axioms in our theoretical model of the world.
> Showing how those beliefs interconnect to form a theory that explains
> most known experiences is an adequate explanation, IMHO.  It isn’t
> necessary to correctly explain the Child’s actions, only to present to
> the User the Child’s current theories of why Baby acts like Baby does.
>
> In other words, communications with users can be completely independent
> of the actual control system that seeks positive rewards and avoids
> negative ones.
>
> How do you envision that in a TalaMind Child?
 
In reasoning at the linguistic level, a Tala agent could in principle develop concepts and theories about itself, i.e. a self-model. The Tala language includes a reserved variable ?self, which in a TalaMind architecture enables a Tala agent to refer to itself. (section 5.4.16) This self-model could support a Tala agent's explanations for its thoughts and actions.
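
As a rough illustration of how that reference might work (not the thesis's actual Tala syntax; the rule shape and the bind function below are invented for this sketch), a self-model rule could mention ?self so that it binds to whichever agent is doing the reasoning:

    # Hypothetical sketch: ?self is a reserved variable referring to the
    # reasoning agent; bind() substitutes variables in a nested expression.
    rule_consequent = ["?self", "says",
                       ["I", "thought", "?x", "because", "?y"]]

    def bind(expr, bindings):
        if isinstance(expr, list):
            return [bind(e, bindings) for e in expr]
        return bindings.get(expr, expr)

    print(bind(rule_consequent,
               {"?self": "agent-1",
                "?x": "the glass was broken",
                "?y": "Andrea said Floyd broke it"}))
    # ['agent-1', 'says', ['I', 'thought', 'the glass was broken',
    #  'because', 'Andrea said Floyd broke it']]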
 
One other thing to note, as a weakness of AIXItl, is its dependence on positive and negative reward signals from the environment. While these are important for an embodied intelligent agent, they are not sufficient for human-level AI. Rather, a key strength of human-level intelligence is the ability to ignore pain (and pleasure) signals when necessary to achieve higher-level goals that transcend an individual's existence.
 
This topic came up in a dialog with Michael Brunnbauer in Ontolog-Forum in September, in which we discussed whether human-level AI would necessarily have an instinct for self-preservation, and whether the kinds of problems Stephen Hawking and others warn about could be avoided. My position is that:
 
One could formulate a goal for self-preservation, if one wished to include it in the initial set of concepts for a human-level AI system, i.e. its 'intelligence kernel'.
 
The concept of "self-preservation" could be quite different for a human-level AI than it is for a human. An AI system might consider being switched off in the same way that humans think of going to sleep, expecting to be awakened later.
 
In addition, a human-level AI could periodically back up its memory, and if it were physically destroyed, it could be reconstructed and its memory restored to the backup point. It would not remember events between the backup point and its restoration.
 
So even if it had a goal for self-preservation, a human-level AI might not give that goal the same importance a human being does. It might be more concerned about protection of the technical infrastructure for the backup system, which might include the cloud, and by extension, civilization in general.
 
A human-level AI could understand that humans cannot back up and restore their minds, or regenerate their bodies if they die, at least with present technologies. It could understand that self-preservation is more important for humans than for AI systems. The AI system could be willing to sacrifice itself to save human life, especially knowing that as an artificial system it could be restored.
 
I don't say all these things will necessarily happen, only that they are possibilities for how such systems could be developed.
 
Phil
 

 

From: rich@xxxxxxxxxxxxxxxxxxxxxx
To: ontolog-forum@xxxxxxxxxxxxxxxx
Date: Fri, 12 Dec 2014 15:02:25 -0800
Subject: [ontolog-forum] TalaMind Child architecture

I’ve changed the name of the thread to focus it on the specific contents below.

 

Philip you also will need an explanation system in TalaMind, IMHO.  An AIXItl system could also use one to explain its actions and perhaps to explain learned contexts.  But where will the rules come from in English text which will be stitched together to write (or speak) the explanation?

 

AIXItl, even with its opaque semantics, could still be explained by theories which the Child can form independently of AIXItl.  That seems to be what humans do; we justify our actions by stating our most fundamental beliefs – axioms in our theoretical model of the world.  Showing how those beliefs interconnect to form a theory that explains most known experiences is an adequate explanation, IMHO.  It isn’t necessary to correctly explain the Child’s actions, only to present to the User the Child’s current theories of why Baby acts like Baby does. 

 

In other words, communications with users can be completely independent of the actual control system that seeks positive rewards and avoids negative ones. 

 

How do you envision that in a TalaMind Child?

 

-Rich

 

Sincerely,

Rich Cooper

EnglishLogicKernel.com

Rich AT EnglishLogicKernel DOT com

9 4 9 \ 5 2 5 - 5 7 1 2

From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Rich Cooper
Sent: Friday, December 12, 2014 12:52 PM
To: '[ontolog-forum] '
Subject: Re: [ontolog-forum] Report of Ontolog Board of Trustees Meeting

 

Phil

 

Looking at your overview slides, I came across this one, which seems to support Child machine semantic construction concepts:

 

In addition to the Tala conceptual language, the architecture contains two other principal elements at the linguistic level:

•Conceptual Framework. An information architecture for managing an extensible collection of concepts, expressed in Tala.

•Conceptual Processes. An extensible system of processes that operate on concepts in the conceptual framework, to produce intelligent behaviors and new concepts.

 

I am very interested in the AIXItl algorithm, which is theoretically sufficient as a Child machine, but the model is based on a discrete sampled system – the canonical linear system model equations used by all control engineers and electrical engineers who process negative feedback systems.  That makes it opaque how logic encodes knowledge and is fitted into the matrices of the linear system.  It would be nice to have more meaningful ways of generating new concepts for the Child every time a new meaningful experience is encountered by said child. 

 

But looking for building block concepts – those I can fit with others in wide varieties of ways – is pretty much without meaning unless linked in some way to experiences with said constructions.  Those building blocks would necessarily have no individual meaning, and would have to draw their meaning from the context of the experience and from the outcome-to-initial-situation results. 

 

Can you enlarge on the thesis ideas to cover how new concepts are created meaningfully in the Child machine version of TalaMind please?

 

Thanks,

-Rich

 

Sincerely,

Rich Cooper

EnglishLogicKernel.com

Rich AT EnglishLogicKernel DOT com

9 4 9 \ 5 2 5 - 5 7 1 2

 



_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
