
Re: [ontolog-forum] Child architecture

To: "ontolog-forum@xxxxxxxxxxxxxxxx" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Philip Jackson <philipcjacksonjr@xxxxxxxxxxx>
Date: Thu, 18 Dec 2014 15:47:59 -0500
Message-id: <SNT147-W70603217BBC80AA4A5052BC16A0@xxxxxxx>
P.S. Given that a conceptual structure has been created as a Tala natural language expression, it is possible to name the structure and treat it as a unit / building block for future use in other conceptual structures. This is illustrated in the discovery of bread scenario, when the Tala agent Ben invents names for grain that is a gooey paste ("dough") and for baked dough that is a flat object ("flat bread"), and decides to call the sequence of steps for how he made flat bread "the flat bread process". However, the simulation does not include a process for creating new names; it is written to use names that are already familiar.
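
For illustration only, here is a minimal sketch, in Python, of the general idea of binding a name to an existing conceptual structure so it can be reused as a unit. The class and the nested tuples below are hypothetical stand-ins, not Tala syntax and not the simulation's code.

    # A hypothetical registry mapping invented names to previously created
    # conceptual structures, so a named structure can be reused as a unit.
    class ConceptRegistry:
        def __init__(self):
            self._concepts = {}

        def name(self, name, structure):
            # Bind a name to an existing structure for later reuse.
            self._concepts[name] = structure
            return name

        def lookup(self, name):
            return self._concepts[name]

    registry = ConceptRegistry()
    registry.name("dough", ("grain", ("that is", "a gooey paste")))
    registry.name("flat bread", ("baked dough", ("that is", "a flat object")))

    # The named units can now appear inside a larger conceptual structure:
    flat_bread_process = ("process",
                          ("step", ("mix", "flour", "water",
                                    ("to make", registry.lookup("dough")))),
                          ("step", ("bake", "dough",
                                    ("to make", registry.lookup("flat bread")))))
    registry.name("the flat bread process", flat_bread_process)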
 
- PCJ 
 

From: philipcjacksonjr@xxxxxxxxxxx
To: ontolog-forum@xxxxxxxxxxxxxxxx
Date: Thu, 18 Dec 2014 15:11:49 -0500
Subject: Re: [ontolog-forum] Child architecture

I've always been impressed by John Sowa's patience in answering questions. So, on further thought, I'll try to emulate him and respond, from the TalaMind perspective, to Rich Cooper's question about learning and naming concepts, in case readers may be interested. 
 
The question was: "How are new [conceptual/knowledge] building blocks made from old ones and named succinctly?"
 
Since John Bottoms discussed creation of names, I'll discuss the first part. It's a variation of questions Rich asked on December 12 below, about creation of concepts. I responded to those questions on December 13. Briefly, what I wrote was: 
 
"TalaMind is primarily a top-down approach [...] it starts with an initial set of high-level concepts in an intelligence kernel, which guide its interactions with the environment, and guide its development of new concepts. Yet TalaMind is also open to bottom-up development of concepts, from interactions of the associative level with the environment. So, in general, TalaMind is focused on creating higher-level concepts meaningfully [...] Perhaps the best discussion of this is given in Chapter 6, which describes how two Tala agents could discover how to make bread, in a situation where they do not know that grain can be edible for humans."
 
As McCarthy wrote in 1959, "If one wants a machine to be able to discover an abstraction, it seems most likely that the machine must be able to represent the abstraction in some relatively simple way." So the problem of discovering or creating new conceptual structures depends on how they are represented. A TalaMind architecture is open to multiple ways of representing concepts and conceptual structures (viz. thesis pages 35-36). However, the thesis concentrates on use of natural language syntax in Tala expressions to represent concepts. These provide a very concise, flexible, high-level way of representing concepts that can refer to structures of things and events.
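
As a rough illustration of that point (this is not the Tala notation itself, which is defined in the thesis), a sentence-like concept can be held as nested syntax, so that sub-structures remain inspectable and reusable. A Python sketch, with invented field names:

    # A sentence-like conceptual structure held as nested syntax. The role
    # names (verb, subject, purpose, ...) are illustrative only.
    concept = {
        "verb": "pound",
        "subject": {"name": "Ben"},
        "object": {"det": "the", "noun": "grain"},
        "purpose": {                      # an embedded clause, itself a structure
            "verb": "remove",
            "object": {"noun": "husks"},
            "from": {"noun": "kernels"},
        },
    }

    # Because the representation mirrors syntax, sub-structures (the purpose
    # clause, the object phrase) can be matched, extracted, or reused directly.
    purpose_clause = concept["purpose"]
    assert purpose_clause["verb"] == "remove"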
 
For example, section 3.6.7.8 discusses how Fauconnier's 'mental spaces' can be represented and used in a TalaMind architecture, to represent interpretations of "Hitchcock liked himself in that movie", supposing a hypothetical movie about Hitchcock's life, in which Hitchcock is played by Orson Welles, and Hitchcock himself plays a minor role (the man at the bus stop).
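
A toy rendering of the general idea, with invented names (not the representation given in section 3.6.7.8): two spaces plus a choice of counterpart mapping, so that interpreting "himself" amounts to choosing a connector between spaces.

    # Two mental spaces for "Hitchcock liked himself in that movie".
    reality_space = {"hitchcock": "the real director Hitchcock"}
    movie_space = {
        "hitchcock_role": "character Hitchcock, played by Orson Welles",
        "bus_stop_man": "minor character, played by the real Hitchcock",
    }

    # Cross-space connectors: which movie-space element counts as the
    # counterpart of the real Hitchcock under each reading?
    reading_1 = {"reality:hitchcock": "movie:hitchcock_role"}  # liked Welles's portrayal
    reading_2 = {"reality:hitchcock": "movie:bus_stop_man"}    # liked his own cameo

    # The ambiguity of "himself" is the choice among these mappings.
    interpretations = [reading_1, reading_2]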
 
Given multiple ways of representing conceptual structures, there are multiple ways of learning and creating new conceptual structures. The thesis concentrates on 'higher-level mentalities' needed for human-level AI, which include higher-level forms of learning and reasoning, and imagination (section 2.1.2). Higher-level learning includes learning by creating explanations and testing predictions, using causal and purposive reasoning, and learning about new domains by developing analogies and metaphors with previously known domains. Section 3.6.7.9 discusses how Fauconnier & Turner's 'conceptual blends' can be represented and used in a TalaMind architecture, to represent and understand analogies and metaphors, such as "That surgeon is a butcher."
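
For readers unfamiliar with conceptual blends, here is a toy sketch of the general mechanism for "That surgeon is a butcher", with invented role names and a crude projection function. It only gestures at Fauconnier & Turner's analysis and is not the representation used in the thesis.

    # Two input spaces with parallel roles.
    surgery_space = {"agent": "surgeon", "patient": "person",
                     "instrument": "scalpel", "goal": "healing",
                     "manner": "precise"}
    butchery_space = {"agent": "butcher", "patient": "carcass",
                      "instrument": "cleaver", "goal": "severing flesh",
                      "manner": "crude"}

    def blend(a, b, keep_from_a, keep_from_b):
        """Project selected roles from each input space into a blended space."""
        blended = {k: a[k] for k in keep_from_a}
        blended.update({k: b[k] for k in keep_from_b})
        return blended

    # The emergent meaning (incompetence) arises from mixing the surgeon's
    # goal and patient with the butcher's manner and means.
    blended_space = blend(surgery_space, butchery_space,
                          keep_from_a=("agent", "patient", "goal"),
                          keep_from_b=("instrument", "manner"))
    # {'agent': 'surgeon', 'patient': 'person', 'goal': 'healing',
    #  'instrument': 'cleaver', 'manner': 'crude'}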
 
Chapter 6 discusses two simulations to illustrate the TalaMind approach. (Disclaimers about the functional scope and content of the simulations are given in Chapters 5 and 6.) The first simulation illustrates conceptual processing in a "discovery of bread" scenario, in which two Tala agents investigate whether grain can be made edible for humans. What they discover in this scenario is a Tala conceptual structure that specifies the simplified steps necessary to make bread: first, pound grain to remove husks from the kernels; next, mash the kernels to make flour; next, mix the flour with water to make dough; finally, bake the dough to make bread.
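
Purely to illustrate what "a conceptual structure that specifies steps" could look like as data (this is not the structure used in the simulation), one might hold the procedure as an ordered list of steps whose product chains into the next step's input:

    # Each step names an action, its input, and its product, so the product of
    # one step can be checked against the input of the next. Illustrative only.
    flat_bread_process = [
        {"action": "pound", "input": "grain",   "product": "kernels"},  # remove husks
        {"action": "mash",  "input": "kernels", "product": "flour"},
        {"action": "mix",   "input": "flour",   "product": "dough"},    # with water
        {"action": "bake",  "input": "dough",   "product": "bread"},
    ]

    def chain_is_consistent(steps):
        """Check that each step's product is the next step's input."""
        return all(a["product"] == b["input"] for a, b in zip(steps, steps[1:]))

    assert chain_is_consistent(flat_bread_process)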
 
This story simulation has some indications of the importance of visualization and spatial reasoning: Initially, the Tala agent named Ben relies on an implicit visual perception that a grain of wheat resembles a nut, and based on this resemblance, thinks that perhaps grain is an edible seed inside an inedible shell. Using an analogy of grains to nuts, Ben speculates that perhaps pounding on the grain will release an edible seed/kernel, and verifies this by experiment. 
 
Later in the story, Ben is trying to make thick, soft bread rather than flat bread. Ben thinks that thick, soft bread could have holes or air pockets that would resemble bubbles in the bread, an implicit visualization. Ben then speculates that perhaps adding beer foam to dough could make air pockets in the bread, because beer foam has bubbles. This leads to thick, soft bread, even though Ben’s reasoning does not correspond to the actual mechanism by which beer foam can leaven bread. The simulation illustrates that reasoning by analogy may lead to useful discoveries, prior to knowledge of underlying mechanisms.
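
Schematically (and only schematically; this is not the simulation's code), the analogical moves in both episodes have the same shape: transfer a property from a familiar source onto the target because the two share some other property, then accept the resulting hypothesis only if an experiment succeeds. The function and property names below are illustrative.

    def analogical_hypothesis(source, target, shared, transferred):
        """If source and target share a property, speculate that the target
        also has the source's other property."""
        if shared in source["properties"] and shared in target["properties"]:
            return {"claim": f"{target['name']} may have {transferred}",
                    "because": f"{source['name']} has {shared} and {transferred}"}
        return None

    nut = {"name": "nut", "properties": {"hard shell", "edible kernel inside"}}
    grain = {"name": "grain of wheat", "properties": {"hard shell"}}

    hypothesis = analogical_hypothesis(nut, grain,
                                       shared="hard shell",
                                       transferred="edible kernel inside")
    # The hypothesis is then verified by experiment (pounding the grain),
    # without any knowledge of the underlying botany or chemistry.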
 
All of the reasoning and actions in the story simulations are represented using Tala natural language expressions. However, I agree with John Sowa's criticism that spatial reasoning is needed for human-level intelligence. The TalaMind architecture is open to future extension and integration of spatial reasoning processes and representations. I did not have time to address the topic in the scope of my thesis work, so page 16 of the thesis says only that it is a topic for future research. (Incidentally, Fauconnier & Turner's use of conceptual blends to solve the Riddle of the Buddhist Monk is also an example of spatial reasoning.)
 
Section 4.2.2.4 discusses the potential objection some might make that human thought is not linguistic in nature but rather perceptual. I think there is a strong role for both linguistic and perceptual processing in human-level AI, and an interplay between visual processing, spatial reasoning, symbolic processing, and linguistic processing along the lines described by Minsky's 1974 frame paper. The TalaMind approach is open to multiple forms of processing at the linguistic level, including conceptual graphs and predicate calculus, along with Tala natural language expressions.
 

From: rich@xxxxxxxxxxxxxxxxxxxxxx
To: ontolog-forum@xxxxxxxxxxxxxxxx
Date: Wed, 17 Dec 2014 18:18:45 -0800
Subject: Re: [ontolog-forum] Child architecture

Yes they do, John.

 

Thanks for the, as usual, erudite explanation of onomastic efforts.  I think you have demonstrated a very logical and useful path for progress: namely, that the “succinct names” chosen for any concept have to evolve socially, until a large enough population refers to that name in their conversations. 

 

I think that makes sense; we already know that person-to-person communication sets up interchangeable meanings, at least meanings informative enough to share with others in our conversational group.  But that makes naming a longer process, taking more conversations and more time.  So in the meantime, choosing an identifier based on order of arrival makes sense.  Over time, the names that evolve later can be asserted as synonymous with the arrival identifier. 
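
A small sketch of that order-of-arrival scheme, with hypothetical names, just to make the mechanism concrete:

    # Assign each new concept an identifier based on order of arrival, then
    # later assert that a socially agreed name is synonymous with it.
    class ArrivalNamer:
        def __init__(self):
            self._counter = 0
            self._synonyms = {}   # identifier -> names asserted synonymous

        def new_identifier(self):
            self._counter += 1
            ident = f"concept-{self._counter:04d}"   # e.g. "concept-0001"
            self._synonyms[ident] = set()
            return ident

        def assert_synonym(self, ident, name):
            # Called later, once a community has converged on a name.
            self._synonyms[ident].add(name)

    namer = ArrivalNamer()
    c1 = namer.new_identifier()        # assigned immediately, before any social name
    namer.assert_synonym(c1, "flat bread")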

 

-Rich

 

P.S.  In response to Philip Jackson’s request, I have removed the “TalaMind” element from my original post subject.  He is correct that it is a more general question. 

 

Sincerely,

Rich Cooper

EnglishLogicKernel.com

Rich AT EnglishLogicKernel DOT com

9 4 9 \ 5 2 5 - 5 7 1 2

From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of John Bottoms
Sent: Wednesday, December 17, 2014 5:59 PM
To: ontolog-forum@xxxxxxxxxxxxxxxx
Subject: Re: [ontolog-forum] TalaMind Child architecture

 

On 12/17/2014 12:39 PM, Rich Cooper wrote:

I am still uncomfortable with the Child design.  Here is an old (2011) clipping from another list, one that specializes in practicing clinical psychologists:

 

“As some people on the list have already noted, knowledge is based on what you have already learned or experienced. There is no pre-categorical interpretation, as though one could just divorce what one sees from anything ever interpreted, and just rationally perceive without any contamination by something already thought, believed, experienced, taught, formulated, etc.... Whenever you see something you use whatever faculties you have based on prior learning to grasp the thing you are perceiving. The idea of pure rational thought somehow uncontaminated by anything you have learned is difficult to entertain. When one even puts one single word to it one is already using language, metaphors, and concepts (among other things) -- things already known -- and internalized throughout life. It's a problem of proactive interference, though that concept is already way too simple.”

 

I try to imagine a way to create new knowledge from old building blocks, but the real problem of simplifying a complex experienced event is to somehow distill that event into a few simple words that evoke the past event merely by naming it.  Yet the meaning of the event necessarily has to involve the complexity of the original remembered event.  

 

The clipping above implies that constructing succinct names for experienced events is essential for continuing to build on top of stored experience, while still fitting the context into the seven plus or minus two chunks that we can evoke in one thought.  

 

Does anyone have a solution to this problem: how are new building blocks made from old ones and named succinctly?

Rich,
Unfortunately, the onomasticists fail us. I monitored their discussions for a few years, and the bulk of their work is historical analysis, such as how <some_river> got its name, or why it is called X here but Y there. Perhaps the forward-looking onomastic work of technology is relegated to the technologists.

Naming generally falls into two approaches. We can name associatively from a known name, which helps with ease of communication. Or we can assign an arbitrary name, particularly if the names are used by different communities. Here are three: "dead tree", "pollard", and "stump". They are not related phonologically. Interestingly, they also encode "size" as a property in the element name.

There is also an issue of encoding. We want our words to be sufficiently distinct so as to reduce confusion. This is the principle of assigning vectors and then using those vectors to recover data. I believe it is covered by vector space analysis and is used in search engines. The trick is figuring out how it is done in the mind. We know about things like analogy. But analogies can be drawn across a number of facets.
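
For concreteness, a compact sketch of the vector space idea as used in retrieval: represent each item as a vector of term counts and compare by cosine similarity, so that sufficiently distinct vectors are rarely confused with one another. Toy data, not a real search engine.

    import math

    def cosine(u, v):
        """Cosine similarity between two sparse term-count vectors."""
        terms = set(u) | set(v)
        dot = sum(u.get(t, 0) * v.get(t, 0) for t in terms)
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    docs = {
        "dead tree": {"dead": 1, "tree": 2, "standing": 1},
        "pollard":   {"tree": 2, "pruned": 1, "branches": 1},
        "stump":     {"tree": 1, "cut": 1, "ground": 1, "remains": 1},
    }

    query = {"tree": 1, "cut": 1}
    ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
    # 'stump' ranks first for this query because its vector shares more terms.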

To me it appears that the mind doesn't pay strict attention to the differences between elements and attributes; clearly, we "noun" verbs and "verb" nouns in English. A given culture may associate all fuzzy animals together, while we may assign sets based on DNA analysis. So given these types of diverse groupings of entities, we must assume that naming from culture to culture is equally slippery.

While the concept may not be popular with linguists, I believe that "communities of interest" (cultures) live in ecological niches, and the words they use are those that survive in that community as it deals with the environment. Among philosophers they say, "There are no rice gods where there is no rice," a recognition that words and concepts must have some application in order to have meaning; this is another way to discuss grounding. To me the name is, at the least, a look-up handle, and it may or may not impart some meaning.

Do these observations cover or obviate your use of "[naming] succinctly"?


-John Bottoms
 Concord, MA USA


 

-Rich

 

Sincerely,

Rich Cooper

EnglishLogicKernel.com

Rich AT EnglishLogicKernel DOT com

9 4 9 \ 5 2 5 - 5 7 1 2

From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Rich Cooper
Sent: Friday, December 12, 2014 3:02 PM
To: '[ontolog-forum] '
Subject: [ontolog-forum] TalaMind Child architecture

 

I’ve changed the name of the thread to focus it on the specific contents below.

 

Philip, you will also need an explanation system in TalaMind, IMHO.  An AIXItl system could also use one to explain its actions and perhaps to explain learned contexts.  But where will the rules come from, expressed in English text, that will be stitched together to write (or speak) the explanation?

 

AIXItl, even with its opaque semantics, could still be explained by theories which the Child can form independently of AIXItl.  That seems to be what humans do; we justify our actions by stating our most fundamental beliefs – axioms in our theoretical model of the world.  Showing how those beliefs interconnect to form a theory that explains most known experiences is an adequate explanation, IMHO.  It isn’t necessary to correctly explain the Child’s actions, only to present to the User the Child’s current theories of why Baby acts like Baby does.  

 

In other words, communications with users can be completely independent of the actual control system that seeks positive rewards and avoids negative ones.  
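
A schematic of that separation, with everything invented for illustration (this is neither TalaMind nor AIXItl code): the controller pursues rewards by whatever opaque means, while the explainer answers the user only from the Child's explicitly held theories.

    class Controller:
        """Opaque reward-seeking policy (a stand-in for AIXItl or anything else)."""
        def act(self, observation):
            return "some_action"

    class Explainer:
        """Produces explanations from stated beliefs, independent of the policy."""
        def __init__(self, theories):
            self.theories = theories          # the Child's current self-model

        def explain(self, action):
            relevant = [t for t in self.theories if action in t.get("explains", [])]
            return relevant or [{"belief": "no current theory covers this action"}]

    theories = [{"belief": "sweet food is rewarding",
                 "explains": ["some_action"]}]
    controller, explainer = Controller(), Explainer(theories)

    action = controller.act(observation="saw food")
    print(explainer.explain(action))   # explanation drawn from beliefs, not from
                                       # the controller's internal matrices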

 

How do you envision that in a TalaMind Child?

 

-Rich

 

Sincerely,

Rich Cooper

EnglishLogicKernel.com

Rich AT EnglishLogicKernel DOT com

9 4 9 \ 5 2 5 - 5 7 1 2

From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Rich Cooper
Sent: Friday, December 12, 2014 12:52 PM
To: '[ontolog-forum] '
Subject: Re: [ontolog-forum] Report of Ontolog Board of Trustees Meeting

 

Phil

 

Looking at your overview slides, I came across this one, which seems to support Child machine semantic construction concepts:

 

In addition to the Tala conceptual language, the architecture contains two other principal elements at the linguistic level:

•Conceptual Framework. An information architecture for managing an extensible collection of concepts, expressed in Tala.

•Conceptual Processes. An extensible system of processes that operate on concepts in the conceptual framework, to produce intelligent behaviors and new concepts.
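
To make the division of labor concrete (this is only an illustration, not the thesis's API), those two elements might be rendered minimally as a framework holding an extensible collection of concepts, and a process that reads concepts and adds new ones:

    class ConceptualFramework:
        """Manages an extensible collection of concepts (plain strings here,
        standing in for Tala expressions)."""
        def __init__(self):
            self.concepts = []

        def add(self, concept):
            self.concepts.append(concept)

    class ConceptualProcess:
        """Operates on concepts in the framework and may produce new ones."""
        def step(self, framework):
            for c in list(framework.concepts):
                if "grain can be pounded" in c:
                    framework.add("perhaps pounded grain can be mashed into flour")

    framework = ConceptualFramework()
    framework.add("grain can be pounded to remove husks")
    ConceptualProcess().step(framework)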

 

I am very interested in the AIXItl algorithm, which is theoretically sufficient as a Child machine, but the model is based on a discrete sampled system: the canonical linear system model equations used by all control engineers and electrical engineers who work with negative feedback systems.  That makes it opaque how logic encodes knowledge and how that knowledge is fitted into the matrices of the linear system.  It would be nice to have more meaningful ways of generating new concepts for the Child every time a new meaningful experience is encountered by said child.  

 

But looking for building-block concepts, those I can fit with others in a wide variety of ways, is pretty much without meaning unless they are linked in some way to experiences with said constructions.  Those building blocks would necessarily have no individual meaning, and would have to draw their meaning from the context of the experience and from the outcome-to-initial-situation results.  

 

Can you enlarge on the thesis ideas to cover how new concepts are created meaningfully in the Child machine version of TalaMind please?

 

Thanks,

-Rich

 

Sincerely,

Rich Cooper

EnglishLogicKernel.com

Rich AT EnglishLogicKernel DOT com

9 4 9 \ 5 2 5 - 5 7 1 2

 

 


_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
