ontolog-forum

[ontolog-forum] P. C. Jackson's Thesis - Toward Human-Level AI

To: "ontolog-forum@xxxxxxxxxxxxxxxx" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Philip Jackson <philipcjacksonjr@xxxxxxxxxxx>
Date: Sat, 6 Sep 2014 15:01:45 -0400
Message-id: <SNT147-W1856469B3AA1782D0B42EBC1C30@xxxxxxx>
Hello Michael,
 
Thanks very much for your comments. I'm splitting my reply into two threads, since there are two separate topics, both somewhat different in focus from the original thread ("Advances in Cognitive Systems"). The threads are:
  Future Consequences of AI and Automation
  P. C. Jackson's Thesis - Toward Human-Level AI
 
On Sat, Sep 06, 2014, Michael Brunnbauer wrote:
>
> On Wed, Sep 03, 2014 at 05:49:52PM -0400, Philip Jackson wrote:
> >
> > Section 2.1 of my thesis discusses 'higher-level mentalities' needed to support
> > human-level intelligence in general.
>
> I have just finished reading chapters 1-4 of your thesis. Here are my thoughts
> so far:
>
> It seems you are proposing a formal system allowing simple (Turing complete)
> operations on a data structure including pointers (allowing self-referential
> loops). Algorithms using these operations can be represented within the
> data structure (allowing self-modifying code).
>
> So far this could just be a normal computer, and many of your arguments that
> human-level AI is possible with TalaMind just seem to point to its Turing
> completeness.
>
> But your data structure corresponds to English sentences described by
> dependency grammar and the operations carried out should correspond to all
> sorts of strict and fuzzy "thinking" within that data structure needed for
> human-level AI.
 
Agreed. To achieve human-level AI, a TalaMind architecture will probably need to include both formal and fuzzy methods, and may need to go beyond what is possible with Turing machines. Perhaps the most relevant discussions of these topics are in sections 4.1 and 4.2.2.4.
 
I say "probably", because the jury is out regarding whether Turing machines are sufficient to achieve human-level AI. If it is, then the necessary fuzzy methods could be implemented in Turing machines, and one might argue they are really formal methods.
 
On the other hand, perhaps some aspects of human-level AI cannot be achieved with Turing machines; perhaps some features need to be implemented with some form of "continuous computation" (viz. 4.1.2.4, 4.1.2.5). In that case, TalaMind systems would need to go beyond Turing machines to achieve human-level AI.
 
For the benefit of other readers: section 1.5 introduces the "TalaMind architecture", discussed throughout the thesis. The architecture includes a linguistic level, an "archetype" level for cognitive concept structures, and an associative level. Although each of these levels could be implemented (at least to some extent) symbolically in Turing machines, non-symbolic processing (e.g. connectionism) could also be used to support each level. (The thesis does not discuss spatial reasoning and visualization, which are important topics left for future research.)
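For readers who prefer code, here is a rough Python sketch of how the three levels might be organized. This is only my illustration for this email; the class and field names are invented, not taken from the thesis.

  # Hypothetical sketch of the three TalaMind levels; names are illustrative.

  class LinguisticLevel:
      """Holds Tala conceptual structures, i.e. natural-language-like sentences."""
      def __init__(self):
          self.concepts = []            # e.g. dependency-parsed English sentences

  class ArchetypeLevel:
      """Holds cognitive concept structures underlying word meanings."""
      def __init__(self):
          self.archetypes = {}          # name -> concept structure

  class AssociativeLevel:
      """Supports association and retrieval; could be symbolic or connectionist."""
      def __init__(self):
          self.links = {}               # concept -> associated concepts

  class TalaMindAgent:
      """A conceptual agent combining the three levels."""
      def __init__(self):
          self.linguistic = LinguisticLevel()
          self.archetype = ArchetypeLevel()
          self.associative = AssociativeLevel()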
 
> I can see how a mentalese using natural language can facilitate bootstrapping
> a knowledge base used by such a system. Compared with less "semantically
> grounded" knowledge representations or internal languages (an extreme case
> may be neural networks), the system is more accessible for "debugging".
> The system would also be able to learn by communicating with humans from the
> start.
 
Agreed, to some extent (viz. 7.8). Whether the system could really learn by communicating with humans from the start would depend on its understanding of word meanings, commonsense and encyclopedic knowledge, embodiment, etc. Perhaps a system could have some very limited communication with humans without such knowledge, subject to the limitations that currently affect NLP systems.
 
However, it seems likely that a human-level AI needs to learn the meanings of common words, and develop its initial commonsense and encyclopedic knowledge, in much the same way that human infants do: very gradually, through interaction with an environment and with humans, in a self-developing, self-extending process. This is of course much easier said than done, but it is one area where I think research needs to be focused to achieve human-level AI. It is one reason the thesis adopts Hypothesis I and discusses self-developing, self-extending systems.
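To make "easier said than done" slightly more concrete, here is a toy Python sketch of cross-situational word learning, where word meanings emerge gradually from co-occurrence of words and referents across many situations. It is only an illustration of the general idea, not a mechanism proposed in the thesis.

  # Toy cross-situational word learning: each "situation" pairs an utterance
  # with the objects present, and referents are guessed from co-occurrence.
  from collections import Counter, defaultdict

  cooccur = defaultdict(Counter)

  def observe(words, objects_in_scene):
      """Record one situation: an utterance heard while objects are in view."""
      for w in words:
          cooccur[w].update(objects_in_scene)

  observe(["look", "ball"], ["ball", "table"])
  observe(["red", "ball"],  ["ball", "cup"])
  observe(["the", "cup"],   ["cup", "table"])

  def best_referent(word):
      """The object most often present when the word was heard."""
      return cooccur[word].most_common(1)[0][0]

  print(best_referent("ball"))   # -> "ball", after enough situations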
 
Once a human-level AI develops such knowledge, the knowledge could in principle be copied and reused in other human-level AIs, so that they wouldn't have to go through the same learning process.
 
> I can even see how the system could work if thought is perceptual. The system
> could start with symbols grounded in experience working with a different set
> of rules. But one can doubt whether the data structures and operations entailed
> by natural language are appropriate for the required computation.
>
> Compared with a knowledge representation based on formal logic, you have a
> much more complex mess of *unknown* rules that have to be developed or evolved
> and the interplay and evolution of the rules has to be guided. I have some
> doubts whether this would really be easier than solving all the problems with
> formal logic.
 
The TalaMind approach does not preclude using formal logic (e.g. predicate calculus or conceptual graphs), nor does it preclude other notations, diagrams, etc. to support representation and processing. The thesis does not claim that natural language is always the best way of representing concepts.
 
One key issue is that however knowledge is represented, human-level AI involves a "complex mess of unknown rules that have to be developed or evolved and the interplay and evolution of the rules has to be guided." The challenge is not made any simpler by trying to solve all the problems with formal logic. If it were, perhaps human-level AI could already have been achieved, since people have been trying to solve the problems of AI with formal logic since the 1950s.
 
As I noted in an earlier thread, one advantage of the TalaMind approach is that it leverages a point stated by John Sowa: "Natural languages have words for all the operators of first-order logic, modal logic, and many logics that have yet to be invented." Since Tala includes these words, and NL syntax to support them, it facilitates AI conceptual processing for modality, causality, purposive reasoning, self-reference, conjecture, meta-reasoning, etc. These topics can be studied using Tala, without having to invent new formal logics, and without people having to understand new logic notations.
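As a purely illustrative example (my own sketch for this email, not the thesis's actual Tala notation), an English conjecture with modal and causal words could be held as a nested dependency-style structure, and processed by inspecting those words directly:

  # Hypothetical sketch: "Perhaps the ground is wet because it rained."
  # The structure and field names are invented for this email.
  sentence = {
      "verb": "is",
      "subject": {"noun": "ground", "det": "the"},
      "complement": "wet",
      "modal": "perhaps",              # NL word for a modal operator
      "because": {                     # NL word expressing causality
          "verb": "rained",
          "subject": "it",
      },
  }

  # Conceptual processing can test the modal/causal words directly,
  # without first translating into a separate formal logic notation.
  if sentence.get("modal") == "perhaps":
      print("Treat this as a conjecture, not an assertion.")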
 
Hypothesis I of the thesis is that "intelligent systems can be designed as 'intelligence kernels', i.e. systems of concepts that can create and modify concepts to behave intelligently within an environment." That is, the complex set of unknown rules needs to be created by a self-developing, self-extending process, as a result of interaction with an environment, and with humans. To achieve human-level AI, one focus of research needs to be on developing such systems.
 
I called such systems "intelligence kernels" in 1979. Nowadays, AGI researchers use the term "seed AI", a name developed independently. Nilsson described essentially this idea in 2005 and referred to earlier proposals along the same lines dating from 1998. (Viz. thesis section 2.3.5)
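A toy Python sketch may help convey the kernel idea (again, my own illustration with invented names): concepts are data, and some concepts are executable rules that create or modify other concepts in response to the environment.

  # Hypothetical toy "intelligence kernel": a system of concepts in which
  # rule-concepts can create new concepts, so the system extends itself.

  def kernel_step(concepts, percept):
      """One cycle: record a percept, then let rule-concepts add concepts."""
      concepts.append({"kind": "percept", "content": percept})
      new = []
      for c in concepts:
          if c.get("kind") == "rule":
              new.extend(c["apply"](concepts, percept))
      concepts.extend(new)
      return concepts

  # A rule-concept that creates a belief whenever it perceives rain.
  rain_rule = {
      "kind": "rule",
      "apply": lambda cs, p: ([{"kind": "belief", "content": "ground may be wet"}]
                              if p == "rain" else []),
  }

  concepts = [rain_rule]
  kernel_step(concepts, "rain")   # adds a percept and a new belief concept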
 
> Chapter 3 demonstrates the immense complexity of natural language
> understanding. Even subproblems like interpreting negation, metaphor
> or recognizing and handling contradiction are extremely hard. The question
> of how we could arrive at a working set of rules and how much of them should
> be self-evolved by the system remains open.
 
Agreed, the question remains open. Much work remains to be done. However, we have an existence proof that the problem can be solved, since it is solved by the average child learning a natural language. In principle, human-level AI could solve the problem the same way, if the required computational resources can be provided with existing technology. (Again, a question dependent on whether human-level AI can be achieved with computers, or requires more exotic technologies.)
 
> Your TalaMind and John's VivoMind both seem to rely mostly on a single
> mentalese for agent communication. If I remember right, this is not what
> Minsky had in mind and probably is not what we all have in our biological minds.
 
The TalaMind approach does rely mostly on the Tala mentalese for agent communication at the linguistic level of the architecture. I say "mostly" because the approach does not preclude other notations, as mentioned above. I cannot speak for VivoMind.
 
Agreed, this is not what Minsky discussed. He argued that because agents in a society of mind would be simple and diverse, in general they would not be able to understand a common language. Thesis section 2.3.3.2.1 discusses the society of mind paradigm, and compares and contrasts the TalaMind approach with Minsky's proposal.
 
My thesis does not make any claims regarding what we have in our biological minds.
 
> One might hope that the construction plan for intelligence is not too
> complicated or that intelligence emerges from some very basic properties.
> But if I consider how the brains of other mammals are similar to ours, I doubt
> the latter.
 
I would tend to agree that human-level intelligence probably does not emerge from a system with very simple, basic properties.
 
Regards,
 
Phil Jackson
 
Thesis Information (for readers new to the forum):
http://www.philjackson.prohosting.com/PCJacksonPhDThesisInformation.html 

_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
