On Sun, Sep 07, 2014, Michael Brunnbauer wrote:
> On Sun, Sep 07, 2014 at 02:23:23PM -0400, Philip Jackson wrote:
> > To define and recognize human-level AI, I propose a 'design inspection'
> > approach, rather than a behaviorist test.
> Ah. I thought you wanted to *augment* the classic behaviorist test with design
> inspection.
Quoting thesis section 1.2, p.5:
"While a Turing Test can facilitate recognizing human-level AI if it is created, it does not serve as a good definition of the goal we are trying to achieve, for three reasons: First, as a behaviorist test it does not ensure the system being tested actually performs internal processing we would call intelligent. Second, the Turing Test is subjective: A behavior one observer calls intelligent may not be called intelligent by another observer, or even by the same observer at a different time. Third, it conflates human-level intelligence with human-identical intelligence. These issues are further discussed in §2.1.1. This thesis will propose an alternative approach, augmenting the Turing test, which involves inspecting the internal design and operation of any proposed system, to see if it can in principle support human-level intelligence. This alternative defines human-level intelligence by identifying and describing certain capabilities not yet achieved by any AI system, in particular capabilities this thesis will call higher-level mentalities, which include natural language understanding, higher-level forms of learning and reasoning, imagination, and consciousness."
That is, the Turing Test is not a good definition of human-level AI, and the thesis proposes an alternative, design inspection approach. The design inspection approach could replace the Turing Test, or it could augment the Turing Test by giving insight into a system's abilities that could be difficult to discern with a Turing Test, such as higher-level forms of learning and reasoning, imagination, and consciousness.
As the first quoted sentence suggests, a TalaMind system could participate in a Turing Test. Depending on its knowledge of human sociality, the system might pass a Turing Test; most people might think it was human. If it lacked such knowledge, people might say it seemed to have human-level intelligence, though it did not actually seem human.
Your question suggests that perhaps I should have used a different word than "augment". I'll ponder alternative wording for future use.
> > Since Tala is a conceptual language with a syntax based on English,
> > a TalaMind system could provide traces of its reasoning about a problem,
> > represented as English sentences.
> While this is true as long as the TalaMind is not very sophisticated yet, I do
> not see why the reasoning would always use English sentences that make sense
> to a human observer - especially if it is partly self-evolved.
> > So, this information could be accessible to humans
> It is accessible to humans in principle but I question that it will be
> practical to do this for a fully developed human-level AI for the purpose
> of controlling it.
True, some of the system's reasoning might not be represented as English sentences. In principle the system could explain or summarize such reasoning in English, yet the information might be too detailed or voluminous for people to understand in real time. On the other hand, the information could also be available to other AI systems, which could monitor and double-check it. This information might be most useful for deliberation about important decisions before action is taken, or for review of decisions after action is taken. Thus its use for control could be limited. Nonetheless, the TalaMind approach has value in being able to provide information of this kind (see §7.8).
Phil