
Re: [ontolog-forum] Advances in Cognitive Systems

To: "[ontolog-forum] " <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Michael Brunnbauer <brunni@xxxxxxxxxxxx>
Date: Sat, 6 Sep 2014 15:40:47 +0200
Message-id: <20140906134047.GA1907@xxxxxxxxxxxx>

Hello Philip,    (01)

On Wed, Sep 03, 2014 at 05:49:52PM -0400, Philip Jackson wrote:
> I care about the desirability of consequences for achieving human-level AI,
> which is why section 7.9 discusses the problem of technological unemployment.
> I think probably everyone else who posted or was quoted in this thread, also
> hopes for desirable consequences of work in the field.    (02)

The way I see it, these are the possible scenarios for achieving human-level
AI. A desirable outcome seems either unlikely or dependent on very particular
circumstances.    (03)

1. It is controllable and servile    (04)

The "best case" scenario (maybe not for the AI) which I find highly unlikely.
But maybe I've read too many Science Fiction books unlike Minsky's "The Turing
Option".    (05)

2. It is not controllable and servile    (06)

2.1. Its capabilities basically match ours    (07)

If many of them are built and compete for the same resources, conflict seems
very probable.    (08)

2.2. It outperforms us on many levels    (09)

If many of them are built, humanity will become obsolete (probably without
conflict).    (010)

> > Section 2.1 of my thesis discusses 'higher-level mentalities' needed to
> > support human-level intelligence in general.    (011)

I have just finished reading chapters 1-4 of your thesis. Here are my thoughts
so far:    (012)

It seems you are proposing a formal system allowing simple (Turing-complete)
operations on a data structure that includes pointers (allowing
self-referential loops). Algorithms using these operations can be represented
within the data structure itself (allowing self-modifying code).    (013)

So far, this could just be a normal computer, and many of your arguments that
human-level AI is possible with TalaMind seem to point merely to its Turing
completeness.    (014)
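
To check that I have understood the general shape of this, here is a toy
sketch in Python (entirely my own invention, not your notation) of a
structure whose nodes can point to each other, including to themselves, and
whose rules are stored as nodes in the same structure they modify:

class Node:
    def __init__(self, label, payload=None):
        self.label = label
        self.payload = payload   # plain data, or a callable for rule nodes
        self.links = []          # pointers to other nodes, cycles allowed

def run(graph):
    # Interpreter step: apply every rule node to the whole graph.
    for node in list(graph):
        if callable(node.payload):
            node.payload(graph)

# A self-referential loop: this node points to itself.
loop = Node("loop")
loop.links.append(loop)

# Self-modifying code: a rule that installs a further rule into the graph.
def spawn_rule(graph):
    if not any(n.label == "rule2" for n in graph):
        graph.append(Node("rule2", payload=lambda g: g.append(Node("fact"))))

graph = [loop, Node("rule1", payload=spawn_rule)]
run(graph)                          # rule1 adds rule2
run(graph)                          # rule2 adds a "fact" node
print([n.label for n in graph])     # ['loop', 'rule1', 'rule2', 'fact']

Of course this is just a normal program, which is exactly my point above.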

But your data structure corresponds to English sentences described by a
dependency grammar, and the operations carried out on it are meant to
correspond to all the sorts of strict and fuzzy "thinking" within that data
structure that are needed for human-level AI.    (015)
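
As a toy example of what I mean (again in Python, and again my own
illustration rather than your Tala syntax), a sentence could be stored as a
head word with labelled dependents, and one simple "thinking" operation would
be a rewrite of that structure:

# "The dog chased the cat" as a head verb with labelled dependents.
sentence = {
    "head": "chased",
    "deps": {
        "nsubj": {"head": "dog", "deps": {"det": {"head": "the", "deps": {}}}},
        "obj":   {"head": "cat", "deps": {"det": {"head": "the", "deps": {}}}},
    },
}

def ask_about_subject(s):
    # A trivial operation on the structure: turn the assertion into a
    # question about its subject ("Who chased the cat?").
    question = {"head": s["head"], "deps": dict(s["deps"])}
    question["deps"]["nsubj"] = {"head": "who", "deps": {}}
    return question

print(ask_about_subject(sentence))

The hard part is obviously not this kind of rewriting but deciding which of
the countless possible strict and fuzzy operations constitute the "thinking"
you need.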

I can see how a mentalese based on natural language can facilitate
bootstrapping the knowledge base used by such a system. Compared with less
"semantically grounded" knowledge representations or internal languages (an
extreme case being neural networks), the system is more accessible for
"debugging". The system would also be able to learn by communicating with
humans from the start.    (016)

I can even see how the system could work if thought is perceptual: the system
could start with symbols grounded in experience, working with a different set
of rules. But one can doubt whether the data structures and operations
entailed by natural language are appropriate for the required
computation.    (017)

Compared with a knowledge representation based on formal logic, you have a
much more complex mess of *unknown* rules that have to be developed or
evolved, and the interplay and evolution of those rules has to be guided. I
have some doubts whether this would really be easier than solving all the
problems with formal logic.    (018)
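
To make the contrast concrete (again just a toy sketch of my own, with
hypothetical representations): for a logical rule the inference procedure is
known and mechanical, while for the English form of the same knowledge the
rules for using it, e.g. what exactly "usually" licenses, are the part that
still has to be developed or evolved:

# The same knowledge twice, in hypothetical toy representations.
logic_kb   = {("bird", "tweety"), ("rule", "bird(X) -> canfly(X)")}
english_kb = ["Tweety is a bird.", "Birds can usually fly."]

def logic_infer(kb):
    # Trivial modus ponens, hard-wired to this one rule shape.
    if ("bird", "tweety") in kb and ("rule", "bird(X) -> canfly(X)") in kb:
        return "canfly(tweety)"

def english_infer(kb):
    # What should this return, and with what confidence?
    raise NotImplementedError("the mess of unknown rules goes here")

print(logic_infer(logic_kb))        # canfly(tweety)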

Chapter 3 demonstrates the immense complexity of natural language
understanding. Even subproblems like interpreting negation or metaphor, or
recognizing and handling contradiction, are extremely hard. The question of
how we could arrive at a working set of rules, and how many of them should be
self-evolved by the system, remains open.    (019)

Your TalaMind and John's VivoMind both seem to rely mostly on a single
mentalese for agent communication. If I remember correctly, this is not what
Minsky had in mind, and it is probably not what we have in our biological
minds.    (020)

One might hope that the construction plan for intelligence is not too
complicated, or that intelligence emerges from some very basic properties.
But when I consider how similar the brains of other mammals are to ours, I
doubt the latter.    (021)

Regards,    (022)

Michael Brunnbauer    (023)


-- 
++  Michael Brunnbauer
++  netEstate GmbH
++  Geisenhausener Straße 11a
++  81379 München
++  Tel +49 89 32 19 77 80
++  Fax +49 89 32 19 77 89 
++  E-Mail brunni@xxxxxxxxxxxx
++  http://www.netestate.de/
++
++  Sitz: München, HRB Nr.142452 (Handelsregister B München)
++  USt-IdNr. DE221033342
++  Geschäftsführer: Michael Brunnbauer, Franz Brunnbauer
++  Prokurist: Dipl. Kfm. (Univ.) Markus Hendel    (024)



_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J    (01)
