Thanks very much for your comments. I'm splitting my reply into two threads, since there are two separate topics, both somewhat different in focus from the original thread ("Advances in Cognitive Systems"). The threads are:
Future Consequences of AI and Automation
P. C. Jackson's Thesis - Toward Human-Level AI
On Sat, Sep 06, 2014, Michael Brunnbauer wrote:
> On Wed, Sep 03, 2014 at 05:49:52PM -0400, Philip Jackson wrote:
> > I care about the desirability of consequences for achieving human-level AI,
> > which is why section 7.9 discusses the problem of technological unemployment.
> > I think probably everyone else who posted or was quoted in this thread, also
> > hopes for desirable consequences of work in the field.
> The way I see it, these are the possible scenarios of achieving human level AI.
> A desirable outcome seems either unlikely or dependent on very particular
> 1. It is controllable and servile
> The "best case" scenario (maybe not for the AI) which I find highly unlikely.
> But maybe I've read too many Science Fiction books unlike Minsky's "The Turing
> 2. It is not controllable and servile
> 2.1. Its capabilities basically match ours
> If many of them are built and compete for the same resources, conflict seems
> very probable.
> 2.2. It outperforms us on many levels
> If many of them are built humanity will become obsolete (probably without
These are useful distinctions, and I agree this is a very important topic for study and research. However, the distinctions aren't mutually exclusive or exhaustive: a human-level AI could be controllable and servile, and at the same time have some capabilities that match ours, some that exceed ours, and some that fall short of ours. The extent to which it is controllable and servile would depend on its goals and its range of capabilities. A human-level AI would not necessarily have all the same priorities and capabilities that human beings do.
For instance, a human-level AI might be designed as an "artificial scientist" specializing in a particular domain, e.g. theoretical physics. It might have no goals outside understanding theoretical physics. It might have no physical capabilities at all - it might only be able to write papers on theoretical physics, run computer simulations of physics experiments, etc. Conceivably such a system could outperform human theoretical physicists in some ways without matching human capabilities in any other area. Its success as a theoretical physicist would depend on its ability to communicate with human physicists.
I agree with the following passages from Leslie Valiant's 2013 book:
"There may be some good news for humans in the fact that one can be intelligent in many different ways. It gives us hope that we may endow robots with intelligence superior to ours but only in directions that are useful and not threatening to us. Also, it makes it clear that there is no good reason to want to make robots that are exactly like humans. "
"The most singular capability of living organisms on Earth must be that of survival. Anything that survives for billions of years, and many millions of generations, must be good at it. Fortunately, there is no reason for us to endow robots with this same capability. Even if their intelligence becomes superior to ours in a wide range of measures, there is no reason to believe that they would deploy this in the interests of their survival over ours unless we go out of our way to make them do just that. We have limited fear of domesticated animals. We do not necessarily have to fear intelligent robots either. They will not resist being switched off, unless we provide them with the same heritage of extreme survival training that our own ancestors had been subject to on Earth."
Valiant, Leslie (2013). Probably Approximately Correct: Nature's Algorithms for Learning and Prospering in a Complex World. Basic Books, pp. 165-166.
I'll follow up separately, responding to your comments about my thesis, which I appreciate very much.