Pat Hayes wrote:
> On Sep 3, 2014, at 8:29 AM, John F Sowa <sowa@xxxxxxxxxxx
> > ...Pat Langley, proposed four research challenges that
> > are more realistic than the Turing test:
> > 1. A synthetic entertainer -- a singer-songwriter that people would
> > enjoy listening to.
> > 2. A synthetic attorney that would interview a client, gather
> > information, plan a defense, interact with the judge and witnesses,
> > and prepare and present a closing argument.
> > 3. A synthetic politician that would analyze information about
> > current issues, write and deliver speeches, answer questions,
> > and participate in debates with other candidates.
> > 4. A synthetic teacher that would compose lectures on a subject,
> > present them to students, answer questions during or after the
> > lectures, generate exercises and tests, and grade the answers.
> > I'm not sure about the practical value of synthetic politician,
> > but any AI system that could do even a subset of the tasks that
> > Langley lists for the attorney or teacher would be extremely
> > valuable.
> Valuable for who, exactly? Seems to me all it would do is put
> human attorneys and teachers out of a job. Or, more likely,
> mean that some human teachers and attorneys (those who have
> the funds to buy or rent such a system) have a devastating
> advantage over other human rivals.
This is the general problem of technological unemployment resulting from automation and AI, which has received increasing attention over the past several years. Langley's challenges can be viewed as a subset of Nilsson's "employment test", proposed in 2005 as a replacement for the Turing Test. Nilsson wrote:
"To pass the employment test, AI programs must be able to perform the jobs ordinarily performed by humans. Progress toward human-level AI could then be measured by the fraction of these jobs that can be acceptably performed by machines."
Nilsson discussed technological unemployment in papers written in 1983 & 1984.
Section 7.9 of my thesis gives some discussion of this problem.
> > Why can't AI do those tasks today? What kind of R & D would help?
To be performed at a level matching human competence, these occupations all require human-level understanding of natural language, which is itself an "AI-complete" problem. Further, each occupation requires kinds of knowledge and reasoning not addressed by current AI technologies:
A human-level entertainer must understand human emotions and motivations, and have some knowledge of human history, social norms and issues, economics, etc., as well as the mechanics of storytelling, song-writing, joke-telling, and so on.
A human-level attorney must likewise understand human ethics, emotions, and motivations, and have some knowledge of human history, social norms and issues, and economics, along with negotiating skills, the letter of the law, and the details of legal processes.
A human-level politician typically needs to combine some of the skills of an entertainer and a lawyer, along with some understanding of the problems confronting society and of how those problems might be addressed.
A human-level teacher needs some of the skills of an entertainer and a politician in order to motivate students, as well as knowledge of the subject domain.
One can imagine very narrow, shallow AI systems for each of these occupations, focused on technical mechanics and domain-specific knowledge, without any in-depth understanding of human issues.
In general, all of these occupations require causal and purposive reasoning, as well as reasoning about how people think, what people know and don't know, and what goals people have. Section 2.1 of my thesis discusses the 'higher-level mentalities' needed to support human-level intelligence in general. Understanding sociality, emotions, and human values is identified there as a particularly challenging problem for future research.