
Re: [ontolog-forum] Advances in Cognitive Systems

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Philip Jackson <philipcjacksonjr@xxxxxxxxxxx>
Date: Mon, 8 Sep 2014 23:10:24 -0400
Message-id: <SNT147-W9248802F7EAD6A8D375207C1CE0@xxxxxxx>
David,
 
Thanks very much for your suggestion:
 
> rather than just referring folks to the thesis a whole, you could post
> a snippet from your thesis, or even explain how the snippet is
> relevant to the question at hand.
  
Here's a list of the higher-level mentalities briefly discussed in section 2.1.2 of the thesis. Chapters 3, 5, and 6 discuss in particular how the abilities marked with an asterisk below could be supported by the thesis approach.
 
  • Natural Language Understanding *
  • Self-Development and Higher-Level Learning
    • Learning by induction, abduction, analogy, causal and
      purposive reasoning. *
      • Learning by induction of new linguistic concepts.
      • Learning by creating explanations and testing predictions, using causal and purposive reasoning.
      • Learning about new domains by developing analogies and metaphors with previously known domains.
    • Learning by reflection and self-programming. *
      • Reasoning about thoughts and experience to develop new methods for thinking and acting.
      • Reasoning about ways to improve methods for thinking and acting.
    • Learning by invention of languages and representations.  *
  • Multi-level Reasoning
    • Deduction, Induction, Abduction *
    • Analogical Reasoning *
    • Causal and Purposive Reasoning *
    • Meta-Reasoning *
  • Imagination *
  • Consciousness *
  • Sociality, Emotions, Values
 
Here's a summary of what section 7.9 says about the economic consequences of AI: Economists disagree about whether technological unemployment is a real problem; many economists think that jobs eliminated by technology will always be replaced by other jobs created by technology. However, some economists and technologists have argued that automation and AI can cause long-term unemployment in many sectors of the economy. Here are excerpts from pages 257-259 of the thesis:
 "Those writing in the past two decades roughly agree at least implicitly, and often explicitly, on the following points for the problem of technological unemployment:
 
1. In the next several decades of the 21st century, automation and AI could lead to technological unemployment affecting millions of jobs at all income levels, in diverse occupations, and in both developed and developing nations. This could happen with current and near-term technologies, i.e. without human-level AI. It has already occurred for manufacturing, agriculture, and many service sector jobs.
 
2. It will not be feasible for the world economy to create new jobs for the millions of displaced workers, offering equivalent incomes producing new products and services.
 
3. Widespread technological unemployment could negatively impact the worldwide economy, because the market depends on mass consumption, which is funded by income from mass employment. LDN theorists vary in discussing and describing the degree of impact.
 
4. The problem is solvable by developing ways for governments and the economy to provide alternative incomes to people who are technologically unemployed. ...theorists have proposed several methods for funding and distributing alternative incomes.
 
5. The problem can and should be solved while preserving freedom of enterprise and a free market economy.
 
6. The problem cannot be solved by halting or rolling back technological progress, because the world’s population depends on technology for economic survival and prosperity.
 
7. Solutions to the problem could lead to greater prosperity, worldwide. ...theorists vary in describing potential benefits: Nilsson envisioned automation and AI could provide the productive capacity to enable a transition from poverty to a “prosperous world society”. Ford suggested the extension of alternative incomes to people in poverty could create market demand supporting a ‘virtuous cycle’ of global economic growth.
 
... In addition to the potential best case event that AI could help eliminate world poverty, the author [PCJ] expects another benefit of human-level AI could result from its application to the development of science. This could help develop scientific knowledge more rapidly and perhaps more objectively and completely than possible through human thought alone. If it is so applied, then human-level AI could help advance medicine, agriculture, energy systems, environmental sciences, and other areas of knowledge directly benefitting human prosperity and survival.
 
Human-level AI may also be necessary to ensure the long-term prosperity of humanity, by enabling the economic development of outer space: If civilization remains confined to Earth then humanity is kept in an economy limited by the Earth’s resources. However, people are not biologically suited for lengthy space travel, with present technologies.
 
To develop outer space it could be more cost-effective to use robots with human-level AI than to send people in spacecraft that must overcome the hazards of radiation and weightlessness, and provide water, food and air for space voyages lasting months or years.
 
For the same reason, human-level AI may be necessary for the long-term survival of humanity. To avoid the fate of the dinosaurs (whether from asteroids or super-volcanoes) our species may need economical, self-sustaining settlements off the Earth. Human-level AI may be necessary for mankind to spread throughout the solar system, and later the stars."
 
Regards,
 
Phil
 
link to thesis info

> Date: Mon, 8 Sep 2014 17:51:43 -0400
> From: whitten@xxxxxxxxxxxxxx
> To: ontolog-forum@xxxxxxxxxxxxxxxx
> Subject: Re: [ontolog-forum] Advances in Cognitive Systems
>
> Phil,
> rather than just referring folks to the thesis a whole, you could post
> a snippet from your thesis, or even explain how the snippet is
> relevant to the question at hand.
>
> David
>
> On Wed, Sep 3, 2014 at 5:49 PM, Philip Jackson
> <philipcjacksonjr@xxxxxxxxxxx> wrote:
> > Hello Michael,
> >
> > Michael Brunnbauer wrote:
> >> On Wed, Sep 03, 2014 at 03:05:02PM -0400, Philip Jackson wrote:
> >> > JS
> >> > > but any AI system that could do even a subset of the tasks that
> >> > > Langley lists for the attorney or teacher would be extremely
> >> > > valuable.
> >> PH
> >> > Valuable for who, exactly? Seems to me all it would do is put
> >> > human attorneys and teachers out of a job. Or, more likely,
> >> > mean that some human teachers and attorneys (those who have
> >> > the funds to buy or rent such a system) have a devastating
> >> > advantage over other human rivals.
> > PJ
> >> > This is the general problem of technological unemployment resulting
> >> > from automation and AI, which has been receiving increased attention
> >> > over the past several years.
> >
> > MB
> >> Or the problem that statements from people working in that field sometimes
> >> suggest to others that they care more about making some dream come true
> >> than
> >> about desirability or consequences.
> >
> > I've added initials above to show who gave each comment in the thread.
> >
> > I care about the desirability of consequences for achieving human-level AI,
> > which is why section 7.9 discusses the problem of technological
> > unemployment.
> >
> > I think probably everyone else who posted or was quoted in this thread,
> > also hopes for desirable consequences of work in the field.
> >
> >> Section 2.1 of my thesis discusses 'higher-level mentalities' needed to
> >> support human-level intelligence in general.
> >
> > MB
> >> If you carry on like this we'll have to rename this mailing list to
> >> "Philip Jackson's doctoral thesis" ;-)
> >
> > It seems a lot of topics are discussed, and questions asked, for which it
> > seems the thesis gives relevant discussions. Whenever that happens, the
> > choice is whether to be silent, or to chip in my two cents...
> >
> > Maybe I should use an acronym, e.g. "RTPCJT", for "Read the PCJ Thesis", and
> > just post something like:
> >
> > RTPCJT section 7.9:
> > http://www.philjackson.prohosting.com/PCJacksonPhDThesisInformation.html
> >
> > Cheers,
> >
> > Phil
> > _________________________________________________________________
> > Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
> > Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
> > Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
> > Shared Files: http://ontolog.cim3.net/file/
> > Community Wiki: http://ontolog.cim3.net/wiki/
> > To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
> >
> >
> >
>



