There is more discussion related to this in my thesis, Toward Human-Level Artificial Intelligence, e.g. regarding causal and purposive reasoning. The TalaMind approach supports representing, creating, and processing natural language "why" questions and their answers.
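As a minimal illustration (not TalaMind's actual representation), a "why" answer can be modeled as the triad John Sowa describes below — X does Y for the reason Z — with reasons chaining into further "why" answers. All names here are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WhyTriad:
    """One answer to a 'why' question: agent X does action Y for reason Z."""
    agent: str
    action: str
    reason: str
    # A reason can itself be asked "why?", chaining a deeper triad.
    deeper: Optional["WhyTriad"] = None

def explain(t: WhyTriad) -> str:
    """Render a chain of triads as a natural-language explanation."""
    parts = [f"{t.agent} {t.action} because {t.reason}"]
    while t.deeper is not None:
        t = t.deeper
        parts.append(f"{t.agent} {t.action} because {t.reason}")
    return "; ".join(parts)

chain = WhyTriad(
    agent="the agent", action="opens the window", reason="the room is warm",
    deeper=WhyTriad("the agent", "wants the room cooler",
                    "comfort is one of its goals"))
print(explain(chain))
# → the agent opens the window because the room is warm; the agent wants
#   the room cooler because comfort is one of its goals
```

Repeatedly asking "why" of each reason corresponds to following the `deeper` links until the chain bottoms out.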
> Date: Wed, 28 May 2014 00:57:43 -0400
> From: sowa@xxxxxxxxxxx
> To: ontolog-forum@xxxxxxxxxxxxxxxx
> Subject: Re: [ontolog-forum] Intentionality Best Practices
> William, Ed, John B, and Simon,
> Intentionality is like an elephant. Everybody latches onto some
> part of it and gives a different description. But all those issues
> fall into place when you ask one simple question: "Why?"
> For any action that any human or animal does, just ask why.
> In every case, the answer is the intention.
> > Even in the philosophy of science [in Chicago], which was then
> > heavily Karl Popper oriented, intentionality was a topic, and
> > part of the 'death knell for positivism'. I understood positivism
> > to be the foundation for the silliness of behaviorism in psychology.
> Yes, I was criticizing the positivists. Philosophers seldom call
> themselves positivists today, but many still call themselves
> nominalists -- and many of their views are similar. I say more
> about those issues in http://www.jfsowa.com/pubs/worlds.pdf
> > the enterprise modeling world has done a lot of work on capturing
> > intention, none of which is notably rigorous.
> That qualification is true of about 80% of the work on any kind of
> ontology. For a large part of the other 20%, it's not clear whether
> the rigor is relevant to solving any problem that needs to be solved.
> > I'm still digesting what Brentano had in mind. Further, Dennett
> > and others have also weighed in with their own interpretations.
> Suggestion: When you read whatever they propose as the intention,
> check whether it answers the question "Why?"
> > The best practice for intentionality is probably to take an Intentional
> > Stance - http://en.m.wikipedia.org/wiki/Intentional_stance
> That leads to the following statement by Daniel Dennett:
> > Here is how it works: first you decide to treat the object whose behavior
> > is to be predicted as a rational agent; then you figure out what beliefs
> > that agent ought to have, given its place in the world and its purpose.
> > Then you figure out what desires it ought to have, on the same
> > considerations, and finally you predict that this rational agent will
> > act to further its goals in the light of its beliefs. A little practical
> > reasoning from the chosen set of beliefs and desires will in most instances
> > yield a decision about what the agent ought to do; that is what you predict
> > the agent will do.
> > —Daniel Dennett, The Intentional Stance, p. 17
> I don't disagree with Dennett. But I would note that you would get
> the same results just by asking "Why?" Whenever you get a partial
> answer, keep asking "Why?"
> In Peirce's terms, intention is an example of Thirdness. The question
> "Why?" asks for the third member of a triad: X does Y for the reason Z.
> Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
> Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
> Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
> Shared Files: http://ontolog.cim3.net/file/
> Community Wiki: http://ontolog.cim3.net/wiki/
> To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
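Dennett's recipe quoted above (ascribe the beliefs and desires the agent ought to have, then predict the action that furthers its goals in light of those beliefs) can be sketched as a toy prediction function. This is only an illustrative sketch under simplifying assumptions — beliefs as an action-to-outcomes map, desires as outcome weights — not anyone's actual implementation:

```python
def predict_action(beliefs, desires, actions):
    """Intentional-stance prediction: choose the action that, given the
    agent's believed outcomes, best furthers its weighted desires."""
    def utility(action):
        # Sum the desire weights of every outcome the agent believes
        # would follow from taking this action.
        return sum(desires.get(outcome, 0.0)
                   for outcome in beliefs.get(action, ()))
    return max(actions, key=utility)

# Hypothetical example: a rational agent in a warm room.
beliefs = {"open window": ["room cools"], "turn on heater": ["room warms"]}
desires = {"room cools": 1.0, "room warms": -1.0}
print(predict_action(beliefs, desires, ["open window", "turn on heater"]))
# → open window
```

Sowa's point translates naturally here: asking "Why open the window?" recovers exactly the belief/desire pair that made that action score highest.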