Hi Again Rich --
It will take a while to digest all the good thoughts in your email.
One minor point, though. Of
www.reengineeringllc.com/demo_agents/Zadeh1.agent
you wrote:
"That page appears to demonstrate a fuzzy logic kind of reasoning"
Actually, the example is supposed to illustrate a linguistic alternative to Zadeh fuzzy logic -- there are no numbers.
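In case a concrete picture helps, here is a small sketch in Python of the idea (my hypothetical illustration, not the actual agent code): the hedges are just labels in a fixed order, a chain of reasoning is only as strong as its weakest step, and alternative chains contribute their strongest support.

    # Sketch of linguistic hedges: ordered labels, no numeric degrees.
    # (Hypothetical illustration only, not the Zadeh1.agent code.)

    # Hedges from weakest to strongest; only their relative order matters.
    HEDGES = ["unlikely", "possible", "probable", "almost certain"]

    def weakest(a, b):
        """Combine along a chain: a chain is only as strong as its weakest step."""
        return a if HEDGES.index(a) <= HEDGES.index(b) else b

    def strongest(a, b):
        """Combine across alternative chains: take the best support."""
        return a if HEDGES.index(a) >= HEDGES.index(b) else b

    # Two independent chains of evidence for the same conclusion:
    chain1 = weakest("probable", "almost certain")     # -> "probable"
    chain2 = weakest("possible", "probable")           # -> "possible"
    print("Conclusion is", strongest(chain1, chain2))  # prints: Conclusion is probable

Note that the labels never acquire numbers; the program compares only their relative order.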
-- Adrian
Internet Business Logic
A Wiki and SOA Endpoint for Executable Open Vocabulary English Q/A over SQL and RDF
Online at www.reengineeringllc.com
Shared use is free, and there are no advertisements

Adrian Walker
Reengineering
On Mon, Aug 2, 2010 at 5:07 PM, Rich Cooper <rich@xxxxxxxxxxxxxxxxxxxxxx> wrote:
Hi Adrian,
Thanks for your well thought-through post.
Let me respond inline, as below.
-Rich
Sincerely,
Rich Cooper
EnglishLogicKernel.com
Rich AT EnglishLogicKernel DOT com
9 4 9 \ 5 2 5 - 5 7 1 2
Rich,
You wrote...
There needs to be more work on finding a way to estimate when all (most) of
the significant facts and rules have been identified
Agreed, but as you say, that's a big agenda.
A small but hopefully useful step forward is to provide some low-maintenance
English tied computationally to the logic [1].
Agreed that natural language (English for us) is a useful tool and
English-oriented logical processing has some significant value to add to
everyday SW users’ palettes. But I think it will take WAY more than JUST
English logic to do the job of human-like reasoning. That’s why my discovery
software is oriented toward finding patterns as well as finding logic in
unstructured texts within databases of factual information and query rules
(patent 7,209,923: http://www.englishlogickernel.com/Patent-7-209-923-B1.PDF).
Once that is done, it becomes possible to do a form of
human-like natural-language-based reasoning (with sentences like "there is
strong evidence for X, however there is also some evidence against it").
Paradoxically, this works well when what's going on inside the reasoner black
box is actually precise logical deduction.
Here's an example in which Zadeh weights are replaced by linguistic hedges:
www.reengineeringllc.com/demo_agents/Zadeh1.agent
That page appears to demonstrate a fuzzy logic kind of reasoning, which is
truly a useful concept, but not sufficient for what I am describing. The
various kinds of weightings (degree of belonging, probabilistic reasoning,
expected valuation of alternative strategies …) map onto the rules and facts
so that a reasoning engine can calculate belonging, probability, value, etc.,
for any solution subtree among a set of possible solutions, ranking the
solution subtrees according to those calculations.
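To sketch what I mean (a toy illustration of my own, not code from the patent), a reasoning engine might score and rank solution subtrees like this in Python:

    # Hypothetical sketch: each candidate solution subtree carries the
    # weights (degree of belonging, probability, ...) of the facts and
    # rules it used; the engine scores each subtree and ranks them all.
    from math import prod

    subtrees = {
        "theory A via witness testimony": [0.9, 0.7, 0.8],
        "theory A via documents":         [0.95, 0.6],
        "theory B":                       [0.5, 0.9, 0.9],
    }

    def score(weights):
        # Treat the steps as independent, so a subtree is only as
        # credible as the joint credibility of its supporting steps.
        return prod(weights)

    ranked = sorted(subtrees.items(), key=lambda kv: score(kv[1]), reverse=True)
    for name, weights in ranked:
        print(f"{score(weights):.3f}  {name}")

The point is only the shape of the computation; the right combination rule (product, minimum, expected value, ...) depends on which kind of weighting is being mapped onto the rules and facts.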
But I think it takes MUCH more than that – it takes an automated
understanding of human nature, and of human behavior both generically and
specifically, to get the kind of reasoning that these “stupid/careless”
humans use so successfully in everyday life, while reasoning engines begin
tracing circles in their proof trees.
In particular, there is MUCH reason to believe that human PERCEPTION is based
on highly organized behaviors, in part physiological or psychological, and in
part cultural. So if I depend on my perceptions to reach a set of facts for
further reasoning, I am very likely to choose different facts than the next
person. Part of that selection process is the emotional evaluation of what
options the facts give us.
The evidence supporting that comes from anthropology, social science,
economics, philosophy, psychology and law. For example, presently in the
news, Blagojevich, Rangel and Waters are all being challenged over whether
their behavior complied with legal statutes, yet each believes in his or her
innocence and proclaims it.
I have never been on a legal case in which both sides weren’t convinced that
they were in the right. Otherwise, they wouldn’t be so intent on trying to
prove their cases. The difficult part is in identifying which facts and rules
are most efficacious to present to a reasonable judge or juror to “prove”
that side’s legal theories. In part, that knowledge of what is efficacious is
due to the psychological knowledge we’ve gained (academically or practically)
which we each feel that others will accept as clear truths. Certainly I have
to believe my testimony in order to be convincing and to clearly state the
facts and conclusions of an opinion. If I don’t believe it, I suggest to the
client that s/he settle the case, because the facts I see don’t support that
side. But it is very rare that either side is completely without a good case.
Cases wouldn’t go to court unless both sides believed in their respective
causes.
So it isn’t fuzzy logic, or probabilistic logic, or belief systems, or
politically correct beliefs, that really counts in human-level reasoning;
rather, each of these aspects is a small part of the whole. I don’t think
progress can be made in big steps until we have a useful theory of human
behavior, self-interest, and perception (at the very least) which can fill in
the necessary missing ingredients.
The best approach I know of to date is the so-called “embedded experiencer”,
or the agent that is situated in its environment while experiencing
PERCEPTION events based on physical and belief data. But that alone is still
not enough.
JMHO,
-Rich
Looking at your bigger picture, there are some recent papers from Berkeley* that suggest an
approach in which a simulation of part of the real world is used as the
yardstick against which completeness of some rules and facts is measured.
-- Adrian
Yes, I saw something about simulation faster than real time, which Jon Awbrey
has posted in this forum or its predecessors. It looks promising in concept,
but it still seems to predict only purely logical proofs, without worldly
substance considered. There still has to be a deeper understanding of the
actual reasoning process people use before real SW applications become
reasonably effective.
* Not by Zadeh. I'll try to track these down if that's of interest.
[1] Internet Business Logic
A Wiki and SOA Endpoint for Executable Open Vocabulary English Q/A over SQL and
RDF
Online at www.reengineeringllc.com
Shared use is free, and there are no advertisements
Adrian Walker
Reengineering
On Mon, Aug 2, 2010 at 3:44 PM, Rich Cooper <rich@xxxxxxxxxxxxxxxxxxxxxx>
wrote:
Hi Adrian, we agree completely on this
issue. But let me amplify.
There needs to be more work on finding a
way to estimate when all (most) of the significant facts and rules have been
identified, IMHO. That only works if you consider evidence for a theory
versus evidence against the same theory. Theorem proving algorithms in
conceptually very simple domains such as math and computer science are missing
the point of the SW. They are useful only after all the facts and rules
have been subjectively selected from the available ones.
In my opinion, the human use of the
semantic web will be to develop the factual basis of a theory through automated
assistance in performing observation, theorizing, classification and
experimentation tasks. Given some really simple deductions that humans
make in between eye blinks, the SW can help collect and organize relevant facts
and rules for later human use. But it isn’t the SW that will demonstrate
proof and substance; it’s the subjective evaluation of people authorized in
some process to produce such evaluations.
Producing a proof graph is a nearly trivial matter in real-world
applications, such as legal discovery. The long, odd, toy proofs of math and
computer science don’t represent the realities of deduction as commonly
practiced outside of the academic world. It’s the validity and verification
of facts, and the consistency of a theory with MOST of the facts and rules,
that matter to people who will use the SW.
That makes reasoning inherently subjective, IMHO. Of course, academics can
disdain the realities of the practical world, and state that some
“stupid/careless” person screwed up “their” proof, but people still believe
their own proofs, not the supposedly objective, mechanical proofs of an
algorithm.
Reasoning methods are still in their infancy. The really deep research to
come will be focused on mapping facts and rules to reality, to subjectivity,
and to measurable data. What JFS and others call “speech acts” needs a lot of
work to be usefully integrated into these frameworks if reasoning is to
become truly HAL-like in the future.
JMHO,
-Rich
Sincerely,
Rich Cooper
EnglishLogicKernel.com
Rich AT EnglishLogicKernel DOT com
9 4 9 \ 5 2 5 - 5 7 1 2
Hi Rich and All --
Rich, you wrote... negation as failure is more human-like...
Indeed, relational databases, on which much of the world economy runs, use a
form of negation-as-failure -- If Adrian is not in the table of employees of
Englishlogickernel, then he is not an employee of said company.
Just commonsense, really.
Moreover, if you attach English sentences to predicates [1], you can help
nontechnical users to know what's going on by answering the question
"Is Adrian an employee of Englishlogickernel?"
with
"Assuming that the table
lists all the employees, he is not an employee of that company"
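For concreteness, here is a minimal Python sketch of that behavior (an illustration only, not the IBL implementation; the table and data are made up):

    # Negation-as-failure over a relational table, with an English
    # answer attached.  (Hypothetical sketch; not the IBL code.)
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE employee (name TEXT, company TEXT)")
    db.execute("INSERT INTO employee VALUES ('Rich', 'Englishlogickernel')")

    def is_employee(name, company):
        row = db.execute(
            "SELECT 1 FROM employee WHERE name = ? AND company = ?",
            (name, company)).fetchone()
        return row is not None   # absence of a row is taken as "no"

    name, company = "Adrian", "Englishlogickernel"
    if is_employee(name, company):
        print(f"{name} is an employee of {company}")
    else:
        # The hedge makes the closed-world assumption explicit.
        print(f"Assuming that the table lists all the employees, "
              f"{name} is not an employee of {company}")

The English hedge in the last line is doing real work: it tells the nontechnical user exactly which assumption licenses the negative answer.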
Cheers, -- Adrian
[1] Internet Business Logic
A Wiki and SOA Endpoint for Executable Open Vocabulary English Q/A over SQL and
RDF
Online at www.reengineeringllc.com
Shared use is free, and there are no advertisements
Adrian Walker
Reengineering
On Mon, Aug 2, 2010 at 1:54 PM, Rich Cooper <rich@xxxxxxxxxxxxxxxxxxxxxx>
wrote:
Hi Ian,
If the intent of the tool's designers is to mimic human perspectives on
knowledge and logic, then negation as failure is more human-like, IMHO, than
any existing alternative. A person with no experience in an area normally
is very skeptical of assertions that can't be proven within his/her database
of factual and structural knowledge, and so reaches the same conclusion that
negation as failure would. I'm sure you've heard it said that you don't know
what you don't know, so you assume you know everything until proven
otherwise.
Another way to look at it is that, within the bounds of evidence, a judge or
juror has no basis for any conclusion that is not consistent with known,
demonstrated facts. It is always possible that other information will
surface in the future, but the rational deduction of the present moment has
to be based on known facts, not on missing information.
One consequence of this is that it is very hard to convince people of a fact
with which they have no familiarity, in specific or general terms. That is
why attorneys and laws depend on known facts.
-Rich
Sincerely,
Rich Cooper
EnglishLogicKernel.com
Rich AT EnglishLogicKernel DOT com
9 4 9 \ 5 2 5 - 5 7 1 2
It is even more tricky than this. The failure in "negation as failure"
doesn't mean failure of a given algorithm, it means not provably true. There
are many decidable logics with NAF. If we have an incomplete reasoner for
such a logic, we are *still* incorrect if we take failure to return "True"
as being equivalent to "False", because the failure may simply be a symptom
of the incompleteness and have nothing to do with NAF.
Simple example: I am using a logic in which negation is interpreted as NAF.
I have a simple boolean theory in which negation isn't used and which
entails A(x). I ask if A(x) is entailed. My incomplete (for entailment)
reasoner answers "False". If I treat this as entailing that A(x) is not
entailed, then I am really incorrect -- nothing to do with NAF.
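In Python, the example looks like this (a hypothetical sketch, obviously):

    # The theory entails A(x), but an incomplete reasoner fails to
    # prove it.  Treating that failure as "A(x) is not entailed" is
    # unsound -- and has nothing to do with NAF.

    ENTAILED = {"A(x)"}          # what the theory actually entails

    def incomplete_reasoner(query):
        """Incomplete for entailment: it can miss entailed queries."""
        proved = set()           # this engine happens to prove nothing
        return query in proved   # False for A(x) -- a missed proof

    def careful_application(query):
        """Correct use: only a positive answer is informative."""
        return "entailed" if incomplete_reasoner(query) else "don't know"

    assert "A(x)" in ENTAILED             # the theory really does entail A(x)
    print(incomplete_reasoner("A(x)"))    # False, although A(x) IS entailed
    print(careful_application("A(x)"))    # don't know -- the sound reading

The careful application answers "don't know"; collapsing that to "not entailed" would only be licensed by a completeness guarantee the reasoner doesn't have.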
In fact I think that we would be well advised to strike NAF from the record
-- it's really not helpful in this discussion :-)
Ian
On 2 Aug 2010, at 17:45, Ed Barkmeyer wrote:
>
> Ian Horrocks wrote:
>
>> Regarding my claim that reasoners are typically used in a way that is
>> actually incorrect, to the best of my knowledge none of the incomplete
>> reasoners in widespread use in the ontology world even distinguish "false"
>> from "don't know" -- whatever question you ask, they will return an answer.
>> Thus, in order to be correct, applications would have to treat *every*
>> "false" answer as "don't know". I don't know of any application that does
>> that.
>>
>
> Put another way, it is not incorrect to treat "don't know" as "false",
> if "negation as failure" is a stated principle of the reasoning
> algorithm. We can state the 'negation as failure' principle generally
> as "if the assertion cannot be proved from the knowledge base, the
> assertion is taken to be false."
>
> Of course, "proved" means that the reasoning algorithm can derive a
> proof, which depends on the algorithm actually implemented in the
> engine. As Ian mentioned earlier, this kind of "proof" implies that the
> nature of the reasoning algorithm is, or incorporates, "model
> construction", which is typical of various kinds of logic programming
> engines, but there are many hybrid algorithms.
>
> -Ed
>
> --
> Edward J. Barkmeyer                            Email: edbark@xxxxxxxx
> National Institute of Standards & Technology
> Manufacturing Systems Integration Division
> 100 Bureau Drive, Stop 8263                    Tel: +1 301-975-3528
> Gaithersburg, MD 20899-8263                    FAX: +1 301-975-4694
>
> "The opinions expressed above do not reflect consensus of NIST,
> and have not been reviewed by any Government authority."
>
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx