
Re: [ontolog-forum] Reasoning in Reality - was owl2 and cycL/cycML

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Adrian Walker <adriandwalker@xxxxxxxxx>
Date: Mon, 2 Aug 2010 16:23:10 -0400
Message-id: <AANLkTinG6A85uN_Qbvk9A7k6MsdH9NCqiFnG1NPw=tEc@xxxxxxxxxxxxxx>
Rich,

You wrote...

There needs to be more work on finding a way to estimate when all (most) of the significant facts and rules have been identified

Agreed, but as you say, that's a big agenda.

A small but hopefully useful step forward is to provide some low-maintenance English tied computationally to the logic [1].

Once that is done, it becomes possible to do a form of human-like natural-language-based reasoning (with sentences like "there is strong evidence for X, however there is also some evidence against it").

Paradoxically, this works well when what's going on inside the reasoner black box is actually precise logical deduction.

Here's an example in which Zadeh weights are replaced by linguistic hedges:

   www.reengineeringllc.com/demo_agents/Zadeh1.agent
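The idea in that demo can be sketched in a few lines. This is only an illustrative toy, not the agent itself: the hedge vocabulary and the numeric thresholds below are my own assumptions, chosen just to show how a for/against weight pair can be rendered as one hedged English sentence.

```python
# Toy mapping from numeric evidence weights in [0, 1] to linguistic
# hedges, in the spirit of the demo above. The hedge phrases and the
# thresholds are illustrative assumptions, not taken from the agent.

def hedge(weight: float) -> str:
    """Map a numeric evidence weight to a linguistic hedge."""
    if weight >= 0.9:
        return "there is strong evidence for"
    if weight >= 0.6:
        return "there is moderate evidence for"
    if weight >= 0.3:
        return "there is some evidence for"
    return "there is little evidence for"

def report(claim: str, weight_for: float, weight_against: float) -> str:
    """Render a for/against weight pair as one hedged English sentence."""
    sentence = f"{hedge(weight_for)} {claim}"
    if weight_against >= 0.3:
        sentence += f", however {hedge(weight_against)} the opposite"
    return sentence[0].upper() + sentence[1:]

print(report("X", 0.92, 0.35))
# There is strong evidence for X, however there is some evidence for the opposite
```

The point is that the numbers never reach the user; only the hedged sentence does, while whatever happens behind the scenes can stay precise.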

Looking at your bigger picture, there are some recent papers from Berkeley* that suggest an approach in which a simulation of part of the real world is used as the yardstick against which the completeness of a set of rules and facts is measured.

                                 -- Adrian

* Not by Zadeh. I'll try to track these down if that's of interest.

[1]  Internet Business Logic
A Wiki and SOA Endpoint for Executable Open Vocabulary English Q/A over SQL and RDF
Online at www.reengineeringllc.com   
Shared use is free, and there are no advertisements

Adrian Walker
Reengineering

On Mon, Aug 2, 2010 at 3:44 PM, Rich Cooper <rich@xxxxxxxxxxxxxxxxxxxxxx> wrote:

Hi Adrian, we agree completely on this issue.  But let me amplify. 

 

There needs to be more work on finding a way to estimate when all (or most) of the significant facts and rules have been identified, IMHO.  That only works if you consider evidence for a theory versus evidence against the same theory.  Theorem-proving algorithms in conceptually very simple domains, such as math and computer science, miss the point of the SW.  They are useful only after all the facts and rules have been subjectively selected from the available ones.

 

In my opinion, the human use of the semantic web will be to develop the factual basis of a theory through automated assistance in performing observation, theorizing, classification and experimentation tasks.  Given some really simple deductions that humans make in between eye blinks, the SW can help collect and organize relevant facts and rules for later human use.  But it isn’t the SW that will demonstrate proof and substance; it’s the subjective evaluation of people authorized in some process to produce such evaluations.  

 

Producing a proof graph is a nearly trivial matter in real-world applications such as legal discovery.  The long, odd, toy proofs of math and computer science don't represent the realities of deduction as commonly practiced outside the academic world.  It is the validity and verification of facts, and the consistency of a theory with MOST of the facts and rules, that matter to people who will use the SW.

 

That makes reasoning inherently subjective, IMHO.  Of course, academics can disdain the realities of the practical world and say that some “stupid/careless” person screwed up “their” proof, but people still believe their own proofs, not the supposedly objective, mechanical proofs of an algorithm.

 

Reasoning methods are still in their infancy.  The really deep research to come will focus on mapping facts and rules to reality, to subjectivity, and to measurable data.  What JFS and others call “speech acts” needs a lot of work to be usefully integrated into these frameworks if reasoning is to become truly HAL-like in the future.

 

JMHO,

-Rich

 

Sincerely,

Rich Cooper

EnglishLogicKernel.com

Rich AT EnglishLogicKernel DOT com

9 4 9 \ 5 2 5 - 5 7 1 2


From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Adrian Walker
Sent: Monday, August 02, 2010 11:50 AM
To: [ontolog-forum]
Subject: Re: [ontolog-forum] owl2 and cycL/cycML

 

Hi Rich and All --

Rich, you wrote... negation as failure is more human-like...

Indeed, relational databases, on which much of the world economy runs, use a form of negation as failure: if Adrian is not in the table of employees of Englishlogickernel, then he is not an employee of that company.

Just commonsense, really.

Moreover, if you attach English sentences to predicates [1], you can help nontechnical users to know what's going on by answering the question

"Is Adrian an employee of Englishlogickernel?"


with

"Assuming that the table lists all the employees, he is not an employee of that company"
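A minimal sketch of that closed-world question answering, in Python over SQLite. To be clear about what is assumed: the table schema and the English answer template below are my own illustrative choices, not the Internet Business Logic system itself, just the idea it implements.

```python
import sqlite3

# Toy closed-world Q/A over a relational table. The schema and the
# English templates are illustrative assumptions, not the IBL system.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (company TEXT, person TEXT)")
conn.execute("INSERT INTO employee VALUES ('Englishlogickernel', 'Rich')")

def is_employee(person: str, company: str) -> str:
    """Answer under the closed-world assumption, with a hedged caveat."""
    row = conn.execute(
        "SELECT 1 FROM employee WHERE person = ? AND company = ?",
        (person, company)).fetchone()
    if row:
        return f"{person} is an employee of {company}"
    # Negation as failure: absence from the table is read as falsity,
    # so the answer states the assumption that makes that reading sound.
    return (f"Assuming that the table lists all the employees, "
            f"{person} is not an employee of {company}")

print(is_employee("Adrian", "Englishlogickernel"))
# Assuming that the table lists all the employees, Adrian is not an employee of Englishlogickernel
```

The hedge in the negative answer is doing real work: it surfaces the closed-world assumption that the database query silently relies on.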
 
                               Cheers,   -- Adrian

[1] Internet Business Logic
A Wiki and SOA Endpoint for Executable Open Vocabulary English Q/A over SQL and RDF
Online at www.reengineeringllc.com   
Shared use is free, and there are no advertisements

Adrian Walker
Reengineering



On Mon, Aug 2, 2010 at 1:54 PM, Rich Cooper <rich@xxxxxxxxxxxxxxxxxxxxxx> wrote:

Hi Ian,

If the intent of the tool's designers is to mimic human perspectives on
knowledge and logic, then negation as failure is more human-like, IMHO, than
any existing alternative.  A person with no experience in an area normally
is very skeptical of assertions that can't be proven within his/her database
of factual and structural knowledge, and reaches the same conclusion.  I'm
sure you've heard it said that you don't know what you don't know, so you
assume you know everything until proven otherwise.

Another way to look at it is that, within the bounds of evidence, a judge or
juror has no basis for any conclusion that is not consistent with known,
demonstrated facts.  It is always possible that other information will
surface in the future, but the rational deduction of the present moment has
to be based on known facts, not on missing information.

One consequence of this is that it is very hard to convince anyone of a fact
with which they have no personal familiarity, in specific or general terms.
That is why attorneys and laws depend on known facts.

-Rich

Sincerely,
Rich Cooper
EnglishLogicKernel.com
Rich AT EnglishLogicKernel DOT com
9 4 9 \ 5 2 5 - 5 7 1 2


-----Original Message-----
From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx

[mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Ian Horrocks
Sent: Monday, August 02, 2010 10:14 AM
To: edbark@xxxxxxxx; [ontolog-forum]
Cc: Bernardo Cuenca Grau
Subject: Re: [ontolog-forum] owl2 and cycL/cycML

It is even more tricky than this. The failure in "negation as failure"
doesn't mean failure of a given algorithm; it means "not provably true". There
are many decidable logics with NAF. If we have an incomplete reasoner for
such a logic, we are *still* incorrect if we take failure to return "True"
as being equivalent to "False", because the failure may simply be a symptom
of the incompleteness and have nothing to do with NAF.

Simple example: I am using a logic in which negation is interpreted as NAF.
I have a simple boolean theory in which negation isn't used and which
entails A(x). I ask if A(x) is entailed. My incomplete (for entailment)
reasoner answers "False". If I conclude from this that A(x) is not
entailed, then I am simply incorrect -- and this has nothing to do with NAF.
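That example can be made concrete. The checker below is a deliberately
crippled forward-chainer (it only runs a fixed number of rounds), purely to
force incompleteness -- it is an illustration of the point, not a real
reasoner. The honest answer for a failed proof is "unknown"; collapsing it
to False is the error, even though the theory uses no negation at all.

```python
# A sound-but-incomplete entailment checker, deliberately crippled by a
# fixed round limit to illustrate why "failed to prove" is not "false".

UNKNOWN = "unknown"

def entails_incomplete(facts, rules, query, rounds=1):
    """Forward-chain for a fixed number of rounds (hence incomplete)."""
    known = set(facts)
    for _ in range(rounds):
        snapshot = set(known)          # fire rules against a frozen state
        for body, head in rules:
            if set(body) <= snapshot:
                known.add(head)
    # Honest three-valued answer: failure to prove is not disproof.
    return True if query in known else UNKNOWN

# Negation-free theory that entails A(x), but only after two steps:
facts = {"P(x)"}
rules = [(("P(x)",), "Q(x)"), (("Q(x)",), "A(x)")]

print(entails_incomplete(facts, rules, "A(x)"))            # unknown
print(entails_incomplete(facts, rules, "A(x)", rounds=2))  # True
```

With one round the reasoner fails to prove A(x) even though the theory
entails it; reading that failure as "A(x) is false" is wrong for reasons
that have nothing to do with negation as failure.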

In fact I think that we would be well advised to strike NAF from the record
-- it's really not helpful in this discussion :-)

Ian





On 2 Aug 2010, at 17:45, Ed Barkmeyer wrote:

>
> Ian Horrocks wrote:
>
>> Regarding my claim that reasoners are typically used in a way that is
actually incorrect, to the best of my knowledge none of the incomplete
reasoners in widespread use in the ontology world even distinguish "false"
from "don't know" -- whatever question you ask, they will return an answer.
Thus, in order to be correct, applications would have to treat *every*
"false" answer as "don't know". I don't know of any application that does
that.
>>
>
> Put another way, it is not incorrect to treat "don't know" as "false",
> if "negation as failure" is a stated principle of the reasoning
> algorithm.  We can state the 'negation as failure' principle generally
> as "if the assertion cannot be proved from the knowledge base, the
> assertion is taken to be false."
>
> Of course, "proved" means that the reasoning algorithm can derive a
> proof, which depends on the algorithm actually implemented in the
> engine.  As Ian mentioned earlier, this kind of "proof" implies that the
> nature of the reasoning algorithm is, or incorporates, "model
> construction", which is typical of various kinds of logic programming
> engines, but there are many hybrid algorithms.
>
> -Ed
>
> --
> Edward J. Barkmeyer                        Email: edbark@xxxxxxxx
> National Institute of Standards & Technology
> Manufacturing Systems Integration Division
> 100 Bureau Drive, Stop 8263                Tel: +1 301-975-3528
> Gaithersburg, MD 20899-8263                FAX: +1 301-975-4694
>
> "The opinions expressed above do not reflect consensus of NIST,
> and have not been reviewed by any Government authority."
>
>
> _________________________________________________________________
> Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
> Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
> Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
> Shared Files: http://ontolog.cim3.net/file/
> Community Wiki: http://ontolog.cim3.net/wiki/
> To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
> To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx
>






 



 


