ontolog-forum

Re: [ontolog-forum] Types of Formal (logical) Definitions in ontology

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Ali H <asaegyn@xxxxxxxxx>
Date: Wed, 25 Jun 2014 11:02:19 -0400
Message-id: <CADr70E02vJy2AQ-rwyhK__FtPu4zaQpWAkrSHgrpem8GYwLOGQ@xxxxxxxxxxxxxx>
Hi Robert,

As noted, I've also encountered difficulty in finding publications on these topics, but a few projects that aspire to similar ends do come to mind:
  1. OpenCog (Ben Goertzel et al.) - http://wiki.opencog.org/w/The_Open_Cognition_Project
  2. ACT-R (John Robert Anderson et al.) - http://act-r.psy.cmu.edu/about/
  3. CLARION (Ron Sun et al.) - http://www.cogsci.rpi.edu/~rsun/clarion.html <-- though I'm not sure whether this project is still under active development

Best,

Ali



On Wed, Jun 25, 2014 at 10:50 AM, <rrovetto@xxxxxxxxxxx> wrote:
@Ali: Thank you. You make good points, as the others have. Wrt artificial reasoning, a hybrid or complementary reasoning system that uses non-FOL and FOL sounds appropriate, and perhaps promising toward creativity-like and free-thinking (as you say) reasoning. Also the previous point about finding a place for FOL or syllogistic logic. An unstated concern was over exclusively using a particular logic that is not enough, e.g., deduction, FOL, syllogistic, to get the answers and results that the mind and scientific thought achieve. For example, many ontologies I've been exposed to use FOL, and I haven't seen more expressive or non-syllogistic/non-FOL logics used therein. So I wonder. But if hybrid systems are making progress, great. If you have URLs or pointers to some publications, I'd be curious. Thanks.


On Wed, Jun 25, 2014 at 10:32 AM, Ali H <asaegyn@xxxxxxxxx> wrote:
Hi Robert,

A couple of quick reactions.

On Wed, Jun 25, 2014 at 9:32 AM, <rrovetto@xxxxxxxxxxx> wrote:
To clarify then, I did not mean artificial languages or "method[s] of reasoning humans invent". I did not mean artificial reasoning. I meant how the mind naturally reasons.
The psychology literature (the psychology of reasoning, I think) and elsewhere, if memory serves, demonstrates (as does our familiarity with daily interactions and inner life) that human beings do not naturally reason according to deduction (or syllogistic logic). Deduction and syllogisms leave no room for creativity, which is essential: a set of premises and what follows from them, nothing outside the box. So my question was: why then use it? Why not create an artificial language that more closely approaches the truth? Even if you don't agree that our minds don't naturally employ deduction, the question "What are non-FOL/non-deduction/non-syllogistic logics for ontology?" is still valid.

But there is also a question of how you choose those premises. Are they simply static? Can you generate them dynamically? What if there is ambiguity or freedom in how you select them? What's the underlying architecture?

Admittedly, the mechanism one uses to choose (or construct) a set of premises for deductive reasoning may itself not be deductive reasoning (though you can layer multiple levels, to have a dedicated layer of FOL-based reasoning select the appropriate set of premises), but therein lies an echo to what JohnS and EdB were saying - these reasoning systems are complementary.
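A toy sketch of such a dedicated premise-selection layer, routing a novel input to one or more internal theory modules by vocabulary overlap. All module names, vocabularies, and the scoring rule here are purely illustrative assumptions, not taken from any actual system:

```python
# Hypothetical internal modules, each a set of predicate symbols
# belonging to one FOL theory in a modular architecture.
MODULES = {
    "naive_physics": {"mass", "force", "falls", "supports"},
    "kinship":       {"parent", "sibling", "ancestor"},
    "scheduling":    {"before", "after", "overlaps", "deadline"},
}

def route(input_symbols, threshold=1):
    """Return every module sharing at least `threshold` symbols with
    the input; a novel input may activate several modules at once,
    giving the 'novel combinations' effect described above."""
    scores = {name: len(vocab & input_symbols)
              for name, vocab in MODULES.items()}
    return sorted(name for name, score in scores.items()
                  if score >= threshold)

# An input mixing physical and temporal vocabulary activates
# two modules together.
print(route({"force", "before", "deadline"}))
```

The selection layer itself need not be deductive; here it is a simple overlap heuristic sitting above the FOL modules, which is one way of reading the layering idea.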

As an example of trying to support a creativity-like / free-thinking module, imagine you are presented with a novel set of inputs. Assuming the inputs are not already in the language of your system (though even if they are), and assuming your internal FOL system comprises a set of FOL theories connected in a modular architecture, there is a requisite step of mapping the inputs to your internal set of premises. This mapping process can then choose to interpret or map your input to one or more (or novel combinations) of your internal modules. But taking this further ties into your next statements:


But if syllogistic logic is used for ontologies "full stop," as you said, that's troubling because of the disparity and potential issues wrt ethics and psychology.
Besides, wouldn't this mean that in order to get certain answers (beyond what deduction or syllogisms can yield), work-arounds, additions, or corrections are needed?

If "We create ontological models of some sets of concerns, precisely because we have tools that implement syllogistic inference reliably" [bold added], then what about creating tools that implement a more realistic and expressive (closer to how our minds work) reasoning/logic?
So the other question was, what are such alternative non-deductive/non-syllogism logics that can be used for ontologies?

I don't see it as an either/or proposition.

One can combine the various forms of reasoning into a hybrid system (though establishing correctness for statements generated by a combination of them is not trivial). As an example, I once implemented a system that would translate (classes of) NL statements into a HOL form, then pass them off to a physics engine + graphics processor for statistical and calculus-based reasoning, before sending the results back to the HOL system for further reasoning and translation back into NL.
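A minimal sketch of such a hybrid pipeline: a symbolic front end extracts a quantitative sub-problem, hands it to a numeric engine, and reasons the result back into language. Every function here is a hypothetical stand-in for the stages described (NL-to-HOL translation, the physics engine, and the translation back), not the actual system:

```python
def parse_to_logic(sentence):
    # Stand-in for the NL -> HOL translation stage: pull out a
    # drop height from sentences like "A ball drops from 20 meters".
    height = float(sentence.split()[-2])
    return {"query": "fall_time", "height_m": height}

def numeric_engine(form):
    # Stand-in for the physics engine: free-fall time t = sqrt(2h/g).
    g = 9.81  # m/s^2
    return (2 * form["height_m"] / g) ** 0.5

def answer(sentence):
    form = parse_to_logic(sentence)   # symbolic front end
    t = numeric_engine(form)          # numeric back end
    # Stand-in for translating the numeric result back into NL.
    return f"It falls for about {t:.2f} seconds."

print(answer("A ball drops from 20 meters"))
```

The point of the sketch is the hand-off pattern, not the physics: each stage reasons in the paradigm it is best suited to, and the symbolic layer owns the interfaces on both ends.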

Having an FOL-derived base is useful, as its model theory is very well known; it serves as an underlying glue to stitch the other reasoning paradigms into a coherent and (if you're careful, in some cases) provably consistent whole.

That said, I've found a dearth of (public?) publications on these types of hybrid reasoning systems.

Best,
Ali


On Tue, Jun 24, 2014 at 5:37 PM, John F Sowa <sowa@xxxxxxxxxxx> wrote:
Ed,

I'm glad that we agree on something:

> I have a problem with: “syllogistic logic is not how the mind reasons”.
> It is rather only one of several reasoning mechanisms used by human
> minds.  We also use induction, analogy, statistical reasoning, and a
> number of exotic mathematical methods.

Every method of reasoning that humans invented is supported by the
human mind.  We don't know how to design a computer that can reason
by all the methods humans do.  But any human who designs a digital
computer or a program that runs on it knows how to reason by the
same method as the computer.

> It takes many ingredients to make the soup of human consciousness;
>  we are just growing the leeks.

I certainly agree with the first line.  But I'm not sure about
the leeks.

John



_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
 



--
.
(•`'·.¸(`'·.¸(•)¸.·'´)¸.·'´•) .,.,



