
Re: [ontolog-forum] Self Interest Ontology

To: <doug@xxxxxxxxxx>, "'[ontolog-forum] '" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "Rich Cooper" <rich@xxxxxxxxxxxxxxxxxxxxxx>
Date: Thu, 8 Sep 2011 09:02:45 -0700
Message-id: <B1C79F935A40427FBE776DF484F700E6@Gateway>

Dear Doug,

You wrote

    What would be the purpose of this? It would be the
    union of many disjoint, overlapping, and subsuming
    meanings.  Individual meanings would have to be
    determined in order to make sense of an English
    phrase or sentence with the word.

The purpose is to compare two texts composed of sentences that may implement a claim element.  For example, consider a claim element CE like the following:

    [CE]: a stored histogram of the user's page viewing behavior,

could be compared to a specification sentence D12,7 such as this one:

    [D12,7]:  The database contains a collection of page URLs and a histogram of the number of clicks which the user made while viewing the page, with a count of the number of purchases which the same user made on that URL.

The question asked is: can the CE be understood in view of the D12,7 sentence by reviewer R?

This kind of question comes up often in patent analysis tasks.  The function of the lattice is to coordinate the linguistic task of determining (automatically) which sentences are good candidates for comparison with the CE.  By choosing keywords that are relatively rare in claims generally, yet which appear in this patent's claims, a program can suggest the most relevant sentences for the analyst's attention, thus focusing the patent analyst on the sentences most likely to be productively used and increasing that analyst's productivity when faced with many such patent texts.
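To make that first, keyword-matching phase concrete, here is a minimal sketch in Python; the tokenizer, the IDF weighting, and the example sentences are purely illustrative assumptions on my part, not a description of any existing tool:

import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase word tokens; a deliberately crude stand-in for real NLP."""
    return re.findall(r"[a-z]+", text.lower())

def idf_weights(corpus_sentences):
    """Inverse-document-frequency over the specification corpus, so that
    words which are rare in the corpus (e.g. 'histogram') outweigh the
    frequent 'noise' words (put, get, set, the, of, ...)."""
    n = len(corpus_sentences)
    df = Counter()
    for sent in corpus_sentences:
        df.update(set(tokenize(sent)))
    return {w: math.log(n / df[w]) for w in df}

def rank_candidates(claim_element, corpus_sentences):
    """Score each specification sentence by the summed IDF weight of the
    claim-element keywords it shares, highest first."""
    idf = idf_weights(corpus_sentences)
    ce_words = set(tokenize(claim_element))
    scored = []
    for sent in corpus_sentences:
        shared = ce_words & set(tokenize(sent))
        scored.append((sum(idf.get(w, 0.0) for w in shared), sent))
    return sorted(scored, reverse=True)

# Example with the CE and D12,7 sentences from above:
CE = "a stored histogram of the user's page viewing behavior"
D12_7 = ("The database contains a collection of page URLs and a histogram "
         "of the number of clicks which the user made while viewing the page, "
         "with a count of the number of purchases which the same user made on that URL")
OTHER = "The server logs each login attempt with a timestamp"
print(rank_candidates(CE, [D12_7, OTHER])[0][1][:40])  # D12,7 ranks first

Something this crude only narrows the field; the point is that the rare-word scores hand the analyst a short list rather than the whole specification.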

Given a database of many claims, claim elements, and relevant sentences matching them, additional knowledge can be gleaned from that corpus of corpora by using the lattice I mentioned.  The idea is to organize the semantics of the matching sentences selected in the first (keyword matching) phase with a second (semantic modeling) phase that better understands such words as put, get, and set, and many other English words that are often characterized as "noise" words or "frequent" words.

Remember David Eddy's estimate of 1,500 to 6,000 words for the number of concepts in a business?  There are several times more in my estimation, but David is correct in that the average business (and the average American) uses a relatively constrained number of words.  By modeling those frequent words (put, set, get, ...) with some very basic FOL assertions that track the meanings of those words in terms of that small kernel vocabulary, it might be possible to push the analysis much more deeply into the semantic realm.  It isn't necessary to FULLY understand the sentence so much as to automate the selection of "good" matches as opposed to bad ones.
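As one purely hypothetical reading of what such kernel assertions might look like, here is a sketch; the predicate names located_after, holds_after, and value_after and the three axioms are my own illustrative assumptions, not a proposed kernel:

# Toy kernel: each frequent verb maps to a small set of FOL-style axioms
# over a handful of kernel predicates.  The predicate vocabulary and the
# axioms below are illustrative assumptions only.
KERNEL_AXIOMS = {
    # put(agent, object, place): afterwards the object is located at the place
    "put": ["forall a,o,p: put(a,o,p) -> located_after(o,p)"],
    # get(agent, object, source): afterwards the agent holds the object
    "get": ["forall a,o,s: get(a,o,s) -> holds_after(a,o)"],
    # set(agent, attribute, value): afterwards the attribute has that value
    "set": ["forall a,att,v: set(a,att,v) -> value_after(att,v)"],
}

def kernel_reading(sentence_tokens):
    """Collect the kernel axioms triggered by the frequent words in a sentence;
    everything else is left as an uninterpreted noun/verb phrase."""
    triggered = []
    for tok in sentence_tokens:
        triggered.extend(KERNEL_AXIOMS.get(tok, []))
    residue = [t for t in sentence_tokens if t not in KERNEL_AXIOMS]
    return triggered, residue

axioms, residue = kernel_reading("the user put the histogram into the database".split())
print(axioms)  # ['forall a,o,p: put(a,o,p) -> located_after(o,p)']

The words outside the kernel ("histogram", "database", ...) stay as opaque phrases, which is exactly the partial understanding I have in mind.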

So I really would like to get to the bottom of that lattice question, to wit: how do you construct a lattice comprised of both properties (i.e., constraints that use the properties as predicates) and METHODs that interpret the properties, in as regularized and testable a way as possible?  The lattice has to be stable, has to use only the frequent words, leaving the others as unknown noun and verb phrases, and has to produce consistent and reliable comparisons.

Starting with the VERY most common words like put, set and get, how would such a lattice be organized?  I mean that in the mathematical sense, not in the sense of any specific tool, whether DOLCE or WordNet or another.  However, WordNet's graph, cycles and all, is a good starting point for selecting a subset that forms a DAG, which can serve as the overall structure of the lattice.  Now the question is what METHODs go into the nodes of the lattice for the purpose of interpreting the frequent words only.
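Here is a minimal sketch of what that skeleton might look like as a data structure: a DAG of word senses ordered by hypernymy, with an interpretation METHOD attached only to the nodes for the frequent words.  The sense names, edges, and cycle check are toy assumptions for illustration, not WordNet data or any particular tool's design:

class Node:
    def __init__(self, name, method=None):
        self.name = name          # e.g. "put.v.01"
        self.method = method      # callable that interprets this sense, or None
        self.parents = set()      # more general senses (hypernyms)

def add_edge(child, parent, nodes):
    """Insert child -> parent only if it does not create a cycle,
    so the subset we keep stays a DAG."""
    def reaches(frm, target, seen=frozenset()):
        if frm is target:
            return True
        return any(reaches(p, target, seen | {frm})
                   for p in frm.parents if p not in seen)
    if reaches(nodes[parent], nodes[child]):
        return False              # would close a cycle; skip this edge
    nodes[child].parents.add(nodes[parent])
    return True

nodes = {n: Node(n) for n in ["move.v.01", "put.v.01", "insert.v.01", "get.v.01"]}
add_edge("put.v.01", "move.v.01", nodes)
add_edge("insert.v.01", "put.v.01", nodes)
add_edge("move.v.01", "insert.v.01", nodes)   # rejected: would create a cycle

# Attach an interpretation METHOD at the node for the frequent word "put":
nodes["put.v.01"].method = lambda agent, obj, place: ("located_after", obj, place)
print(nodes["put.v.01"].method("user", "histogram", "database"))

The open question is still which METHOD goes at which node, and how the methods compose along the edges.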

The Self Interest Ontology is relevant because the patent analyst has a task to perform.  Either she has to affirm that the claim element defined in patent P0 is uniquely distinct from past patents, or she has to show that the claim element is also practiced in patent P1.  She has a model of the person of ordinary skill in the art (Posita) who, after reading Px, would be able to implement Px's claim which contains the claim element.  So that is yet another subjective viewpoint to be modeled.

Self interest is involved in that the analyst and the hypothesized Posita both have an (assumed) INTENTION to view the same materials for SUBJECTIVE purposes, i.e., one wants to find that P1 anticipates P0, and the other wants to find that P0 can be implemented.  In related tasks, there are other agents with different assumed INTENTIONs, which might conflict with those of the first agents.  So this is just one example; there are many more viewpoints in practice.

The common thread, regardless of the agents' intentions, is that the semantics of the claim elements and the semantics of the specification sentences should both be related to the knowledge of the Posita.  The Posita thus serves as a subject for construing a semantic model of the process described in the claim.

If that is unclear, please feel free to ask questions.  But the real objective of this email is to find a way to construct the lattice that is mathematically sound, i.e., how do you organize the signatures of sentences containing the frequent words matching the claim element?
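As one possible reading of "organizing the signatures" (an assumption on my part, offered only to make the question concrete), a sentence's signature could be taken as the set of frequent kernel words it shares with the claim element, with signatures ordered by set inclusion in FCA style:

from itertools import combinations

def signature(sentence_words, kernel, ce_words):
    """The kernel words a sentence shares with the claim element."""
    return frozenset(w for w in sentence_words if w in kernel and w in ce_words)

def subsumption_pairs(signatures):
    """Return (more-specific, more-general) pairs: a signature is below
    another whenever the second is a proper subset of the first."""
    pairs = []
    for a, b in combinations(signatures, 2):
        if a < b:
            pairs.append((b, a))
        elif b < a:
            pairs.append((a, b))
    return pairs

kernel = {"put", "get", "set", "store", "view", "count"}
ce = {"store", "histogram", "view", "page", "user"}
sigs = {
    "D12,7": signature({"store", "view", "count", "page", "user"}, kernel, ce),
    "D3,2":  signature({"view", "page"}, kernel, ce),
}
print(subsumption_pairs(sigs.values()))
# D12,7's signature {'store', 'view'} sits below D3,2's more general {'view'}

Whether that ordering is the right mathematical backbone for the lattice is exactly the question I am asking.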

HTH,
-Rich

Sincerely,
Rich Cooper
EnglishLogicKernel.com
Rich AT EnglishLogicKernel DOT com
9 4 9 \ 5 2 5 - 5 7 1 2

-----Original Message-----
From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of doug foxvog
Sent: Thursday, September 08, 2011 6:54 AM
To: [ontolog-forum]
Subject: Re: [ontolog-forum] Universal and categories in BFO & DOLCE

On Wed, September 7, 2011 16:02, Rich Cooper said:
> ...
>
> Consider words like get, set, put which have
> multiple conceptual counterparts, I think you
> mentioned there are more than 600 WordNet entries
> for these polysemous three.  A single TYPE object
> designed to implement PUT, in all its polysemous
> glory,

What would be the purpose of this? It would be the
union of many disjoint, overlapping, and subsuming
meanings.  Individual meanings would have to be
determined in order to make sense of an English
phrase or sentence with the word.

For NL, what would be far more useful is a set of
denotations for Put-TheEnglishWord plus denotations
for phrases which include the word "put" whose meanings
can not be derived from the meanings of their components.

> could be constructed from MULTIPLE other
> object TYPEs that represent the various semantic
> interpretations normally given to that word, so
> that a SINGLE TYPE can interpret all 600+ meanings
> by determining the context in which said PUT is
> used, and implement its proper semantic
> interpretation by choosing which edge in the
> lattice to traverse to call the specialized
> subTYPE with the correct semousness.

This seems far more complex than having multiple
denotations.

> Can you explain how such a SINGLE TYPE can be
> constructed without inconsistencies?

It could be constructed as a union of types, but it
would not, imho, be useful.

> -Rich

>
> Sincerely,
> Rich Cooper
> EnglishLogicKernel.com
> Rich AT EnglishLogicKernel DOT com
> 9 4 9 \ 5 2 5 - 5 7 1 2
>
> -----Original Message-----
> From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of John F. Sowa
> Sent: Wednesday, September 07, 2011 11:02 AM
> To: ontolog-forum@xxxxxxxxxxxxxxxx
> Cc: rick@xxxxxxxxxxxxxx
> Subject: Re: [ontolog-forum] Universal and categories in BFO & DOLCE
>

> On 9/7/2011 12:31 PM, Rich Cooper wrote:
>
>> Generalization removes a property or method from the old type
>> to create a new type, while specialization adds a property or
>> method to the old type to create the new type, by definition.
>
> Those two operations can be expressed in logic.  For example,
> the property P could be defined as a conjunction of other properties
> (or attributes or features or facets or whatever you want to call them):
>
>     P = p1 & p2 & ... & pN
>
> If you remove a property from P, you get a more general property Q
> that is implied by P:  If P, then Q.
>
> If you add another property to P, you get a more specialized
> property R that implies P:  If R, then P.
>
> Basic principle:  More general properties are true of a larger
> number of cases, and more specialized properties are true of
> a smaller number of cases.
>
> The more special case implies the more general case.  But you
> can use logics with more operators than just conjunction.
>
>> ... many programming languages (Delphi) also restrict type
>> constructors to only have singular inheritance which avoids
>> the very possibility of introducing consistency errors.
>
> That is a brute-force method.  It's better to use development
> tools that are guaranteed to ensure all and only the valid
> generalizations.  There are many such tools.
>
>> But what about languages with multiple inheritance (C++ etc)
>> where the new type is a combination of old type properties
>> and methods, given that the specific new type definition
>> ALSO leaves out some of the properties and methods of the
>> old types?  That would make a lattice rather than a hierarchy.
>> The problem is that the constructors might introduce
>> inconsistencies...
>
> What you need are better development tools that don't introduce
> inconsistencies *and* can detect and eliminate any inconsistencies
> that may be lurking.
>
> If you recall the earlier discussions with Rick Murphy, he was
> recommending methods based on intuitionistic logic that generate
> combinations that are computationally efficient to check.
>
> Description logics use a different choice of operators that
> also generate combinations that can be checked automatically.
>
> Another computational method, which uses a logic that is a subset
> of both DLs and intuitionistic logic, is Formal Concept Analysis
> (FCA).  The only operator that FCA methods use for defining
> properties is conjunction.  That creates a very simple logic
> that supports highly efficient tools.
>
> In fact, many people use FCA tools to verify that ontologies
> defined in OWL are consistent.  But you can also use FCA tools
> to *generate* the definitions automatically.  See
>
>     http://www.upriss.org.uk/fca/fca.html
>
> You can find more info about using FCA to check consistency
> of OWL ontologies by typing three keywords to Google:
>
>     FCA OWL concept
>
> You need the word 'concept' to avoid many extraneous hits.
>
> Important caveat:  these logics (DLs, intuitionistic logic, and FCA)
> are *subsets* of FOL.  They are useful for defining the hierarchy
> (whether you call it type, concept, generalization, or subsumption).
> But they are *not* general purpose knowledge representation languages.
> You need more general logics in order to specify everything you need
> to say (or to program).
>
> John



=============================================================
doug foxvog    doug@xxxxxxxxxx   http://ProgressiveAustin.org

"I speak as an American to the leaders of my own nation. The great
initiative in this war is ours. The initiative to stop it must be ours."
    - Dr. Martin Luther King Jr.
=============================================================

 


_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
