John,
I disagree specifically with your belief that run, put, set, etc. require the
representation, storage, and interpretation of those 645(?) viewpoints. Instead,
I believe there are many more senses than the few (645) officially recognized
by the lexicographers; people who actually use English all day long probably
carry far more than that number. But not all at the same time: only a few
interpretations per person are sufficient.
Only a few thousand words participate in most of a person's daily
conversations, even for very learned speakers. The senses of each of those few
thousand words NEED NOT be distinguished if the method of comparing them is
simply to order them alphabetically. Then even a simple binary search can find
a matching word in O(log2 N) time and O(N) space. So distinguishing among
words, even among millions of phrases, is neither difficult nor computationally
expensive.
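As a minimal sketch of that lookup, assuming the dictionary is just a sorted
Python list (the names here are mine for illustration, not from any actual
implementation):

    import bisect

    def find_entry(sorted_entries, entry):
        # Binary search: about log2(N) comparisons over N stored entries.
        i = bisect.bisect_left(sorted_entries, entry)
        if i < len(sorted_entries) and sorted_entries[i] == entry:
            return i          # index of the matching entry
        return None           # NO MATCH

    words = sorted(["park", "put", "ran", "run", "set"])
    find_entry(words, "ran")   # -> 2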
But before alpha-ordering sentences, some of the words should be mapped into
variables. I use the underscore (_) as the first letter of every variable
symbol, which makes lexical detection of variables simple and immediate, and
every new lexical is entered into the local context dictionary. So the phrase

    _Actor ran _Modifier the park.
is a sentence with two variables, which can be used as a matching mask to
compare any phrase containing the word “ran” against all the similarly
generalized phrases stored in a signature dictionary. For each match, the
Q&Aer has to interpret the new bindings for _Actor and _Modifier, which, in a
mature system, it will find in the same signature dictionary.
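A minimal sketch of that masking step, assuming each variable binds exactly
one word (a simplification of mine; a real matcher would presumably allow
multi-word bindings):

    def match_mask(mask, phrase):
        # Tokens beginning with '_' are variables; all others are literals.
        mask_toks, word_toks = mask.split(), phrase.split()
        if len(mask_toks) != len(word_toks):
            return None                      # NO MATCH: different shapes
        bindings = {}
        for m, w in zip(mask_toks, word_toks):
            if m.startswith("_"):
                if m in bindings and bindings[m] != w:
                    return None              # a variable must bind consistently
                bindings[m] = w
            elif m != w:
                return None                  # literal words must match exactly
        return bindings

    match_mask("_Actor ran _Modifier the park.", "John ran through the park.")
    # -> {'_Actor': 'John', '_Modifier': 'through'}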
This works somewhat like syntactic decomposition parsers, but without most of
their complexity, once the signature dictionary has grown enough to keep error
reports below an acceptable threshold. To guarantee convergence, be sure that
every phrase which has NO MATCH instead has a previously reviewed
interpretation function, and be sure that every MATCH has unifiable bindings,
either to existing symbols or to an acceptable threshold number of new
symbols.
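That convergence discipline might look like the following sketch, reusing
match_mask from above; the handler and fallback plumbing and the new-symbol
threshold are my illustrative assumptions, not a claim about the actual
system:

    def interpret(phrase, signature_dict, context, reviewed_fallback, max_new=2):
        # signature_dict maps generalized masks to interpretation handlers.
        for mask, handler in signature_dict.items():
            bindings = match_mask(mask, phrase)
            if bindings is None:
                continue
            # MATCH: bindings must unify with existing symbols, or add
            # at most max_new new symbols to the local context dictionary.
            new_syms = [w for w in bindings.values() if w not in context]
            if len(new_syms) > max_new:
                continue
            for w in new_syms:
                context[w] = w               # enter each new lexical
            return handler(bindings)
        # NO MATCH: use a previously reviewed interpretation function.
        return reviewed_fallback(phrase)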
This approach is not as perfect as mathematically correct parsers could in
principle be (though at present they are not), but it has been surprisingly
effective over the tens of thousands of documents I have personally analyzed
using dictionary-constrained contextual decomposition substituted for more
traditional parsing, and instead of converting to a supposed equivalent in
FOL. Done in layers it also works, but that is another topic.
The point is: people use stored representations (i.e., a phrasal dictionary)
and layers of pattern recognition (i.e., experiences and specializations
thereof) which, accidentally, include FOL. But FOL is so much less than the
diversity of English in those few thousand words per day that it appears
incidental, with logic, math, science, and engineering as afterthoughts that
showed up after some two hundred thousand years of refinement on the engine
that so evolved. We were lucky.
JMHO,
-Rich
Sincerely,
Rich Cooper
EnglishLogicKernel.com
Rich AT EnglishLogicKernel DOT com
9 4 9 \ 5 2 5 - 5 7 1 2
From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx
[mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of sowa@xxxxxxxxxxx
Sent: Saturday, June 11, 2011 6:25 PM
To: [ontolog-forum]
Subject: Re: [ontolog-forum] Run, put, and set
Rich,
I'm not sure what you're disagreeing with.

I agree with your conclusion:

    Our software should discuss our needs and then find a way to fill or
    ameliorate them. To do so requires, IMHO, logic embedded into rhetoric.

The three branches of language are grammar, logic, and rhetoric.

Rhetoric deals with the purpose of language. Without rhetoric, language would
be useless.
John