John,
Further clarifications interspersed: (01)
Patrick Cassidy
MICRA, Inc.
908-561-3416
cell: 908-565-4053
cassidy@xxxxxxxxx (02)
> -----Original Message-----
> From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-
> bounces@xxxxxxxxxxxxxxxx] On Behalf Of John F. Sowa
> Sent: Wednesday, January 21, 2009 1:49 PM
> To: [ontolog-forum]
> Subject: Re: [ontolog-forum] Next steps in using ontologies as
> standards
>
> Pat,
>
> Your multiline expansion of my two line simplification of your
> proposal does nothing to address the issues involved:
>
> PC> I have not suggested creating a terminology by using words
> > as representatives of primitive concepts, and at no time have I
> > *ever* confused words or other terms with the concepts that
> > they label, nor with the logical representations of the concepts
> > that they label. I have suggested creating a common foundation
> > **ontology** that *includes* logical representations of the concepts
> > that are also represented by the Longman defining vocabulary.
>
> Several points:
>
> 1. If you have a one-to-one mapping of the Longman words to your
> concepts, you have solved nothing by changing the term 'word'
> to the term 'concept'. (03)
[[PC]] Absolutely correct. As Guo showed, many of the defining words used in
the Longman definitions are used (in different definitions) in more than one
sense. Of course, they are used in many more senses in general text. The
different concepts (senses) labeled by one Longman word, if each is
considered a primitive required for ontological specification of meaning,
will each have a different label in the ontology. (04)
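As an illustrative sketch of the labeling scheme just described (the words and concept labels below are invented for illustration, not drawn from any actual FO or from Guo's inventory), one ambiguous defining word maps to several distinct, unambiguously labeled ontology concepts:

```python
# Illustrative sketch only: each surface word may label several senses,
# but each sense gets exactly one unambiguous label in the ontology.
# All words and labels here are hypothetical.
sense_inventory = {
    "bank": ["Bank-FinancialInstitution", "Bank-RiverEdge"],
    "hard": ["Hard-Difficult", "Hard-PhysicallyFirm"],
}

def ontology_labels(word):
    """Return the distinct ontology concept labels for a surface word."""
    return sense_inventory.get(word, [])

# One word, two primitives, each with its own label:
assert len(ontology_labels("bank")) == 2
```

The point of the sketch is only that ambiguity lives in the surface vocabulary, not in the ontology: the mapping from concept labels back to words may be many-to-one, but never the reverse.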
>
> 2. The Longman definitions are hopelessly vague and frequently false.
>
[[PC]] In many cases true. But my own attempts to create unambiguous and
true English-language definitions from those words have convinced me that
they are in fact an adequate inventory with which to create unambiguous and
accurate definitions, when that is one's goal. (05)
> 3. That vagueness is not a problem for human readers because they
> use their background knowledge to compensate for the vagueness.
> But that solution is not possible for a computer system.
>
[[PC]] Yes, of course. The ontology elements must be as unambiguous as
required for the applications contemplated. One application of the FO is
translation among many different domain ontologies. (06)
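A minimal sketch of that translation role, assuming hypothetical term mappings from two domain ontologies into a common FO (all names below are invented for illustration):

```python
# Hypothetical hub-and-spoke translation through a common foundation
# ontology (FO). Each domain ontology maps its terms onto FO concepts;
# composing one mapping with the inverse of another yields a
# domain-to-domain translation without pairwise alignments.
domain_a_to_fo = {"MedOnt:Physician": "FO:MedicalDoctor"}
domain_b_to_fo = {"HrOnt:Doctor": "FO:MedicalDoctor"}

def translate(term, source_map, target_map):
    """Translate a term from one domain ontology to another via the FO."""
    fo_concept = source_map.get(term)
    if fo_concept is None:
        return None
    # Invert the target mapping to go from the FO concept to a target term.
    fo_to_target = {v: k for k, v in target_map.items()}
    return fo_to_target.get(fo_concept)

assert translate("MedOnt:Physician", domain_a_to_fo, domain_b_to_fo) == "HrOnt:Doctor"
```

The design point the sketch illustrates: with N domain ontologies mapped to one common FO, N mappings suffice where pairwise alignment would need on the order of N*N.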
> 4. If you replace the Longman definitions with unambiguous formal
> statements, you will get one precise definition of each term
> and thereby lose the option of the multiple microsenses that
> a human reader can use to supplement the vagueness.
> (07)
[[PC]] Guo's thesis study indicated that there are in fact relatively few
senses required for the purpose of creating definitions of more complex
concepts. Microsenses do not appear to show up as significant in the
"language game" of creating linguistic definitions. (08)
> In short, you'll take a dictionary that is useful for humans and
> replace it with a selection of underspecified concepts that are
> of no value for a computer system.
>
[[PC]] No, the concepts represented in the FO will not be underspecified.
They will be as precise as the domain ontologist or consortium member
believes is necessary. And they can be refined to be more precise as the
need becomes apparent. (09)
> PC> As Pat Hayes pointed out, all of his time theories can be
> > "expressed by" (Pat Hayes's phrase) axioms containing only
> > three classes, time point, time interval, and duration.
>
> Yes, indeed. But his axioms (or anybody else's) relate those terms
> without assuming any prior definitions. For example, the axiom F=ma
> relates force, mass, and acceleration without requiring any prior
> definition of any of those three terms.
>
> But as Pat H said, and I strongly agreed, many of the theories
> that use those terms (Newtonian, relativistic, quantum mechanical,
> etc.) relate them by means of different axioms that are inconsistent
> with one another. The kinds of primitives you have suggested and
> the vague definitions that accompany them are of *ZERO* value for
> helping a professional in any field write axioms.
> (010)
[[PC]] I strongly disagree, but I would be quite interested in learning what
examples you know of where a professional tried to use logical
specifications of primitive concepts to write axioms and found them useless.
Specifics, please.
What I find disturbing in many of your criticisms is the notion that
something was tried by someone, somewhere, and didn't work - therefore we
should never try again, by any method. I have not found any specifics in
the anecdotal evidence you present to establish the impossibility of this or
that, just assertions that one method (whose details are not public) didn't
work. As *evidence* it is utterly useless, worse than hearsay, because no
details are presented.
If you know the details, you can make a useful contribution to this
subthread by telling us, in some case of an attempt to use the Cyc ontology:
(1) what was tried, and what was the goal; (2) how was Cyc planned to be
used; (3) how much effort was put into that particular project; (4) *why*
was the project terminated, or Cyc deemed unsuitable; and (5) what other
means were used to accomplish the intended goal, if any?
That would be useful information. Please, no more vague anecdotes. (011)
> PC> What we don't know now, but can discover by the consortium process,
> > is just how large a group of logically consistent ontology elements
> > can be agreed on, whether they are considered "basic" or not.
>
> The idea of a consortium for writing definitions was tried by the
> Japanese EDR project. They spent billions of yen on that project,
> they produced a dictionary of 410,000 concepts mapped to English and
> Japanese, CSLI has a copy of it, and nobody has found any commercial
> use for it:
>
[[PC]] I never expected the EDR to be any more useful for reasoning than
WordNet; its purpose differed in fundamental ways from the FO proposal.
It is a project that is essentially irrelevant to the consortium proposal
for a common FO, except perhaps to demonstrate that a large group *can*
build some artifact on which they agree. In that respect, it should
encourage us to try a consortium on a project more suitable for use in
logical inference. (012)
>
> John
>
[[PC]] Pat (013)
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx (014)