On Feb 25, 2010, at 11:18 AM, Patrick Cassidy wrote: (01)
> John,
> The disconnect between PatH's view of "meaning" and mine is that he is
> content to believe that the meanings of the elements used in programs,
> databases, ontologies (e.g. time, distance, physical object, dollar,
> person) all change every time we add a new assertion about unicorns,
> and I am not. (02)
It is not a matter of being content to believe. I am asserting this AS
A FACT, and you are simply in denial about elementary facts of
semantic theory. Now, of course, you are free to invent an alternative
semantic theory, one that supports your intuitions about meanings
being fixed when axioms change, but I would like to see that theory
given some reasonably precise flesh before proceeding to discuss this
matter very much further. (03)
> IMNSHO, this is not a useful interpretation of "meaning" for practical
> programming purposes. (04)
You may be right, but it is PROVABLY the one that corresponds to
formal logics of just about every kind known, and certainly for the
ones that you apparently propose to use. This is just a (well-known)
fact, summarized in Goedel's completeness theorem for FO logic. (05)
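For the record, the fact being appealed to, stated in the usual notation: for
a first-order theory T and a sentence \varphi,

    T \vdash \varphi \iff T \models \varphi

i.e. what a sound and complete FO reasoner can derive from T is exactly what
is true in every model of T; provability and entailment stand or fall together.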
> I am well aware that for each new axiom added to an ontology, some
> logical inferences derived from input assertions will change. But in my
> view there are some elements - specifically the primitives - whose
> interpretation by programmers, database developers, and domain ontologists
> *must* not change when remotely related elements are added (06)
How can it be that (valid) inferences change when meanings do not?
Please give a quick sketch, at least, of how this conjuring trick is
to be managed. (07)
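A sketch of the standard (Tarskian) bookkeeping, for reference: the meaning of
a theory T is carried by its class of models Mod(T), and

    T \models \varphi \iff Mod(T) \subseteq Mod(\{\varphi\})

Adding an axiom A gives Mod(T \cup \{A\}) \subseteq Mod(T): the model class can
only shrink, which is exactly how new entailments appear. So if the valid
inferences change, the model class has changed, and on this account the
meaning has changed with it.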
> , and they will still be used in the same way in programs and still
> give **the same answers** to queries (08)
But if they give different entailments, then they will NOT give the
same answers to queries. This is just obvious, Pat, surely you can see
this? I mean, it is *mechanically* obvious. Query answering is done by
running an inference engine, right? (09)
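To make the mechanical point concrete, here is a minimal sketch (in Python,
not any particular reasoner; the toy predicates Dog, Animal, Mortal and the
constant fido are invented for the illustration): a naive forward-chaining
closure over ground Horn rules, queried before and after one axiom is added.

    # Each rule is a pair (premises, conclusion) over ground atoms.
    def closure(facts, rules):
        """Saturate `facts` under `rules` by naive forward chaining."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if set(premises) <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    facts = {"Dog(fido)"}
    rules = [(["Dog(fido)"], "Animal(fido)")]           # the original axioms
    print("Mortal(fido)" in closure(facts, rules))      # False

    rules.append((["Animal(fido)"], "Mortal(fido)"))    # one added axiom
    print("Mortal(fido)" in closure(facts, rules))      # True

Same query, different answer, purely because the axiom set grew.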
> that are of practical importance to the programs they are used in -
> even if they may give different answers to bizarre test queries that
> force reasoning to reach the remote parts of the ontology that have
> changed. (010)
OK, so they WILL give different answers. Thank you for finally
admitting the obvious. Judgements of 'bizarreness' are very hard to
make when inferences are being generated. Often the most important
inferences can involve very surprising lines of reasoning, especially
when checking consistency. This is why lawyers make such a good living. (011)
> Semantic interoperability is a practical problem that has been
> addressed in the past by local agreements on interpretation of data
> elements without resort to Model theoretic/Tarskian theories (012)
Oh, BS. The model theory might not be stated explicitly, but wherever
it is not being used, semantic ambiguity arises. UML is a famous
example. Databases before Codd are another. (013)
> , and the FO is merely a practical tactic to extend the ability to
> forge useful and practical agreements on meaning among a much wider
> community than is possible through local agreements. In spite of the
> admitted beauty and power of model theory, if in fact it forces the
> conclusion that all meanings change when a new axiom is added, then it
> is not a proper formalization of what "meaning" has to mean in a
> computational ontology. (014)
On the contrary, this is exactly why it is the right theory. Your
picture of absolute, unchanging 'meanings' fixed for ever is simply an
illusion. Even human natural languages do not conform to this fantasy. (015)
> One way that the apparent problem might be addressed is to recognize that
> programs have some expected behavior for their data elements, and this can
> be tested with test suites. If some change to an FO causes a change in the
> usage of data elements that are not intended to be changed, then either (1)
> that change is inconsistent and has to be rescinded or somehow modified so
> that the proper usage of other data elements is not affected; or (2)
> programs that are affected by a change in the FO have to use the old FO,
> and if they need to interoperate with programs that use the new FO they
> will have to take precautions against the undesirable changes.
> But these problems affect interoperability primarily if there are changes
> in the FO. The problem of changes to the FO is precisely why I have
> suggested that it is important to identify at the earliest stage as many
> as possible of the primitives that will be used for translating among
> multiple domain ontologies. That will keep the need for changes to an FO
> to the minimum that is practical.
>
> Another comment from PatH that is puzzling:
>> [PC] Perhaps future objections could focus on genuine technical problems
>> (not analogies with human language), and better yet suggest alternatives
>> to solving the problem at hand: not just *some* level of interoperability,
>> but accurate interoperability that would let people rely on the inferences
>> drawn by the computer. If not a common FO, then what?
>>
>> [PH] Nothing. This is not a viable goal to seek. It is a fantasy, a dream.
>> One does not seek alternative ways to achieve a fantasy.
>
> I had thought that general accurate semantic interoperability (not
> "perfect" interoperability) was the goal of the "Semantic Web", and it
> seems that PatH's comment therefore includes the SW in the realm of
> "fantasy". (016)
The picture of the SWeb as one global RDF graph containing a single
consistent account of all of human knowledge is indeed a fantasy. IMO,
the idea of complete universal interoperability is a fantasy also. I
have never believed that the SWeb is trying to achieve "general
accurate semantic interoperability", at least if I understand that
phrase properly. The very first presentation I ever gave to any SWeb
meeting had a slide in it with the caption "When ontologies collide",
where I addressed the fact (as I believe it to be) of the absolute
inevitability of there being ontological differences, inconsistencies
and mismatches on the SWeb. I think that 'local' communities of
practical interoperability will emerge, sufficient to get their
business done. This is already happening, of course. I also think that
trying to legislate this process or design it top-down by any kind of
semantic Manhattan project is doomed to failure, and that the
development should be left to market forces. These are what will in
fact determine the future of the SWeb in any case, whatever we decide
in this forum and probably whatever is done by government agencies. I
also believe that the best bang for the research buck right now in
this area will arise from studying how to map between large numbers of
alternative ontologies to get practical jobs done, and that this
mapping work is proceeding rapidly, though most of it not in this forum. (017)
Pat H (018)
> Since PatH is a Semantic Web enthusiast, I doubt that was his intention,
> but it would seem that some clarification is required.
>
> Pat
>
> Patrick Cassidy
> MICRA, Inc.
> 908-561-3416
> cell: 908-565-4053
> cassidy@xxxxxxxxx
>
>
>> -----Original Message-----
>> From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-
>> bounces@xxxxxxxxxxxxxxxx] On Behalf Of John F. Sowa
>> Sent: Thursday, February 25, 2010 8:47 AM
>> To: [ontolog-forum]
>> Subject: Re: [ontolog-forum] Foundation ontology, CYC, and Mapping
>>
>> Pat C, Pat H, and Ron,
>>
>> RW> [Pat H's reply to Pat C] does look like some things worth keeping
>>> for more than the time it takes to read an e-mail and press delete.
>>
>> I agree. I'd just like to summarize and emphasize a few points.
>> In the following summary, the quoted sentences are by Pat H, and
>> the unquoted sentences are by me.
>>
>> 1. "Tarskian semantics... is a very general theory of meaning, one
>> that can be applied to a wide range of languages and notations."
>>
>> Yes indeed. In fact, *every* theory of formal ontology that anyone
>> has proposed in the past half century is based on a Tarski-style
>> semantics. That includes Cyc, SUMO, BFO, Dolce, etc., etc., etc.
>> It also includes the semantics for every digital system (hardware
>> or software) that has ever been designed and built since the 1940s
>> -- including those for which the designers had no idea what a
>> formal semantics is or might be.
>>
>> 2. "It is just wrong to draw the contrast between the natural
>> things,
>> on the one hand, and the account provided of those things by a
>> theory of them, on the other, as a difference of **kind**."
>>
>> Yes. Every statement in logic is absolutely precise. The common
>> words used to define the subject in Longman's dictionary (or any
>> other dictionary written by lexicographers for human readers) are
>> usually rather vague and shift their meanings slightly from one
>> definition to the next. But that vague cloud of meaning *includes*
>> the formally defined meaning. The vague meaning covers more cases
>> and it has a fuzzier boundary, but each precise meaning contained
>> in the cloud is just one very sharply defined sense of the same
>> nature as any other word sense in the cloud.
>>
>> 3. "Computational ontologies are artifacts, written in formal
>> logical
>> notations."
>>
>> Although I agree with that statement, I suspect that Pat C was
>> claiming that programs have some meaning other than what is
>> captured in a formal logic. It is important here to distinguish
>> a declarative statement (in a usual logic) from an imperative
>> statement, such as a command or a machine instruction. Nevertheless,
>> every machine instruction and every program written for a digital
>> computer can be completely defined in the following form:
>>
>> Preconditions, Action, Postconditions.
>>
>> The preconditions and postconditions are statements in logic,
>> which can be formally defined by a Tarski-style semantics.
>> The preconditions describe the state of the computer system
>> before the action, which may be a single machine instruction
>> or an arbitrarily large program composed of many instructions.
>> And the postconditions define the state after the action.
>>
>> The action itself has no meaning outside what can be described in
>> the logic used to state the preconditions and the postconditions.
>> The human commentary may explain what the programmer or designer
>> had intended, but if there is any discrepancy between the comments
>> and the program, there is a bug (or *issue* as MSFT calls it).
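A textbook instance of that form, for concreteness (a hypothetical
one-instruction "program", written as a Hoare triple; the variable names are
invented):

    \{\, x = n \,\} \;\; x := x + 1 \;\; \{\, x = n + 1 \,\}

The precondition and postcondition are ordinary logical statements about the
machine state, interpretable by a Tarski-style semantics; the increment
instruction between them has no meaning beyond the relation it establishes
between the two.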
>>
>> Pat C has repeatedly made the following claim to justify his search
>> for primitives:
>>
>> PC> So, if we want the meanings of terms in an ontology to remain
>> PC> stable, and **don't** want the meanings to change any time some
>> PC> remotely related type appears in a new axiom...
>>
>> PH> But we DO want this! Surely that is the very point of changing
>> PH> and adding axioms. If meanings are stable across theories, then
>> PH> what is the point of adding axioms to capture more meaning?
>>
>> I'd like to clarify the kind of change that occurs when more axioms
>> are added. Each addition of an axiom to a theory is a specialization.
>> The change it makes *narrows* the meaning of the terms in it. For
>> example, the term 'Animal' is very broad. By adding more qualifiers
>> (axioms), the meaning can be specialized to 'Dog'. Further axioms
>> can narrow it to 'Poodle'.
>>
>> Those are certainly changes, but they don't go outside the cloud
>> of meaning of the original term. In fact, every dictionary written
>> for human consumption uses such definitions.
>>
>> John
>>
>>
>
> (019)
------------------------------------------------------------
IHMC (850)434 8903 or (650)494 3973
40 South Alcaniz St. (850)202 4416 office
Pensacola (850)202 4440 fax
FL 32502 (850)291 0667 mobile
phayesAT-SIGNihmc.us http://www.ihmc.us/users/phayes (020)
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx (021)