Ali,
Thanks for your comments; I suspect they represent what some
others are also thinking.
A response to a few points that were perhaps not clear
enough:
[AH] >> However, I'm not sure I accept your
analogy that English == Common Theory. For me the analogy is more along the
lines of English == Common Logic or RDF or OWL2 and the descriptions _using_
English are the actual ontologies we're speaking of - i.e. the words used
in English, say the vocabulary V == ontology O. To me this is a more apt
analogy.
[PC] Well, as I said, the analogy may mislead. I do not think
that English is analogous to an FO – I do think that the set of concepts
represented by the **defining vocabulary** (2148 words of English) used in Longman’s
is analogous to the concept representations of an FO; because of ambiguity, Guo
estimated that the number of different senses used in the definitions was close
to 4000. English is not analogous to RDF: perhaps the **grammar** of English
is analogous, but the grammar ***plus*** the basic vocabulary of 2148 words is
what I consider analogous to the FO. But if this still seems wrong, let us
both forget it. The technology of an FO can be discussed on its own terms.
[AH] The solution your post above seems to suggest is
actually very similar to the interlingua ontology idea developed in the late
90's, though perhaps that idea was too soon given the state of ontology
development. It has since been significantly updated, altered and revived in
the form of the OOR or COLORE projects
[PC] I am aware of several projects that aim to integrate
multiple ontologies in some form, including the IEEE SUO project in which I
participated, but I am not aware of any that have adopted the tactic of agreeing
on the most primitive ontology elements and using those as a set of building blocks
with which to construct the meanings of all the other ontology elements. I
may have missed such a project, and if you have a specific reference to a
project that has adopted that tactic, I would much appreciate the pointer.
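To make the “building blocks” tactic concrete, here is a minimal sketch (in Python, with invented primitive and domain names that are not drawn from any existing FO) of what it means for domain elements to bottom out in an agreed primitive inventory:

    # Hypothetical illustration of the "building blocks" tactic: a small agreed
    # inventory of primitive FO elements, plus domain terms whose meanings are
    # constructed entirely out of those primitives. All names are invented.

    FO_PRIMITIVES = {"PhysicalObject", "Process", "isa", "hasPart",
                     "participatesIn", "hasPurpose"}

    # Each domain term is a frame whose slots and fillers use only FO
    # primitives, or terms that themselves bottom out in FO primitives.
    DOMAIN_DEFS = {
        "Engine": {
            "isa": "PhysicalObject",
            "hasPart": ["PhysicalObject"],
            "participatesIn": "Combustion",     # defined below
            "hasPurpose": "ProduceMotion",      # defined below
        },
        "Combustion": {"isa": "Process"},
        "ProduceMotion": {"isa": "Process"},
    }

    def grounded_in_fo(term, defs=DOMAIN_DEFS, prims=FO_PRIMITIVES, seen=None):
        """Check that a term's definition reduces entirely to FO primitives."""
        seen = set() if seen is None else seen
        if term in prims:
            return True
        if term in seen or term not in defs:
            return False
        seen.add(term)
        parts = list(defs[term].keys())
        for filler in defs[term].values():
            parts.extend(filler if isinstance(filler, list) else [filler])
        return all(grounded_in_fo(p, defs, prims, seen) for p in parts)

    print(grounded_in_fo("Engine"))  # True: every slot and filler reduces to primitives

The only point of the sketch is that, once the primitive inventory is fixed, checking whether a domain element is fully grounded in it is a purely mechanical operation.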
[AH] >> Moreover, as you note below, the number of
primitives seems to taper much like y = log (x). However, this doesn't mean
that those primitives are consistent with one another. And there's the
rub.
[PC] Well, that is not guaranteed, but as I mentioned, where
there appear to be logical inconsistencies, the tactic is to try to logically
represent the inconsistent theories using some common set of primitives, and
then the inconsistent theories themselves will not be part of the ontological
commitment of the FO, but will be described by the FO in some extension. We
don’t know for certain whether there will be irreconcilable differences
so large that they prevent a large community of users from agreeing on **any**
FO. But after years of inquiring, I still haven’t seen any examples of different
theories that cannot be described by some common set of elements. If such
theories exist, that may mean that there will be some ontologists that cannot
use the common FO (I mentioned that possibility). But we already know that it
is likely that for various reasons, there will be at least some groups that don’t
want to use the FO. No problem, the FO is only for those groups who consider
it *important* to interoperate accurately. All we need is a large
enough group so that third-party developers of utilities and applications will
make using the FO desirable for ever larger numbers of people. A user base
much smaller than the whole world will be quite adequate. The important thing
is to get ***some*** widely used FO so that (a) we can test its functionality;
and (b) those who want their domain ontologies to be as compatible as possible
with many other ontologies will have a well-tested way to achieve that. As of
now, we have none.
[AH] >> Thus, while we might not have all agreed on
the common set of primitives, we're slowly understanding where my primitives
agree with yours and where they disagree and in what ways. Unsurprisingly, this
is also enabling my ontology to be able to communicate effectively with your
ontology in much the way you described above.
[PC] Yes, there are other tactics to create some kind of
interoperability, but (a) as you mentioned, the accumulation of agreements
is ***very*** slow (we’ve been discussing ontology mapping and
integration for 15 years); (b) there is no guarantee that anything particularly
useful will come of that tactic either; and (c) either the mappings are
semiautomatic and extremely costly or they are automatic and very inaccurate.
My emphasis has been on **accurate** general semantic interoperability that can
support automated inference allowing computers to make mission-critical
decisions without human intervention. I have what I consider to be good
reasons to believe that this is simply **not possible** without a common
ontology. The use by all parties of a common foundation ontology, as the
inventory of basic concept representations that can be combined to create
the complex domain ontology elements, *does* provide an automated
integration mechanism: it can create, in effect, a merged common ontology from
the different communicating ontologies by sharing the local ontology element
structures that describe the communicated information but are not in the basic
FO. All ontologies that interpret the basic FO elements properly will then be
able to interpret the constructed domain elements, and to interpret them in the
same way for each communicating ontology.
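As a rough sketch of that integration mechanism (my own illustration in Python, with invented term names, and assuming that non-FO terms travel together with their FO-level constructions):

    # Hypothetical sketch: a sender transmits a statement that uses local,
    # non-FO terms, but attaches each term's construction out of shared FO
    # elements; the receiver expands the message down to FO terms it already
    # interprets. All names here are invented for illustration.

    SHARED_FO = {"PhysicalObject", "Process", "hasPart", "before"}

    def expand(term, local_defs):
        """Rewrite a term into shared FO vocabulary using attached definitions."""
        if term in SHARED_FO:
            return term
        definition = local_defs[term]          # e.g. ("Process", "hasPart", ...)
        return tuple(expand(t, local_defs) for t in definition)

    # Sender's message: a statement plus the FO-level structure of its local terms.
    message = {
        "statement": ("GearboxAssembly", "before", "EngineTest"),
        "local_defs": {
            "GearboxAssembly": ("Process", "hasPart", "PhysicalObject"),
            "EngineTest": ("Process",),
        },
    }

    # Receiver side: after expansion, every symbol is an FO element (or a
    # structure built from FO elements), so any ontology that interprets the
    # FO properly can interpret the constructed domain elements.
    expanded = tuple(expand(t, message["local_defs"]) for t in message["statement"])
    print(expanded)

The crucial assumption, of course, is that both sides interpret the shared FO elements identically; everything else is mechanical expansion.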
[AH] >> The above said, to me, a better allocation
of resources, instead of trying to achieve broad consensus from the get-go,
would be to analyze what currently exists, figure out what the primitives being
used might be, and figure out the links, kinks and winks between them - i.e. it
might be more useful to try to derive cohesion from these disparate efforts by
digging in and fleshing things out.
[PC] That is precisely what phase 1 of the project would do, by
getting many different groups to find what basic ontology they can all (or most
of them) use in common. But instead of taking another 15 or 150 years, it could
be done in a few years with a coordinated project.
We can continue to let a thousand flowers bloom on their own,
but though they may be individually pretty, they won’t talk to each
other.
Pat
Patrick Cassidy
MICRA, Inc.
908-561-3416
cell: 908-565-4053
cassidy@xxxxxxxxx
From:
ontolog-forum-bounces@xxxxxxxxxxxxxxxx
[mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Ali Hashemi
Sent: Wednesday, January 27, 2010 5:11 PM
To: [ontolog-forum]
Subject: Re: [ontolog-forum] Fw: Context in a sentence
A couple of points here, comments below.
On Wed, Jan 27, 2010 at 4:28 PM, Patrick Cassidy <pat@xxxxxxxxx> wrote:
[PC] I never said that, and I don’t
believe it either. But regardless of how one chooses to talk about the
world, any two communicating agents must talk about it **in the same
language**, or else fail to communicate accurately
....
[PC] Not necessarily, though see the next
paragraph. I am sure that different people do have different fundamental
assumptions and different beliefs, and use words in different ways, and all of
that creates a great risk of faulty communication, as one can observe in many
situations such as this forum. In fact, it is probably impossible for
people to have **exactly** the same internal states, though with effort we can
get close enough to each other for communication accurate enough for most
practical purposes (at least when they are not trying to score debating
points). But computers **can** have identical sets of theories (the
computer version of beliefs), since the computer owners are in complete control
and only have to choose to use the same set of theories in order to communicate
accurately. My point was that since we do have control over our
computers’ theories, we can get them to communicate accurately by using the
same sets of theories. That doesn’t mean that there is only **one**
true set of theories; it does mean that any group that agrees that **some**
particular set of theories is adequate to express what they want their
computers to communicate can use that set to enable accurate computer
communication. If there are some who feel that the theories are not
adequate for their purposes, they can choose not to communicate accurately with
the community that does use the common language – or make some
adjustments to get an approximate interpretation – or better yet,
try to collaborate with the others to find some set of theories that
includes their needs as well. But once there is **some** community
that uses a common foundation ontology as the basis for accurate computer communication
among useful programs, it is likely that one such foundation ontology will be
the most commonly used, and therefore will provide the greatest audience.
If a different foundation ontology is used in some specialized community, it
can become the preferred basis for communication there, but that community will
then not communicate accurately with the other, larger audience. I expect
that one common foundation ontology will eventually dominate the computer
communication media for the same reason that English dominates in international
scientific conferences – it gives the greatest value per unit effort
expended. That situation may not last forever – English may be
replaced by, say, Chinese . . . that depends on unpredictable factors.
I'm very glad you acknowledge that it is probably impossible
for people to have exactly the same internal states, let alone descriptions of
what is. As you note in the first comment above, all that is really required
for two agents to communicate is to agree on what they're communicating
about.
However, I'm not sure I accept your analogy that English ==
Common Theory. For me the analogy is more along the lines of English == Common
Logic or RDF or OWL2 and the descriptions _using_ English are the
actual ontologies we're speaking of - i.e. the words used in English, say the
vocabulary V == ontology O. To me this is a more apt analogy.
The solution your post above seems to suggest is actually
very similar to the interlingua ontology idea developed in the late 90's, though
perhaps that idea was too soon given the state of ontology development. It has
since been significantly updated, altered and revived in the form of the OOR or
COLORE projects.
As you have noted, the way for two agents to communicate
effectively is by determining where they agree and disagree on their theory
(the application of English to describe a particular domain / system, etc.).
Moreover, as you note below, the number of primitives seems
to taper much like y = log (x). However, this doesn't mean that those
primitives are consistent with one another. And there's the rub.
As it stands, we have many people who are working to develop
this interlingua; we are, in effect, de facto developing exactly the set of
primitives you speak of, except in a not very coordinated manner and without an
overarching framework. While this lack of cohesion introduces some problems, it
also means work can progress without waiting for consensus.
Coincidentally, tools are being developed, released and
implemented to address exactly those problems that arise from said lack of
cohesion - notably, efforts in semantic mapping.
Thus, while we might not have all agreed on the common set
of primitives, we're slowly understanding where my primitives agree with yours
and where they disagree and in what ways. Unsurprisingly, this is also enabling
my ontology to be able to communicate effectively with your ontology in much
the way you described above.
Alas, this is slow going, and it can sometimes be
frustrating that there is no overarching cohesion, but then we have wonderful
communities like ontolog who are linking people together and providing a
platform such as the OOR to collate, collect and hopefully, ultimately connect
all these different primitives.
The above said, to me, a better allocation of resources,
instead of trying to achieve broad consensus from the get-go, would be to
analyze what currently exists, figure out what the primitives being used might
be, and figure out the links, kinks and winks between them - i.e. it might be
more useful to try to derive cohesion from these disparate efforts by digging
in and fleshing things out. An idea I'd floated before to Nicola Guarino and
Michael Gruninger, but unfortunately haven't pursued with the requisite vigor,
is that I would love to see an issue of an ontology journal, say Applied
Ontology, devoted to cataloging who is doing what, what the major perspectives
in ontology are, and what the major contributions from various research groups
across the world are. Instead of a review paper, a review _journal_ of
where we are, who we are and what we've done.
I think such an effort would go much further in fostering the requisite
cohesion than trying to derive consensus first.
So, while I believe your proposal is valuable, I'm not sure
it'll be able to attract the requisite momentum; not to mention, there seems to
be a lot of work currently being done that already parallels what you
envision.
Patrick,
I suspect that the claim "we
need a common foundational ontology" is exactly equivalent to David's
quotation "(1) the entire meaning of a
message is self-contained in said message", since if we have a common foundational ontology we should be
able to make statements in the ontology that are true irrespective of context.
I would interpret C. S. Peirce's definition
as saying that communication happens when an agent sends symbol A and it
invokes a knowledge-based procedure leading to symbol B in a second agent, and
both A and B refer to the same (concept) C.
Caveat - I do not claim that this is
Peirce's interpretation, or even that he would agree with it, but it's my B
to his A.
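A toy illustration of that A-to-C-to-B reading (a Python sketch; the symbols and concept identifiers are invented for the example):

    # Each agent keeps its own symbol table; "communication happens" when the
    # sent symbol A and the invoked symbol B resolve to the same concept C.
    # All symbols and concept IDs below are invented.

    AGENT_1 = {"automobile": "concept:MotorCar"}    # symbol A -> concept C

    AGENT_2 = {"car": "concept:MotorCar",           # symbol B -> concept C
               "carriage": "concept:HorseDrawnVehicle"}

    def communicated(symbol_a, invoked_symbol_b):
        """True when both symbols refer to the same concept."""
        return AGENT_1.get(symbol_a) == AGENT_2.get(invoked_symbol_b)

    print(communicated("automobile", "car"))        # True  - same concept C
    print(communicated("automobile", "carriage"))   # False - different concepts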
The point is that context (whatever that is) defines the inference task in
which A is used to invoke B. Even on the Semantic Web, the fact that the
context is the Semantic Web determines particular processing protocols, which
invoke a system that understands OWL or RDF rather than one that understands
only HTML or even EDIFACT.
However, more broadly, I would reject the
idea that there is only one way to talk about the world. In this context, I
would say there are in fact two distinct types of ontology: those that talk
about the world, and those that model the world, and that these two views of
ontology are incompatible. (A foundation ontology is a model of the world.)
Perhaps, following Protégé, we could distinguish them by having as TOP
"word" and "thing".
This is not to say that I think
common ontologies are a bad idea - they are essential for engineered
applications - or rather, for applications engineered to match a particular
human or business context. However, they are not a panacea, simply because
different contexts will be understood through different ontologies.
One might propose that, because we are all
the same type of creature (human), we must therefore all use the same
mechanisms for thought, and that this must lead to the same foundational
concepts. This would imply, firstly, that the variation in humans is too small
to allow for different mechanisms for thought, and secondly, that the
mechanisms of thought are entirely conditioned by our genetic inheritance and
are not affected by environment. Both claims should be scientifically
verifiable, and indeed may already have been determined; however, this is not
my area of expertise, although I would strongly suspect both hypotheses to be
false.
So, no context-free language, no common
foundational ontologies.
Sean Barker
Bristol
From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx
[mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx]
On Behalf Of Patrick Cassidy
Sent: 26 January 2010 05:52
To: '[ontolog-forum] '
Subject: Re: [ontolog-forum] Context in a sentence
David,
>> I want something--MT?
Ontology support?--that can read Fortran, Jovial, COBOL, Java, PHP, Ruby,
C, etc. (oops... that's a computer language) documents & make (more) sense
out of said documents. These are textual artifacts (therefore "documents"?)
which may or may not be written by humans, they're decidedly NOT edited for
readability, and they are really not intended for human consumption.
I believe that current ontology
technology, or extensions of it (to include procedural attachments), has
the technical capability to do such things. But non-trivial applications
will be quite labor-intensive to implement.
As I see it, ontology technology is
still in its infancy – or perhaps still embryonic. I have had
great difficulty finding any publicly inspectable (open source) applications
that go much beyond an advanced version of database information retrieval
– adding in a little logical inference, but not using that inference to
do anything conspicuously more impressive than RDBs themselves.
CYC suggests it has built applications that do that, but we do not have them
available for public testing – and much of CYC is still proprietary, a
big turn-off for those who need a language that can be used freely.
John Sowa has told us that he uses a
combination of techniques to solve knotty problems efficiently. I
believe that is what will be very effective in general, but for that to work
outside the confines of a single group – i.e. to enable multiple
separately developed agents to cooperate in solving a problem – they will also
need a common language to accurately communicate information.
The problem, as I perceive it, is that,
although up to now there has been great progress in understanding the science
(mathematical properties) of inference – for which we can be grateful to
the mathematicians and logicians - understanding inference only provides
a **grammar** and a minimal basic **semantics** for a language that
computers can understand. What we have very little agreement on is the
**vocabulary**, without which there is no useful language. For computers
to properly interpret each other’s data, it is necessary to have a common
vocabulary – or vocabularies that can be **accurately** translated.
Such a translation mechanism would be possible if a common foundation ontology
were adopted, which would have representations of all the fundamental concepts
necessary to logically describe the domain concepts of the ontologies in
programs that need to communicate data. It is a measure of the
pre-scientific nature of the field that there is actually even disagreement
about the need for a common foundation ontology. To me it is blindingly
obvious – one cannot communicate without a common language (including
vocabulary); there are no exceptions. But most efforts at interoperability
among separately developed ontologies currently focus on developing mappings in
some automated manner – which, as any inspection immediately reveals, cannot
be done with enough accuracy to allow machines to make mission-critical
decisions based on such mappings. Accurate mappings are
possible via a common foundation ontology. But for reasons that I believe
are not based on relevant technical considerations, there is little enthusiasm
for developing such an ontology at present. Past efforts have failed
because they depended on the voluntary commitment of a great deal of time from
participants in order to find common ground among a large enough user
community. What will work is for a large development community to be **paid**
to build and test a common foundation ontology and demonstrate its capability
for broad general semantic interoperability. I am certain that such an
ontology will eventually be developed, because the need for it and the
benefits of it are so compelling. The only question for me is how much
time and money will be wasted before such a widely used foundation ontology is
developed and tested in multiple applications – and who will pay for it.
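As an illustration only (a Python sketch; the vocabularies and primitive names below are invented), the kind of accurate mapping I have in mind could be derived mechanically once each domain term carries a definition expressed in the common foundation ontology’s primitives:

    # Hypothetical sketch: two separately developed vocabularies, each of whose
    # terms carries a definition expressed in shared FO primitives; terms are
    # mapped when their FO-level definitions match. All names are invented.

    ONTOLOGY_A = {"Automobile": frozenset({"PhysicalObject", "SelfPropelled",
                                           "CarriesPassengers"})}

    ONTOLOGY_B = {"MotorCar":   frozenset({"PhysicalObject", "SelfPropelled",
                                           "CarriesPassengers"}),
                  "Lorry":      frozenset({"PhysicalObject", "SelfPropelled",
                                           "CarriesFreight"})}

    def derive_mappings(ont_a, ont_b):
        """Pair up terms whose FO-level definitions are identical."""
        return [(a, b) for a, defs_a in ont_a.items()
                       for b, defs_b in ont_b.items() if defs_a == defs_b]

    print(derive_mappings(ONTOLOGY_A, ONTOLOGY_B))  # [('Automobile', 'MotorCar')]

Real definitions would of course be structured axioms rather than flat sets of features, but the comparison step stays mechanical as long as both sides build from the same primitives.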
So, I believe that current ontology
technology provides the basis to tackle the problems you cite, but I
don’t know of any off-the-shelf programs that can do that now.
Perhaps someone has developed one?
Pat
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx