I will reply to one point that Ali Hashemi made that illustrates
a profound misunderstanding, and then let him respond (hopefully).
[AH] > The problem that you can't get around, with or
without an FO, is generating these axiom-level mappings between ontologies. My
argument thus far is that people realize that it is the generating of these
mappings that is more important than agreeing on possibly vacuous, super …
Ye gods! That is precisely the problem that the FO
addresses. I am sure I have said on a number of occasions that one of the
important functions of the FO is precisely to have sufficient primitives and relations
so that all other ontologies, including the various “Upper Ontologies”
that have already been developed can be **logically** specified in terms of the
FO elements, so that any information in any one of these can be translated into
the form used by the others. I have emphasized over and over that an FO
will have carefully specified meanings, and I can’t begin to imagine how
anyone could get the impression that these are just labels. Look again at
several posts from the thread – perhaps the terminology in a clause of
one sentence might be ambiguous, but not the whole discussion.
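The hub role described above can be sketched in a few lines. This is a toy illustration, not anything from COSMO or any real FO: the `fo:` primitives and the ontology terms are invented for the example. The idea is that if each ontology's terms are logically defined in terms of shared FO primitives, a term in one ontology can be translated into another by matching FO-level definitions rather than by a direct pairwise mapping.

```python
# Toy sketch of translation through a common FO (all names hypothetical).
# Each ontology maps its own labels to definitions built from FO primitives.
ONTOLOGY_A = {"HumanBeing": frozenset({"fo:Agent", "fo:BiologicalEntity"})}
ONTOLOGY_B = {"Person": frozenset({"fo:Agent", "fo:BiologicalEntity"})}


def translate(term, source, target):
    """Return a target-ontology term whose FO-level definition matches
    the FO-level definition of the source-ontology term, if any."""
    definition = source[term]
    for label, target_definition in target.items():
        if target_definition == definition:
            return label
    return None  # no target term shares this FO definition


print(translate("HumanBeing", ONTOLOGY_A, ONTOLOGY_B))  # -> Person
```

In a real system the definitions would be axioms, not flat sets of labels, and matching would require logical inference rather than set equality; the sketch only shows why a shared stock of primitives makes the translations possible at all.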
[AH] > Let's assume you develop a single FO. By virtue of
its generality, the upper concepts will likely only consist of labels …
A double misunderstanding here: (1) there is a great
deal more than labels; there would be axioms and even procedural code
sufficient to specify the meanings precisely; (2) no one group can develop an
FO – its purpose is interoperability, and it must therefore be developed
by a large number of independent groups who can verify its suitability for both
their own local purposes and for accurate exchange of information among them. The
COSMO is not proposed as, or likely to be adopted as, an FO; it is a test
ontology that I will be using to demonstrate some of the basic principles of an FO.
[AH] > The point is simply that people realize that it is
useful to have interlingua ontologies, so instead of direct mapping, they go
through a referent ontology. The only difference is that there need not be a
single interlingua. You could have one based on DOLCE, another on SUMO and
another on CYC, another on PSL or what have you. If you specify the mappings
b/w each of these as well, then you have in effect global referent ontologies.
It's really about semantic mappings more than anything...
Yes, and the most effective mechanism to “map”
between DOLCE, SUMO, and CYC, etc. (and the domain ontologies dependent on
them) would be to have another FO that has enough of the primitive
conceptual elements to do the translations among them. Mappings might be
made 1-to-1 among these ontologies, but as new basic ontologies are developed,
the number of required pairwise mappings grows as N^2 (N(N-1)/2 for N
ontologies), and each mapping would require an effort comparable to developing
a new ontology – maybe more. It is easier
to have a common reference ontology (the FO) and map each such ontology to
that, once. You seem to recognize the virtue of having a common “interlingua”
but for some reason don’t recognize that the FO is just the most general
case, useful for a wider range of topics than those other “interlingua” ontologies.
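The arithmetic behind the paragraph above can be made concrete. A minimal sketch, assuming N independently developed ontologies: direct pairwise mappings grow quadratically, while mapping each ontology once to a common reference FO grows only linearly.

```python
def pairwise_mappings(n: int) -> int:
    """Direct ontology-to-ontology mappings: one per unordered pair,
    i.e. N(N-1)/2, which grows as N^2."""
    return n * (n - 1) // 2


def hub_mappings(n: int) -> int:
    """Mappings when each ontology is mapped once to a common FO."""
    return n


# Compare the two strategies as the number of ontologies grows.
for n in (4, 10, 50):
    print(n, pairwise_mappings(n), hub_mappings(n))
```

At 50 ontologies the pairwise approach needs 1,225 mappings against 50 for the hub approach, which is the core of the argument for mapping each ontology to the FO once.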
But the FO project I have suggested is aimed not just at
producing an “interlingua” ontology, but **demonstrating** with
open-source public programs how it can be used and how it can enable
interoperability among independently developed applications or databases.
Can you reference any examples from existing projects showing that domain
ontologies, applications, or databases have been able to interoperate in
a practical (non-toy) application?
[AH] > This doesn't require $30 million. It doesn't
require consensus among disparate fields or people. The first part can likely
be done within a year, the mapping part, perhaps a few more.
I seem to recall ontology-ontology mapping work (without an
FO or equivalent) being done for over 10 years now. What results do you
consider impressive enough to suggest that this tactic will pay off?
Oh, and meanwhile, in those 10 years US industry has wasted
over 1 trillion dollars for lack of semantic interoperability. That’s
a cost too, even if no one agency has to pay it. I think that the problem
is sufficiently urgent that every plausible tactic should be tried.
And finally:
[AH] > 4) Instead of trying to develop a single FO,
resources would be better spent on focusing the mapping work.
The FO project **is** mapping work, but more general and
therefore more useful than any more narrowly focused mapping. And I can’t
imagine why both approaches couldn’t be tried at the same time.
There is enough money; it is just a matter of recognizing the importance of the problem.