Ed, (01)
I strongly believe in giving credit where credit is due. (02)
So I would cite Adolf Lindenbaum, who nearly 80 years ago observed
that all the theories that can be stated in a given logic naturally
fall into a generalization hierarchy. The usual name for such a
hierarchy is 'Lindenbaum lattice'. (03)
JFS> That was the point of my note on methodology, which emphasizes
> the distinctions as more fundamental than the particular choice
> of categories. (04)
EB> Now that we are all in violent agreement, I would point out
> that Amanda said it first: (05)
AV> The best way to develop an upper ontology may be to let it emerge.
> Pursue applied ontology projects as described above, and bring in
> the upper (and upper-middle) as they are needed. Bring in higher
> level concepts as you need them to connect concepts across domains,
> to model the logical behavior and other characteristics of the
> domain-specific concepts, to avoid redundant and oddly-placed
> assertions on domain concepts that can accurately be placed on
> concepts shared across the domains. (06)
Yes, indeed. Amanda has stated the point very nicely. (07)
But no matter who gets credit for it, I am delighted to hear that you
approve of that approach. To add a bit more detail to the proposal,
I am including excerpts from some emails that I sent to ontolog-forum
earlier this year. (They're not in chronological order because I
moved some of the more detailed theoretical discussions to the end.) (08)
John (09)
-------- Original Message --------
Subject: An Ultra High Level Ontology
Date: 10 Feb 2009 (010)
My basic proposal, which I have been repeating in different ways
for many years, is extremely simple: (011)
1. Set up a registry for ontologies with minimal requirements
for contributions and some basic reviewing for competence. (012)
2. Emphasize that ontologies should be constructed from modules,
and multiple use and reuse of other modules in the registry
should be strongly encouraged. (013)
3. The sequence of uses and reuses would automatically create
a generalization hierarchy of ontologies -- i.e., if ontology
X incorporates the module Y, then Y is a generalization of X. (014)
4. Any collection of modules that are frequently used and reused
would be high up in the generalization hierarchy, and they
would also be prime candidates for being "canonized" as
the recommended subset for further use and reuse. (015)
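As a rough sketch of how steps 1 to 3 could hang together (all names here are hypothetical, and this is only a toy data structure, not a proposed implementation): an ontology is a set of axioms built partly from reusable modules, every incorporated module is automatically a generalization, and reuse counts single out candidates for the recommended core.

```python
# Toy registry: incorporating module Y makes Y a generalization of X,
# and frequently reused modules surface as candidates for "canonization".

class Registry:
    def __init__(self):
        self.modules = {}   # name -> frozenset of axiom strings
        self.uses = {}      # name -> set of module names it incorporates

    def register(self, name, axioms, incorporates=()):
        # The axioms of an ontology include those of every reused module.
        full = set(axioms)
        for m in incorporates:
            full |= self.modules[m]
        self.modules[name] = frozenset(full)
        self.uses[name] = set(incorporates)

    def generalizations(self, name):
        # Every incorporated module is (transitively) a generalization.
        result, stack = set(), list(self.uses[name])
        while stack:
            m = stack.pop()
            if m not in result:
                result.add(m)
                stack.extend(self.uses[m])
        return result

    def reuse_counts(self):
        # Frequently reused modules are prime candidates for the core.
        counts = {m: 0 for m in self.modules}
        for used in self.uses.values():
            for m in used:
                counts[m] += 1
        return counts
```

For example, if both a chemistry and a biology ontology incorporate a "Time" module, that module sits above both in the hierarchy and its reuse count recommends it for the shared subset.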
This is very simple. It doesn't require major funding to get
started. If a significant number of good modules are widely
used and reused, they would become a de facto standard -- and
a prime candidate for someday becoming a de jure standard. (016)
-------- Original Message --------
Subject: Your Ontology Summit discussion list
Date: 22 Jan 2009 (017)
CW> In the W3C, I used the OWL-Time work http://www.w3.org/TR/owl-time
> as an example of this, and its success has garnered some support for
> the idea of Core Ontologies. (018)
Times, dates, and the common ways of expressing them are essential for
a great majority of practical applications, and they are independent
of the debates about 3D vs 4D ontologies. (019)
Every branch of science, engineering, and business has standards bodies
that gather such data and develop a consensus on the names and basic
relationships. All of them should be in the registry/repository. (020)
The periodic table of the elements, for example, should be one of the
ontologies in the repository. It can be used by chemists, physicists,
pharmacists, etc., independent of any theory about the fundamental
nature of atomic particles. (021)
A registry/repository that contains a large collection of such
modules/theories/ontologies (or whatever anyone might call them)
would be immensely valuable. I strongly recommend that we *begin*
with assembling and organizing such modules. This might not be
as theoretically exciting as debating the upper level ontologies,
but it would provide a valuable service with immediate benefits. (022)
-------- Original Message --------
Subject: standard ontology
Date: 11 Feb 2009 (023)
To illustrate that procedure, I'll pick some random names for
hypothetical ontologists. Let's suppose that somebody, Azamat,
contributes ontology A to the registry, and Ian contributes
ontology I. Then we have a hierarchy with the empty ontology E
at the top and with two branches from E to A and from E to I. (024)
Then suppose that another ontologist, Xavier, examines A and I,
likes some aspects of each, but finds other parts of each that
are incompatible. So Xavier extracts the axioms he wants from
A to form a new ontology AX, which is a generalization of A
(because the axioms of AX are a proper subset of the axioms of A).
the new ontology AX resides along a branch from E to AX to A.
Then Xavier extracts some axioms from I to form IX, which is
a generalization of I along the branch from E to IX to I. (025)
Finally, Xavier checks whether the conjunction of AX&IX is
consistent. To do that, he tests the axioms against his
favorite domain D to check whether all the axioms of AX&IX
are true of D. If so, AX&IX has at least one model and must
be consistent. Then Xavier registers all three ontologies
AX, IX, and AX&IX in the registry together with the metadata
about how they were constructed, tested, and used. (026)
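Xavier's procedure can be rendered in miniature (the axioms and the domain here are illustrative toys, not a real ontology): treat each axiom as a predicate over a candidate domain D; extracting a subset of axioms yields a generalization, and if every axiom of AX&IX is true of D, then AX&IX has a model and must be consistent.

```python
# Axioms as (label, predicate) pairs over a domain given as a list.
ax1 = ("every element is positive", lambda D: all(x > 0 for x in D))
ax2 = ("some element exceeds 10",   lambda D: any(x > 10 for x in D))
ax3 = ("every element is negative", lambda D: all(x < 0 for x in D))

A  = {ax1, ax2, ax3}   # Azamat's ontology (ax1 and ax3 clash on any
                       # nonempty domain)
AX = {ax1, ax2}        # Xavier keeps a proper subset of A's axioms,
                       # so AX is a generalization of A
IX = {ax2}             # likewise extracted from some ontology I

def holds_in(theory, D):
    """True if every axiom of the theory is true of the domain D."""
    return all(pred(D) for _, pred in theory)

AXIX = AX | IX         # the conjunction AX&IX is just the union
D = [3, 7, 42]         # Xavier's favorite test domain

# If AX&IX holds in D, it has at least one model, hence is consistent.
consistent = holds_in(AXIX, D)
```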
Later, another ontologist, Yolanda, browses through the registry
and chooses AX&IX for her project. She decides to add more
axioms Y to form a new ontology AX&IX&Y, which she registers. (027)
Then Zachary decides that AX&IX is useful for his project, but
he needs to add some axioms that are inconsistent with Y. So
he forms an ontology AX&IX&Z, which he puts in the registry. (028)
Another ontologist, Winnie, studies the additions Y and Z
and discovers that they have a useful common generalization,
which she calls W. So Winnie registers AX&IX&W as a new
specialization of AX&IX and a common generalization of
AX&IX&Y and AX&IX&Z. (029)
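Winnie's step can be illustrated with the same toy style (the axioms below are invented for the example): Y and Z contradict each other, but both imply a weaker axiom W, so AX&IX&W sits between AX&IX and each of AX&IX&Y and AX&IX&Z in the hierarchy.

```python
# Additions as predicates over a domain given as a list of numbers.
Y = lambda D: all(x > 10 for x in D)        # Yolanda's addition
Z = lambda D: all(x < -10 for x in D)       # Zachary's (clashes with Y)
W = lambda D: all(abs(x) > 10 for x in D)   # implied by Y and also by Z

# Every domain satisfying Y (or Z) also satisfies W, so W is a
# common generalization of the Y-extension and the Z-extension.
D_y, D_z = [11, 42], [-20, -99]
assert Y(D_y) and W(D_y)
assert Z(D_z) and W(D_z)
```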
The method of registration and revision has proved to be very
effective for the open-source software community. The major
addition for ontologies is the requirement that the metadata
explicitly show the complete path of generalizations and
specializations that were made to derive the ontologies. (030)
The generalization hierarchy of ontologies is nothing more
nor less than a record of all the derivations made during
the development stages *PLUS* any additional observations
(such as Winnie's discovery of a common generalization). (031)
-------- Original Message --------
Subject: Lattice of theories
Date: 14 Jan 2009 (032)
The lattice is a purely theoretical structure that embodies
all possible generalization and specialization relations among
theories. Every implementation of any special case is an
implementation of that theory. (033)
> I am not convinced that the lattice of all possible theories
> is the most efficient solution to the problem. (034)
That is like saying that you don't like integers because there
are infinitely many of them and some functions over the integers
are difficult to compute. (035)
The fact that the theory of lattices or the theory of integers
embodies a very wide range of useful relationships is good.
The fact that some relations may be hard to compute is not an
argument against the theory. If you don't need them, you don't
have to compute them. If you do need them, the lattice is not
a hindrance, and it can be a help. (036)
> I am concerned, for example, about the relation of the Cyc and
> SUMO and BFO and DOLCE. I don't think that any one of the
> relations applies to any two of those, as whole theories. (037)
Of course it applies. It says that they are cousins, not parents
or children of one another. But the lattice also shows how to find
common generalizations: (038)
1. If you can find any subset of axioms that is common to all three
of them, it is automatically the axiomatization of a theory that
is a common generalization (or "core") of all three. (039)
2. If you can find axioms common to two out of the three, it defines
a common generalization of those two. (040)
3. If you can't find any common axioms (or can't find all you'd
like to find), you might find some set of simpler axioms that
imply different axioms in each of the three. That set of
simpler axioms is also a common generalization. (041)
4. Any core you propose is guaranteed to be a common generalization
of any theory derived by adding axioms to that core. (042)
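Cases #1 and #2 can be shown in miniature with axioms as plain strings (the axiom sets below are invented stand-ins, not the actual content of Cyc, SUMO, or BFO): any axioms shared by all the theories form a common generalization, or "core", of them.

```python
# Hypothetical axiom sets, illustrative only.
CYC  = {"time is ordered", "events have participants", "cyc-specific"}
SUMO = {"time is ordered", "events have participants", "sumo-specific"}
BFO  = {"time is ordered", "bfo-specific"}

core_all = CYC & SUMO & BFO   # case 1: common to all three
core_two = CYC & SUMO         # case 2: common to two of the three

# core_all is a subset of each theory, so each theory is one of its
# specializations -- exactly the guarantee stated in case 4.
assert all(core_all <= t for t in (CYC, SUMO, BFO))
```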
This is an illustration of how the theory shows you how to think about
the problem. Any core you propose is going to belong to cases #1, #2,
#3, #4 above or some variation of them. Some of the common axioms may
be easier to find than others, but the methods of testing them to see
whether they are indeed common are all based on the relationships
embodied in the lattice. (043)
All of those techniques plus many others are implementations of that
theory. Some of them may be easier to implement than others. But
the lattice displays all the possible relationships among theories.
Any implementation of any subset of those relationships counts as
an implementation of the lattice. (044)
If you ignore the lattice, that is like playing with integers
without any theory about how they are related to one another. (045)
-------- Original Message --------
Subject: Lattice of theories
Date: 17 Jan 2009 (046)
See Figure 4 of the following paper: (047)
http://www.jfsowa.com/pubs/dynonto.htm
A Dynamic Theory of Ontology (048)
That diagram shows an excerpt from the lattice of theories with four
operators for moving around the lattice. The first three operators
are the AGM operators for belief revision and the fourth supports
metaphors: (049)
1. Contraction: Delete axioms to move to a more general theory. (050)
2. Expansion: Add axioms to move to a more specialized theory. (051)
3. Revision: Perform contraction followed by expansion to move
to a theory that is a sibling or a cousin of the previous one. (052)
4. Analogy: Relabel one or more names of types, relations, or
individuals to form an isomorphic theory in some other branch
of the lattice. (Mathematicians usually consider isomorphic
theories to be identical, but for engineering applications
it's important to distinguish theories about different, but
similar physical systems.) (053)
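The four operators can be sketched over theories-as-axiom-sets (a simplification: the AGM literature operates on deductively closed theories, and all the example axioms below are invented):

```python
def contraction(theory, axioms):
    # Delete axioms: move to a more general theory.
    return theory - axioms

def expansion(theory, axioms):
    # Add axioms: move to a more specialized theory.
    return theory | axioms

def revision(theory, remove, add):
    # Contraction followed by expansion: a sibling or cousin theory.
    return expansion(contraction(theory, remove), add)

def analogy(theory, renaming):
    # Relabel names to form an isomorphic theory in another branch.
    def rename(axiom):
        for old, new in renaming.items():
            axiom = axiom.replace(old, new)
        return axiom
    return {rename(a) for a in theory}

three_d = {"object persists through time", "object has spatial parts"}
sibling = revision(three_d, {"object persists through time"},
                   {"object has temporal parts"})
circuit = analogy({"current flows through wire"},
                  {"wire": "pipe", "current": "water"})
```

The `analogy` example is the engineering point in item 4: the hydraulic theory is isomorphic to the electrical one, yet worth keeping distinct because it describes a different physical system.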
For a finite repository, I suggest the term 'hierarchy of theories'
for any subset that may be documented in the repository. Whether
and how the files that store those theories are linked, combined,
or moved is an implementation issue. It's important for practical
purposes, but it doesn't affect the theoretical issues. (054)
AH> The ontologies can be equivalent, consistent, inconsistent,
> one contained in another, or disjoint. (Some of the preceding
> interact with one another... i.e. disjoint ontologies are
> trivially consistent). These are what i mean by links. So you
> can say that O1 is consistent with O2. (055)
All those operations and the interactions among them are very clear
in terms of the lattice of theories: (056)
1. Two theories are equivalent if they correspond to the same node
in the lattice. The conditions for equivalence can be stated
in proof-theoretic terms (same deductive closure) or in model-
theoretic terms (true in exactly the same models). For Common
Logic the proof-theoretic and model-theoretic criteria determine
exactly the same node in the lattice. (057)
2. Two theories are inconsistent if their only common specialization
is the absurd theory at the bottom of the lattice (i.e., the
theory in which everything is provable -- its deductive closure
is every syntactically well-formed statement in the given logic). (058)
3. Two theories are consistent if they have a common specialization
that is not the bottom of the lattice. (059)
4. I'm not sure what you mean by 'disjoint', but the simplest
criterion for two theories to be "trivially consistent" is
to have no common names of types, relations, or individuals.
No axiom of one could contradict any axiom or conjunction of
axioms in the other. Therefore, their common specialization
could be determined by taking the union of the axioms of each. (060)
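These criteria can be checked by brute force for a propositional toy (the theories below are invented two-atom examples): a pair of theories is consistent iff their union has at least one model, i.e. iff their common specialization is not the absurd theory at the bottom.

```python
from itertools import product

def models(theory, atoms):
    """Yield every truth assignment that satisfies all the axioms."""
    for values in product([False, True], repeat=len(atoms)):
        env = dict(zip(atoms, values))
        if all(axiom(env) for axiom in theory):
            yield env

def consistent(t1, t2, atoms):
    # Common specialization above the bottom <=> the union has a model.
    return any(True for _ in models(t1 | t2, atoms))

atoms = ["p", "q"]
O1 = {lambda e: e["p"]}        # asserts p
O2 = {lambda e: e["q"]}        # asserts q (no shared names with O1)
O3 = {lambda e: not e["p"]}    # asserts not-p

consistent(O1, O2, atoms)   # True: disjoint vocabularies, union has a model
consistent(O1, O3, atoms)   # False: only common specialization is absurd
```

O1 and O2 illustrate the "trivially consistent" case: since they share no names, no axiom of one can contradict the other, and the union of their axioms is their common specialization.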
Whether you implement these operations by manipulating pointers
(URIs) or by physically moving or copying the axioms is irrelevant
to the formalism or its implications. (061)
AH> They [3D and 4D theories] are so similar in fact that it is
> hard for many people to even perceive the differences between
> them. Intuitively they are indistinguishable. But therein lies
> the problem: because our reasoning engines have no intuition,
> and formally they (the theories) are incompatible. Intuitive
> similarity means very little when it comes to ontologies. (062)
Again, the lattice of all possible theories provides a clear
and precise way of analyzing the theories and determining how
to represent the similarities. You can start with physical
observations of times and positions, which are recorded in
the same way in both. Since all those observation statements
are the same for both, they would all be present in any common
generalization. Whether any other generalizations would be
common to both would depend on the details of the axioms of
the 3D and 4D systems. (063)
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx (064)
