>....this is not strictly a personal
>exchange, and should not be, rather you and I are playing roles in
>this debate. The virtual amphitheater. Our statements are bold, and
>call for bold replies. (01)
Ok, as you wish. I know some folk are reluctant
to engage in such debates in, er, public; but I
am game if you are. And I agree that we are both
playing roles here, and the discussion may be
useful. (02)
>I have actually seen your profile the first day that you sent a reply
>to one of my posts, and I have seen that you are a senior researcher
>with qualifications and experience in maths and AI (03)
I wouldn't claim qualifications in maths: I have
a first degree and a kind of lingering respect
for mathematics, but am not really professionally
qualified. But OK, I represent the math-oriented
side of many possible debates, let us agree. I'm
what Roger Schank famously called a Neat as
opposed to a Scruffy. (04)
>- both of which I
>have limited knowledge about - When I said 'I don't know who Pat is' -
>I was referring to the fact that I do not see how maths and AI
>expertise qualify someone to rule on everything that there is in the
>world of knowledge today, (05)
I entirely agree. But who regards what I say or
write as a 'ruling'? You pointed the list at your
preliminary draft of points, with warnings about
its being preliminary, etc., and (as I understood
your email) asked for comments. I commented,
viewing the matter of course from my perspective
and background (how could I do otherwise?). You
disagreed. Fine, let us have a debate about the
substantive points, then. But it seems
counterproductive to make a disagreement turn
into a kind of meta-debate about who is entitled
to disagree, or what our various qualifications
are for disagreeing or not. (06)
>It may sound like a joke but it's true, I
>don't think that the fact that you are a clever and nice guy - as you
>certainly are - means that you understand everything about ontology
>and semantics. (07)
Well, not *everything*. I hope I didn't ever say that. (08)
But more usefully: there may be a disconnect
lurking here about what "ontology" and
"semantics" mean. Indeed, I do presume that the
word "ontology" as used in these discussions does
in fact have a reasonably precise meaning: it
identifies a field (which, like all research
fields, is constantly changing and adapting, of
course: it's a moving referential target, as it
were) which is concerned with the representation
of conceptual knowledge in ways that make it
amenable to machine processing of some
'nontrivial' kind (excluding, therefore, such
ontology-challenged mechanical processes as
simply sorting or cataloging databases, for
example.) This is a reasonably well-defined
technical field, to repeat; it is not a brand-new
ambition, or a distant goal. And like all such
fields, it has a body of professional
expertise and background methodological
assumptions with which its practitioners are
expected to be acquainted; and these do in fact include such
topics as formal semantics (that is, 'logical'
semantic theories such as model theory), issues
arising in mechanical reasoning, and parts of
philosophical logic and philosophy of language
(and others, of course, such as software design).
Not everyone is equally qualified or competent in
all these areas, which is why we tend to need
working groups, but I think there is a general
mutual recognition that these fields are all
relevant to ontology design and engineering. (In
fact, many people in the field have expertise
crossing these boundaries. Bill Andersen is a
philosopher and ex-Intelligence operative who
started a successful software ontology company;
Chris Menzel is a professor of philosophy and
computer science, who is an expert on philosophy
of logic and language and a Unix maven, and there
are many other examples.) (09)
Now, one of the wonderful things about working in
a cutting-edge area like this is that one finds
people from completely different backgrounds who
bring a completely fresh point of view. Being on
a few W3C working groups has made me have to
learn new stuff faster than at any time since I
was in graduate school, which is refreshing at my
age. So by all means, if you feel that you have a
new perspective on all this stuff from an
entirely different point of view, and especially
a different *disciplinary* point of view (social
science? ethnomethodology? some part of
linguistics?) then please don't take my comments
or their tone as anything other than an opening
move in a cross-disciplinary bout of (hopefully)
mutual education. (010)
>I would actually quite argue the contrary, that set
>preconceived knowledge in one particular
>area of science sometimes restricts our perspective. (that's what I
>mean by narrow) (011)
Oh, no doubt about that, of course. But the other
side of that coin is that a detailed acquaintance
with the state of some technical art can
sometimes give one a more realistic sense of what
is technically achievable and what is not, if
that field is central to the goal in question. (012)
>I am definitely learning things on this list, although it is proving a
>very intensive task and I may have to switch to digest mode. (013)
I know the feeling :-) (014)
> I just
>fear that people who have so much knowledge in one field, and rely
>heavily on it and base the rest of their reasoning solely on their
>perspective inevitably find it impossible to see reality detached from
>their opinions. (015)
I think this applies to us all, regrettable
though it may be. (I take it that, like the rest
of us, you also do not claim to have a direct
route to Reality.) (016)
>But reality rarely corresponds to an opinion.
>
>Let me get back to (some of) your points below, with additional
>references - I will then hurry back to my deadlines
>
>first - my use of the word 'ontology'
>
>I don't consider OWL as the archetype of ontology. Ontology is not just OWL (017)
I couldn't agree more. (In fact, many on this
list will be chortling at the idea of my treating
OWL as an archetype.) I used OWL in my response
simply as a convenient example of current state
of the ontology art, as it were. (018)
>I refer to ontology as conceptual frameworks, Gruber's first definition. (019)
Gruber was (and still is) talking about
formalized conceptual frameworks, and he was at
the time talking specifically about such
frameworks in the context of knowledge
representation in AI (though the scope has since
broadened, the technology and theory used in the
field have all been inherited from AI/KR work.) (020)
>From http://www-ksl.stanford.edu/kst/what-is-an-ontology.html (emphasis added) (021)
"A body of FORMALLY REPRESENTED knowledge is
based on a conceptualization: the objects,
concepts, and other entities that are assumed to
exist in some area of interest and the
relationships that hold among them (Genesereth &
Nilsson, 1987) . A conceptualization is an
abstract, simplified view of the world that we
wish to represent for some purpose. Every
knowledge base, knowledge-based system, or
knowledge-level agent is committed to some
conceptualization, explicitly or implicitly. (022)
An ontology is an EXPLICIT SPECIFICATION of a
conceptualization. ... For AI systems, what
"exists" is that which can be represented. When
the knowledge of a domain is represented in a
declarative formalism, the set of objects that
can be represented is called the universe of
discourse. This set of objects, and the
describable relationships among them, are
reflected in the representational vocabulary with
which a knowledge-based program represents
knowledge. Thus, in the context of AI, we can
describe the ontology of a program by defining a
set of representational terms. In such an
ontology, definitions associate the names of
entities in the universe of discourse (e.g.,
classes, relations, functions, or other objects)
with human-readable text describing what the
names mean, AND FORMAL AXIOMS that constrain the
interpretation and well-formed use of these
terms. Formally, AN ONTOLOGY IS THE STATEMENT OF
A LOGICAL THEORY." (023)
>An ontology is first formed at conceptual level (design); being
>implemented is the last step. (024)
What do you mean by "implemented"? A
formalization (eg, in OWL, or IKL if you prefer a
more exotic notation) is not an implementation of
anything in the usual sense. Ontologies are not
software! (BTW, I think that this is not widely
recognized enough. Thinking of ontologies as
software is one of the recurrent motivations for
applying software engineering principles to
ontology design, which may be a mistake. Or not,
of course.) (025)
>So I refer to ontology as 'conceptual
>and semantic references' (a set of terms, not just words) (026)
Clearly, any ontology does define a set of terms,
yes. But a set of terms alone does not constitute
an ontology, except in a very trivial sense.
[Later: I see that with your nonstandard usage of
"term", in fact it does.] (027)
BTW - this might be central - what do you mean here by a "semantic reference"? (028)
>PH. Overall, as stated, it is an odd
>> combination of asking for technology which is
>> already routinely deployed, and asking for the
>> impossible.
>
>routinely deployed does not mean 'good', satisfactory, or effective. (029)
Indeed not. But my point was not that these
aspects were desirable: that was your decision,
since you listed them as requirements. It was
that they already exist in widely used present
technology, so these requirements are already
satisfied. [But, apparently I misunderstood some
of what you meant, see later.] (030)
>Impossible? Is this the same 'impossible' that people said when people
>wanted to cross the sky, or go to the moon, or develop the internet? (031)
No. It is a more informed 'impossible' that
people say when they work closely with people
whose lifetime research ambition is to achieve
this goal, and have a keen sense of how hard it
is, and how far beyond the present state of the
art; and have taken part in DARPA-funded
competitive research tests in which the best natural
language question-answering systems have failed
miserably when tested against other kinds of
query interfaces (such as GUIs based on
conceptual maps). Now, of course, many of the
best experts at the time thought that flight was
impossible, too, and maybe the entire AI/NL field
is stuck in a mental box. But there is a
recurring history here of optimism being replaced
again and again by a more cautious assessment of
progress, so I want to see something more
positive than optimistic rhetoric, before
agreeing to accept success in this area as a
*requirement*. (032)
>I
>would be very careful in defining 'possible' and 'impossible' these
>days (033)
Perhaps I should clarify that I did not mean
impossible in principle: only well beyond the
state of the technological art, so best not
considered a requirement. (034)
> > 1. I really don't think it makes sense to ask for
>>
>> "a ... set of agreed terms... that embodies and
>> represents and synthesizes all available, valid
>> knowledge that is deemed to pertain to a given
>> domain"
>>
>> It is the 'all's here that make this impossible.
>
>for you, Pat. For me, it is possible. I say 'all available and valid
>knowledge'.
>I don't have a problem compiling such directories. (035)
Then I guess I will just wish you luck, and let
us talk again when you have succeeded. I know you
are concerned with disaster relief. I would have
guessed that might be an area where to claim to
have ALL relevant knowledge might be a particular
overstatement. But I bow to your expertise in
this matter. (036)
> > One can never get ALL the available, valid
>> knowledge about anything. One can only hope to
>> get a workable amount, and attempt to keep it
>> unpolluted by falsehood and reasonably up to date
>> and so forth.
>
>I agree we may have to restrict the set, but I insist that we have the
>capabilities today to be ambitious and the only limit to such ambition
>is the limit of our vision. How far can you see Pat?
>okay 'all available'= 'all available knowledge that can be put
>together within given constraints' (037)
Well, that rather begs the question. Of course we
can get all that we can get. Yes, let us agree on
that. (038)
> >
>> Second, though, what does it mean to say that a
>> set of terms - what I would call a vocabulary -
>> can "embody" knowledge?
>Not just vocabulary - terms as 'conditions' , things which are 'true' (039)
Terms alone do not assert or claim anything about
the world: they are just linguistic labels which
refer to things. To assert anything, you have to
say something ABOUT the referents of the terms.
[See ** below] (040)
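Pat's distinction here can be sketched in toy Python (all names invented for the example): a bare vocabulary supports no inference at all, while even two small axioms about the terms' referents do carry knowledge.

```python
# Hypothetical illustration: a bare vocabulary versus a set of assertions.
# All names ("Dog", "Animal", "Fido") are invented for the example.

# A vocabulary alone: just labels; it claims nothing about the world.
vocabulary = {"Dog", "Animal", "Fido"}

# Knowledge lives in statements ABOUT the referents of those terms,
# modelled here as subject-predicate-object triples (axioms).
axioms = {
    ("Dog", "subClassOf", "Animal"),
    ("Fido", "instanceOf", "Dog"),
}

def entails_instance(axioms, individual, cls):
    """One inference step: x instanceOf C and C subClassOf D => x instanceOf D."""
    if (individual, "instanceOf", cls) in axioms:
        return True
    return any(
        (individual, "instanceOf", c) in axioms
        and (c, "subClassOf", cls) in axioms
        for c in {s for (s, p, o) in axioms}
    )

# The vocabulary licenses no conclusions; the axioms do.
print(entails_instance(axioms, "Fido", "Animal"))  # True
```

The point of the sketch: nothing can be derived from `vocabulary` alone, while the axioms license a conclusion that is stated in none of them individually.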
>Again, you seem to infer that your interpretation of a meaning is the
>only one. Your interpretation is correct, but there are wider
>interpretations out there that I beg you to consider before you come
>to your conclusions. A term is many things. Look it up. (041)
[**] I did look it up on Wordnet (as I presume
your URI pointer was supposed to indicate) and I
find the main meaning given there to be (042)
* S: (n) term (a word or expression used for
some particular thing) "he learned many medical
terms"
Verb (043)
   * S: (v) term (name formally or designate with a term)
Which is the sense I use it in, where it is linked in meaning to "terminology". (044)
I see (reading ahead) that you are using the
third sense, as in "terms of the treaty", where
it means roughly the propositions contained in
the treaty (or ontology). That is not
conventional usage in the field, so you run the
risk (as we are here illustrating) of being
seriously misunderstood; but OK, yes of course if
you include the entire apparatus of the ontology
under the phrase "terms of the ontology" then
defining the terms indeed amounts to creating the
ontology. (045)
>Terms are just, well,
>> terms. The knowledge is represented by larger
>> structures - axioms, sentences, diagrams, texts,
>> ontologies, topic maps, whatever - which
>> themselves contain and use the terms and, in the
>> final analysis, give the terms meaning.
>
>They are all simply 'terms' as agreed convention. A term is an agreed
>convention, Pat.
>What you name above are all terms. So maybe I need to clarify: the
>'terms of an ontology' are not just the words, but all the knowledge
>representation artifacts used
>to define the ontology (046)
Thanks for the clarification, indeed it helps a
lot. I think I will not be the only one who
misunderstood you here. I'd suggest using a less
misleading terminology if you possibly can. (047)
> >
>> 2. Just an aside, but this sentence seems to
>> indicate a misunderstanding about how ontologies
>> are actually built these days:
>
>Please note, that ontologies these days are not built very well (048)
Possibly; but the point you were making implied a
particular social/methodological
division/distinction which I think is not
actually very strong or germane. There may well
be others, of course, which are. (049)
> > "Among the barrier to adoption for Ontology,
>> current research identifies not only different
>> linguistic, conceptual and cultural differences,
>> but also knowledge and point of view differences
>> that set apart academics who generally develop
>> ontologies and related tools and methodologies
>> from experts who understand lingo and the
>> dynamics - system developers programmers,
>> systems designers and end users at large."
>
>I am referring specifically to some realities in current environments
>- distributed knowledge
>- distributed organisations
>- collective decision making
>- collective knowledge building (050)
"Collective and distributed" is one matter. The
existence of a particular social barrier between
"academics" and "experts" is another claim. I do
not dispute the importance of providing
distributed means for collective ontology
creation. I do however resist (and have seen no
evidence supporting) a claim that there is an
important [academic]/[implementors+users]
cultural 'gap' that needs to be bridged. (051)
>see here
>
>Ontology-Driven Intelligent Decision Support of OOTW Operations,
>Alexander Smirnov, Michael Pashkin, Nikolai Chilov,
>Tatiana Levashova, St. Petersburg Institute for Informatics and
>Automation of the Russian Academy of Sciences,
>ieeexplore.ieee.org/iel5/9771/30814/01427137.pdf (052)
Unfortunately I do not have access to
IEEE Xplore. Is there an openly readable copy, do
you know? (053)
>Ontology Engineering: A Reality Check. Elena Paslaru Bontas Simperl
>and Christoph Tempich, Free University
>of Berlin, Takustr. ...ontocom.ag-nbi.de/docs/odbase2006.pdf (054)
An excellent paper, I agree, which everyone should read. (055)
...
>
> > 3. You want GPL or public licencing. But semantic
>> web ontologies are just like Web pages: they are
>> open to all. You can copy them using HTTP. Why do
>> you think that licencing is even an issue on the
>> Web?
>
>This was not one of my original requirements, and someone added it in.
>I think under GPL one can transform things, but the original version
>retains original attribution,
>so it is useful to track the evolution of the thing (056)
Yes, and I stand corrected on this point. Others
have pointed out to me that free access (as on
the Web) does not automatically give, for example,
rights to copy. So open licencing is
indeed an issue. (057)
>
>>
>> 4. Ontologies should "declare what high-level
> > knowledge it references". Again, this is a
>> non-issue. By design, OWL ontologies may
>> reference ("import") other ontologies, and these
>> references are part of the ontology, by
>> definition. So yes, of course they "declare" in
>> this way. Do you have some other mechanism in
>> mind? (The "named graph" proposal allows
>> ontologies to make explicit assertions about
>> other ontologies, such as agreeing with it,
>> disagreeing, basing itself on it, warranting the
> > truth of it, etc.. ; is this what you have in
>> mind?)
>
>i am not thinking upper ontology here, but domain ontology (058)
I also was referring to domain rather than upper. (059)
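For what it is worth, the import mechanism described in the quoted point above can be sketched abstractly (toy Python, invented ontology names drawn from the emergency-response example): an ontology's declared imports are part of it, and its effective content is the union over the transitive import closure.

```python
# Toy sketch of import declarations; the ontology names and axioms are
# invented for illustration, not taken from any real ontology.

ontologies = {
    "EmergencyResponse": {"imports": ["RedCrossCore"],
                          "axioms": {("Shelter", "subClassOf", "Facility")}},
    "RedCrossCore":      {"imports": [],
                          "axioms": {("Facility", "subClassOf", "Resource")}},
}

def import_closure(name, seen=None):
    """Collect axioms from an ontology and everything it transitively imports."""
    seen = set() if seen is None else seen
    if name in seen:            # guard against import cycles
        return set()
    seen.add(name)
    axioms = set(ontologies[name]["axioms"])
    for imported in ontologies[name]["imports"]:
        axioms |= import_closure(imported, seen)
    return axioms

print(len(import_closure("EmergencyResponse")))  # 2
```

So the "declaration" is not an optional annotation: the imported content simply belongs to the importing ontology's meaning.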
>
>No, for example
>in defining an open ontology for Emergency Response (purpose: build
>open-source software to be used in emergencies)
>it is necessary to make statements as to what things are (what
>entities are being referenced, for example) and what words are used to
>describe them
>(controlled vocabulary). We have some experts that have a lot of
>knowledge, who have worked for the Red Cross, for
>example, for many years, who tell us that such and such are the
>entities and such and such the terminologies to be used. 1. Other
>experts with other backgrounds disagree. 2. If an expert references 'A'
>he should also declare the source of his knowledge that is being
>referenced. (060)
OK, I see you were using 'source of knowledge'
far more broadly. OK, I will agree then with your
requirement. But we must recognize that many
ontologies have no such 'source' in this sense,
or may represent a distillation from many such
sources. I look forward to a future in which
ontologies themselves are considered to be
definitive sources of knowledge, so that your
'broad' view and my 'narrow' view may become
closer in scope. (061)
>
>> 5. It should "declare what kind of
>> reasoning/inference supports/it is based on" .
>> Again, a non-issue. This is like asking that a
>> bridge should have a label on it saying what kind
>> of bridge it is. Of *course* any ontology will be
>> written in some language which supports some
>> kinds of inference. That is why such language
>> specifications include a detailed semantics.
>
>no. I mean, for example:
>in diagnosing a patient's disease, which reasoning do I follow?
>a) allopathic doctor 1 follows one reasoning, and gives treatment a
>b) allopathic doctor 2 follows another reasoning and gives treatment b
>c) homeopathic doctor follows homeopathic reasoning and gives a totally
>different treatment
>d) Chinese doctor has another view of the illness altogether (wind,
>air excess), therefore his treatment is totally different (062)
OK, I think I see what you mean, kinda, more or
less. We were at cross purposes. But I don't
think you are describing it properly (or at any
rate, your usage here is likely to be
misunderstood by more people than just me.) It
isn't at all clear that these various doctors are
REASONING differently, in the sense of using
different logical principles. In fact it is
likely they are not. They are however reasoning
from different sets of assumptions, and reasoning
about different concepts and using different
views of what is important or salient. In other
words, they are using different ontologies :-) (063)
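A minimal sketch of this point (invented facts and rules, in Python): the doctors can share the very same inference procedure and still reach different treatments, because they start from different assumption sets.

```python
# Toy illustration: identical reasoning engine, different assumptions.
# The rules and "treatments" below are invented for the example.

def forward_chain(facts, rules):
    """Apply 'if premise then conclusion' rules until nothing new follows."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

symptoms = {"fever"}

allopathic_rules = [("fever", "infection"),
                    ("infection", "prescribe antibiotic")]
homeopathic_rules = [("fever", "imbalance"),
                     ("imbalance", "prescribe remedy X")]

# Same logical principles; the divergence comes entirely from the rule sets.
print(forward_chain(symptoms, allopathic_rules))
print(forward_chain(symptoms, homeopathic_rules))
```

The engine (`forward_chain`) never changes; only the background theory does, which is exactly the sense in which the doctors "use different ontologies".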
>
>unfortunately, often it is imposed on us as 'given' that a) is the
>reasoning, to the point that the reasoning is not even
>declared (064)
I'm really not sure what would count as "declaring
reasoning". Take your example of the various
doctors, and suppose each of them writes an
ontology. What would you accept as a reasonable
declaration of the reasoning in each case? Not
the ontology itself, but the reasoning associated
with (what? Behind? Giving rise to? to be used in
the context of?) the ontology. Just in broad
outline, to give an idea of what you mean. (065)
>
>> Given this, then, what this amounts to is that
>> the ontology should identify what language it is
>> written in. Which is a good idea, but again a
>> solved problem, so a non-issue, at least if it is
>> written using XML; since the XML spec provides
>> for just such declarations using the XML header.
>
>not just language, but what 'theory', philosophy is behind the reasoning
>. see the doctors example above
>they are all doctors, yet they all prescribe
>different medicines and treatments
>is this the scientific method? then declare what scientific method
>you are referring to (066)
Declare how? What would one say? Would a
reference to a cultural tradition do? Or are you
asking for a formalized logic to be used on the
ontology (as I was presuming)? Or something in
between? What? (067)
What is the purpose of this declaration? Will it
influence how the ontology is to be processed by
machines? (I presume not.) Knowing the purpose
might help answer the above questions. (068)
> > 6. It should "support queries via natural
>> language as well as machine language" Whoah
>> there. Supporting queries in natural language is,
>> at the present time, close to science fiction.
>
>I am not doing present, I am doing future here. (069)
So am I, but I am considering the near or foreseeable future. (070)
>I already try this everyday on google
>'who is pat hayes' is natural language. (071)
Not as understood by Google, it's not. "Who is"
simply tells Google to use a specialized set of
criteria in its next retrieval. And what you get
back certainly isn't natural language. (072)
>I agree that the result I get is not as good as it should be yet, but
>allow me to be ambitious
>we should be able to improve on that if we designed ontologies that
>can be queried
>by search interfaces (073)
Well, allow me in turn to ask that your ambition
is based on some acquaintance with the state of
this particular art. People have been trying to
do this since the beginning of AI, and more
recently under huge commercial pressure and with
large resources, with very limited success. (074)
>
>At
>> best it is a research ambition which is at the
>> cutting edge of AI research. And in practice, it
>> doesn't work very well (ask CyCorp about their
>> experiences.) It is, in any case, well beyond
>> what it is reasonable to ask of any kind of
>> standardized protocols. This is way too ambitious.
>
>no - I insist Pat - I can see it is not far (075)
And I insist that it is. My insistence is based
on a fairly close contact with people actively
working in the field. What is yours based on? (076)
> >
>> (By the way, what exactly do you mean by "machine
>> language" here? Do you mean formal language?
>> Humans can learn to use formal notations.)
>
>I mean that can be parsed by a computer (077)
OK, thanks. (078)
> >
>> 7. "It should be 'easy to understand' by generic
>> users without specialized skills" Again, way too
>> ambitious. I'm not sure it even makes sense. If
>> you can't read or understand L, you won't be able
>> to read a text written in L. This seems obvious
>> whether L is English, Spanish or OWL. Is having a
>> grasp of Spanish a 'specialized skill'?
>> Personally I find OWL easier than, say, Russian.
>
>
>yes, sure Pat
>but again, ask a Russian undergraduate student, and he will give you a
>different view,
>won't he? (079)
But he might find OWL easier than English. My
point is merely that NL is not automatically more
humanly usable than what are often called
'formal' notations. (080)
>
>I mean that my developers are brilliant, and may be skilled at PHP,
>but not at emergency nor at whatever other domain. A PHP developer
>writing software today should be able to reference existing
>knowledge (what Rex has picked up on in a separate thread) by using an
>open ontology without being either a domain expert or trained in
>Protege or another ontology editor. (081)
Well I will agree with that 'should', and I
didn't mean to imply that any of the current
interfaces are ideal. We have been working to
improve this situation ourselves. But I think
there is an important methodological point here.
If one looks at a system consisting of a human or
humans interacting with 'formal' editor/composer
software, the human is by far the more adaptable
component in the system. It is often good
engineering to require the humans to do some
limited adaptation to the machine, if that can be
done at relatively low cost, rather than to think of the
'naive human user' as a fixed target to which the
machine must, at any cost, be made to conform. So
just a bit of training to use an ontology editor
might be acceptable, if the resulting combination
of user+editor is more effective than forcing
users to interview a near-dyslexic NL interface
to find out what it 'knows'. (082)
> >
>> But the central point is that an ontology, by its
>> very nature, is ultimately a text written in some
>> language; and so to understand it, you have to
>> know that language. (And to ward off possible
>> misunderstanding, I'm here using 'language'
>> broadly to include, eg, map-making and
>> diagrammatic conventions; so that for example
>> circuit diagrams or flowcharts or social networks
>> displayed as graphs are all kinds of language.
>> The basic point still applies.)
>
>I think diagrams and vocabularies should be sufficient - I think we
>should be able to simplify
>our standards so that they are accessible; see some of the papers
>referenced in the list above (not just my idea) (083)
Yes, I agree, though I'm not sure they need
simplification so much as re-description. In fact
I think OWL, the current standard, can be made
pretty easy for lay users to understand. Most of
the trip-up issues have to do with making clear
the limitations of the notation rather than the
basic concepts, and many limitations can be
imposed (and hidden) by good interface design. (084)
> >
>> So trying to draw a contrast between 'generic
>> users' and 'academics' or whatever isn't helpful,
>> seems to me. What might be more use is to ask,
>> how long does it take to learn the relevant
>> language? Can we find ways of displaying
>> ontological content to make it easier to learn?
>also that for sure
>> (We have been trying to do this in the COE system
>> for OWL, for example, and VivoMind are focusing
>> on CLSE 'structured English'. But you still have
>> to learn to use COE - it takes about a day - and
>> its a lot easier to read CLSE than to write it.)
>>
>>
>> 9. "It should be implementation independent;
>> this means not only usable by OWL/DAML model but
>> also reusable by alternative ontology languages"
>>
>> What does this even mean?
>it means that I am not talking about 'domain knowledge' being expressed in
>OWL - I can see that in your world ontology=owl (085)
You couldn't be more wrong, believe me. But in
the actual world, this is more true than I wish
it were. (086)
But my point is that any particular ontology is
going to be represented in SOMETHING: it might be
OWL or CL or GOFOL or Prolog or RDF or Concept
Graphs or CGIF or who knows what. But it can't be
represented in nothing, and it can't be
represented in some kind of supervening
Übernotation, because there is no such universal
notation. So what does it mean to require that
ontologies be "implementation independent" (by
which I presume you mean, (formal) notation
independent)? (087)
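The point can be illustrated with two invented toy notations (one Turtle-like, one CLIF-like): the same abstract content is recoverable from each, but there is no notation-free residue, so "notation independence" can only mean translatability between concrete notations.

```python
# Toy illustration; both mini-notations below are invented for the example.
# The same subclass axiom is written in two different concrete syntaxes.

def parse_turtle_like(text):
    # "ex:Dog rdfs:subClassOf ex:Animal ."  ->  (subject, predicate, object)
    s, p, o = text.rstrip(" .").split()
    return (s, p, o)

def parse_clif_like(text):
    # "(rdfs:subClassOf ex:Dog ex:Animal)"  ->  the same abstract triple
    p, s, o = text.strip("()").split()
    return (s, p, o)

t1 = parse_turtle_like("ex:Dog rdfs:subClassOf ex:Animal .")
t2 = parse_clif_like("(rdfs:subClassOf ex:Dog ex:Animal)")
print(t1 == t2)  # True: the content coincides, though neither syntax is dispensable
```

Either parser could be discarded in favour of the other, but not both at once: some concrete notation always has to carry the triple.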
>
>in my world ontology=knowledge representation (owl independent)
>
>I am advocating freedom from OWL Pat, (088)
Oh, so am I! I *detest* OWL. I think adopting OWL
as a Web standard was a huge mistake.
Nevertheless, here it is, being used, and we all
have to get used to it. But not being OWL is one
thing, and being "independent" of all formal
notations is quite another. (089)
BTW, if I may blow a different trumpet for a
second, the best candidate so far for a single
overarching KR notation is I think the IKL
language Chris Menzel and I recently developed
(closely modelled on KIF). An ontology written in
IKL can for example describe concepts defined in
most other formal languages and in other
ontologies, referring to those languages and
ontologies by name, and relate their meanings to
one another in ways that are sensitive to the
local context, if required. But IKL in its
current incarnation is probably too 'logicky' for
widespread use, and there are not as yet any
complete formal reasoners for IKL. (090)
> >
>> 10."it should support one view of the world if
>> required, and allow for simultaneous multiple
>> views, meaning that it should aim to be perfectly
>> elastic, flexible and adaptable,"
>> I'm not sure what this means, but it sounds
>> either trivial or impossible.
>
>
>Pat, an ontology is simply a view of the world. (091)
A formalized view of the world, or part of it. Yes. (092)
>I am saying that
>models of reality are more
>useful and more faithful to the real world when they model more than one view (093)
That sounds like multiple ontologies. I agree
they can be important. But they can also be a
damn nuisance. How many different views of time
do we need? How many different ontologies of
geographical space? (094)
However I note you speak of a single model
modelling more than one view, rather than
multiple models. Well, again I will express
limited agreement but with cautionary notes.
Sometimes the multiplicity can be harmful or
obstructive. Sometimes it is better to use a
particular view for a particular purpose. And
there is a huge caution to insist on here with
regard to including multiple 'views' in a single
formalization. It is very easy to get confusions
and inconsistencies unless great care is taken to
keep divergent concepts separated from one
another. You might find the literature of
contextual reasoning and the use of
'microtheories' in Cyc illuminating in this
regard. (095)
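A rough sketch (invented data) of the microtheory idea just mentioned: mutually inconsistent views are kept in separate named contexts, and statements are evaluated relative to one context rather than against the naive merge of all of them.

```python
# Toy illustration of context-relative assertion; the contexts and
# statements are invented for the example.

microtheories = {
    "NewtonianMt":    {("time", "is", "absolute")},
    "RelativisticMt": {("time", "is", "frame-relative")},
}

def holds_in(context, statement):
    """Evaluate a statement only inside the named context."""
    return statement in microtheories[context]

# Each view is coherent on its own; merging them naively would assert
# both that time is absolute and that it is frame-relative.
print(holds_in("NewtonianMt", ("time", "is", "absolute")))     # True
print(holds_in("RelativisticMt", ("time", "is", "absolute")))  # False
```

The design choice is that inconsistency is quarantined by construction: no query ever sees the union of divergent views unless it asks for it explicitly.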
>Impossible? I would reconsider that statement soon Pat... (096)
Check more carefully what I said was impossible.
Of course we can have multiple views of things.
We have that now, and it is a major source of
problems. (097)
>thanks again for giving me the opportunity to clarify further where I
>am coming from (098)
Yes, the clarifications were extremely helpful. I
hope I have managed to make my points clearer
also. (099)
Pat (0100)
--
---------------------------------------------------------------------
IHMC (850)434 8903 or (650)494 3973 home
40 South Alcaniz St. (850)202 4416 office
Pensacola (850)202 4440 fax
FL 32502 (850)291 0667 cell
phayesAT-SIGNihmc.us http://www.ihmc.us/users/phayes (0101)
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx (0102)