Hi Ed,

Sounds like we agree (well, you make sense to me, even if I don't to you);
it's just that my understanding of "ontology" and "semantics" differs from
yours. My "ontology" is my model of the world, and I don't care how it's
represented so long as I know how the symbols in the representation map onto
the things in my ontology (things whose extent is identified). It is
essential that the representation is computer-interpretable, as
interoperability is the goal of IDEAS - which still doesn't rule out
barcoded body parts, you'll note.

I still can't see RDFS and RDF as anything other than syntax (sorry, we
might have to disagree on this). However, as long as I document how that
syntax maps to the concepts in the ontology, it's OK with me.

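To make that concrete, here's the sort of documented mapping I have in mind,
sketched in Python with rdflib (the ideas: namespace and names are invented
for the example, not our real ones):

    # Sketch: using rdfs:Class purely as syntax, with the mapping to the
    # real-world concept documented alongside it. Names are illustrative.
    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    IDEAS = Namespace("http://example.org/ideas#")  # hypothetical namespace
    g = Graph()
    g.bind("ideas", IDEAS)

    g.add((IDEAS.Ship, RDF.type, RDFS.Class))
    g.add((IDEAS.Ship, RDFS.comment, Literal(
        "Denotes the set of all ships, past and present. Its extent is "
        "the physical ships themselves, not records about them.")))

    print(g.serialize(format="turtle"))

The rdfs:comment carries the mapping for humans; the machine only ever sees
the syntax.
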
Also, you wrote:

"I think Barry's point is that your IDEAS RDF language is so massively
extended and possibly so weakly defined with respect to interpretation
semantics that no one, including you, has any idea what kind of reasoning
engine could actually process it in a non-trivial way."

I don't think Barry Smith has looked at IDEAS. My point was about his
critique of ISO 15926 (http://ontology.buffalo.edu/bfo/west.pdf). If you can
use something as arcane as EXPRESS (and Part 21/28) to model your ontology,
then bending RDFS to represent an extensional ontology really ought to be
kosher! I agree, though, that you have to be crystal clear about how the
RDFS elements map to the ontology.

Another clarification is probably needed. I only mentioned reasoning because
projecting to a first-order representation is one useful work-around for
making a higher-order ontology tractable. The world is higher order, so we
decided to make the IDEAS ontology higher order. If the level-five wizards
want to roll their 36-sided dice of inference, then I see no reason (groan)
to spoil their fun. I'm even prepared to believe that machine reasoning has
some use outside the laboratory (in fact, Bill Andersen changed my mind
about the utility of reasoning when I saw one of his demos). We didn't
intend IDEAS for that purpose, though, nor was 15926 developed with the
orc-slayers in mind.

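For anyone who wants to see the work-around in miniature, here's a toy
rdflib sketch of the first-order projection (all the ideas: names, including
the typeInstance property, are invented for illustration):

    # Toy projection of a higher-order ontology into first order: a new
    # property, not rdf:type, carries the type-instance relationship, so
    # types of types are just ordinary resources to the standard tools.
    from rdflib import Graph, Namespace, RDF

    IDEAS = Namespace("http://example.org/ideas#")  # hypothetical namespace
    g = Graph()
    g.bind("ideas", IDEAS)

    # Our own property stands in for instantiation.
    g.add((IDEAS.typeInstance, RDF.type, RDF.Property))

    # HMS_Victory is an instance of Ship; Ship is an instance of
    # ShipClass (a type of types). Both are plain first-order triples.
    g.add((IDEAS.HMS_Victory, IDEAS.typeInstance, IDEAS.Ship))
    g.add((IDEAS.Ship, IDEAS.typeInstance, IDEAS.ShipClass))

A first-order reasoner never sees a class instantiating a class, so nothing
blows up; the higher-order reading lives in our documentation of
ideas:typeInstance.
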
Cheers
--
Ian

-----Original Message-----
From: Ed Barkmeyer [mailto:edbark@xxxxxxxx]
Sent: 04 February 2009 20:14
To: ian@xxxxxxxxxxxxxxxx
Cc: '[ontolog-forum] '
Subject: Re: [ontolog-forum] RDF & RDFS (was... Is there something I missed?)

Ian Bailey wrote:

> When I say semantics, I am referring to stuff in the real world.

Is this an allusion to the "intension" v. "extension" discussion?

When I say "semantics" I mean the concepts that are intended by general
terms and the individuals that are intended by "names". The concepts
become "stuff in the real world", when I fix a present world of interest
-- e.g., "order" becomes the current outstanding orders, and "shipment"
becomes the related shipments, and possibly any other unexpected box
that shows up on our loading dock. (I think of that as the conversion
from "intension" to "extension", but I surely don't want to get into the
level of concern in Chris Menzel's treatise.)

RDF defines a grammar with a formal interpretation of the sentences that
conform to that grammar. That formal interpretation is the "semantics"
of the RDF constructs. And it is "formal" in two ways: (1) it is
well-defined for the purpose of doing certain kinds of reasoning; (2) it
makes no interpretation of the terms you define, other than that you
define them. RDFS adds a set of terms with their own formal
interpretations ("semantics"), thus adding to RDF a richer base language
with terms that are well-defined for the purpose of more kinds of
reasoning.

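To illustrate the first point - that the interpretation is well-defined for
doing certain kinds of reasoning - here is a toy rdflib sketch of the one
extra conclusion that the RDFS reading of rdfs:subClassOf licenses (the ex:
terms are invented for the example):

    # Applying the standard RDFS subclass entailment rule (rdfs9) by hand.
    # RDF says nothing about what Shipment or Delivery mean; the RDFS
    # semantics of rdfs:subClassOf licenses exactly this extra triple.
    from rdflib import Graph, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/ex#")
    g = Graph()
    g.add((EX.Shipment, RDFS.subClassOf, EX.Delivery))
    g.add((EX.box42, RDF.type, EX.Shipment))

    # rdfs9: if ?x rdf:type ?C and ?C rdfs:subClassOf ?D,
    # then ?x rdf:type ?D.
    inferred = set()
    for c, _, d in g.triples((None, RDFS.subClassOf, None)):
        for x in g.subjects(RDF.type, c):
            inferred.add((x, RDF.type, d))
    for t in inferred:
        g.add(t)

    assert (EX.box42, RDF.type, EX.Delivery) in g
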
> When a
> computer scientist says "semantics" they are usually, as far as I can see,
> referring to structure of data. They get the two things mixed up a lot, but
> that's probably because they've played too much World of Warcraft.

My job title says "computer scientist", and I don't do any of the above.
Other persons called "computer scientists" may do all of them. As Ben
Franklin observed, you can call an ox a bull, but he would only like to
have the missing requisites.

> There are
> no REAL WORLD SEMANTICS in RDF and RDFS, and probably not in OWL either.

Whatever that means, I'm sure it's true.

> None of these languages has criteria for extent (at least not real-world
> extent), so I am free to use them to describe what I choose, and have done
> so with gay abandon.

Absolutely. You can talk about 2-headed ogres if it please you.

> I get some benefit from descending my type from
> RDFS:class because there is some set-theoretic stuff in there I can re-use.

Uh, yeah. The set-theoretic stuff is part of the _semantics_ of RDFS:class.

> But...and this is important...being set-theoretic doesn't make it semantic.
> I can have nonsense sets that refer to nothing in the real world.

Of course. You can also have quite sensible sets (terms) that refer to
no individual in a given world of interest. My company may have no
outstanding orders (at present).

> If you're prepared to play fast and loose with RDF and RDFS, you can do
> things like having first-order representations of higher-order ontologies -
> i.e. you don't use rdf:type for types of types, you simply use a new RDF
> property for that.

Of course. It is an extensible language. You can add terms and give
them such meanings as you will, as long as those meanings are consistent
with the meanings of the RDF concepts that are used to define them. You
can create any concept you like and call it an RDFS:class, as long as it
doesn't violate the axioms of RDFS:class, like "the set-theoretic stuff".

> Silly, I know, but the inference wizards of Warcraft like
> everything to be nice and flat. Something to do with finite computation
> time...though taking 24hrs to make an inference with all the deductive power
> of a red setter *is* acceptable in flat world. Not sure it'll ever fly in
> business, mind you.

Actually, the inference wizards build reasoning engines that use the
established semantics of the base language, together with accepted ideas
like 'modus ponens', to draw conclusions. We use the languages so as
to be able to use those engines to get those conclusions.

But you can certainly use the language for other purposes. You can
build your own reasoning engine (thereby promoting yourself to wizard,
if you have enough experience points and gold) that implements the
additional semantics of your extended language, in order to draw more
conclusions than the off-the-shelf engines do. Ph.D. students do this
all the time.

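A toy version of such an engine, for a single invented extension term
(ex:partOf, which *we* declare transitive - RDF and RDFS say nothing about
it), might look like this:

    # A naive forward-chaining engine for one extension term: it
    # implements the extra semantics we assigned to ex:partOf
    # (transitivity) by iterating to a fixed point. Off-the-shelf RDFS
    # reasoners know nothing about this property.
    from rdflib import Graph, Namespace

    EX = Namespace("http://example.org/ex#")
    g = Graph()
    g.add((EX.piston, EX.partOf, EX.engine))
    g.add((EX.engine, EX.partOf, EX.car))

    def close_transitive(graph, prop):
        """Add triples until prop's transitive closure is reached."""
        changed = True
        while changed:
            changed = False
            facts = list(graph.triples((None, prop, None)))
            for a, _, b in facts:
                for b2, _, c in facts:
                    if b2 == b and (a, prop, c) not in graph:
                        graph.add((a, prop, c))
                        changed = True

    close_transitive(g, EX.partOf)
    assert (EX.piston, EX.partOf, EX.car) in g
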
> None of this should be a surprise.

It isn't.

> I will say this
> though - it's really easy to confuse the representation of the ontology with
> the ontology itself.

We need to be careful about the term "ontology" here. I understand the
term "ontology" to refer to a representation of knowledge in a form
suitable for automated reasoning (i.e., using some well-defined grammar
and base semantics).

With that definition, it is not easy to confuse them -- the ontology IS
the representation. The ontology is not the knowledge, or the knowledge
model that is in your head; it is that knowledge (model) as captured in
the language. It is that part of the knowledge that is actually being
communicated to the automata.

Whether the ontology may also serve to convey more of the knowledge and
your internal knowledge model to another human is a separate question.
That can happen when you put a lot of 'documentation' text in the
ontology that is meaningful to the human but unparseable to the
automaton. It can also happen when the human shares much of the
connotation of the terms that is not actually specified in the ontology.

If by "ontology", you mean the knowledge (model) you have in your head,
that is out of the realm of computer science and software engineering.

> If I decide I'm going to tattoo my ontology in barcode
> on my left buttock, that is also an acceptable approach (technically I mean,
> not socially) provided I have documented how the barcode/arse-cheek
> combination maps onto the real world.

No, it isn't. What is required is that the meaning of the barcodes is
well-defined for the automaton that can read the barcodes off the
buttock and do something with them. How they map to any "real world" is
your problem. That is how you take the results provided by the
automaton and convert them to knowledge in your head.

> This is what we do when we profile UML
> for IDEAS (as you correctly pointed out). What I think you missed is that
> you can do almost exactly the same thing with RDFS. The first thing we did
> was create instances of rdfs:Class for all our IDEAS ontic categories. From
> that point on, we don't use any of the RDF elements in our encoding. We may
> or may not subtype our type-instance relationship from rdf:type (depends if
> we want to keep the level five orcs happy) and we do subtype ideas:Type from
> rdfs:Class, but this is an engineering decision to allow us to leverage the
> wealth of open-source stuff that's out there.

And that "engineering decision" is a part of what is called "knowledge
engineering". When you finish all of this, you have a knowledge model
captured in a language for some kind of reasoning engine. The 5
open-source stuffed orcs are the engines that are "happy" enough with
that input to produce some useful output.

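If I have understood the profiling move correctly, it amounts to something
like this (URIs invented for the sketch; I have not seen the actual IDEAS
files):

    # Rough reconstruction of the described profiling of RDFS. ideas:Type
    # piggy-backs on rdfs:Class so open-source RDFS tooling still works;
    # the ontic categories are then created as instances of both.
    from rdflib import Graph, Namespace, RDF, RDFS

    IDEAS = Namespace("http://example.org/ideas#")  # hypothetical namespace
    g = Graph()
    g.bind("ideas", IDEAS)

    # The "engineering decision": subtype ideas:Type from rdfs:Class.
    g.add((IDEAS.Type, RDFS.subClassOf, RDFS.Class))

    # Ontic categories as instances of rdfs:Class (and of ideas:Type);
    # from here on, the encoding would use only ideas: terms.
    for category in (IDEAS.IndividualType, IDEAS.TupleType, IDEAS.Powertype):
        g.add((category, RDF.type, RDFS.Class))
        g.add((category, RDF.type, IDEAS.Type))
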
I think Barry's point is that your IDEAS RDF language is so massively
extended and possibly so weakly defined with respect to interpretation
semantics that no one, including you, has any idea what kind of
reasoning engine could actually process it in a non-trivial way.

You can write knowledge in English or Chinese or Aramaic. What makes
written knowledge an "ontology" is that the language has a grammar and
an interpretation of the grammatical constructs that is suitable for
automated reasoning. If most of the desired reasoning depends on your
interpretations of constructs you introduced, that can't happen unless
you build the engine. Without that engine, you have an ox, not a bull.

-Ed

--
Edward J. Barkmeyer                 Email: edbark@xxxxxxxx
National Institute of Standards & Technology
Manufacturing Systems Integration Division
100 Bureau Drive, Stop 8263         Tel: +1 301-975-3528
Gaithersburg, MD 20899-8263         FAX: +1 301-975-4694

"The opinions expressed above do not reflect consensus of NIST,
and have not been reviewed by any Government authority."