Mike Bennett wrote: (01)
> I also just realized I made a mistake - I was confusing owl:sameAs with
>owl:equivalentClass. Sorry about that. (02)
So did I. Usually I remember to just list both when I am talking about
synonyms. We were both thinking 'synonym', which is a linguistic idea
associated with (some) common semantics, but as Simon pointed out, the
difference in the formal semantics (of OWL) is significant. (03)
> I find the term "term" a little slippery in its many applications. Presumably
>then in Linked Data "term" refers to a label (with a URI) that denotes an
>individual in the world or a fact about that individual? Or does it also refer
>to TBox assertions about a class of possible individuals? Elsewhere we tend to
>use the word "term" in the latter sense almost exclusively. (04)
As I said to Matthew West, I dislike the idea of a 'possible individual', but I
think what Mike means is "a class which may possibly have individual members in
a given UoD." (05)
Now, except for the mis-definition in SBVR (in which a 'term' is the
relationship between an expression and a concept), I think we can all agree
that a 'term' is a linguistic expression in the role of reference to a
<something>. In formal terminology, a term is said to *connote* a general
concept (an intension, a predicate), which may or may not be what is meant by
"class", and to *denote* its members -- the things in a given UoD that satisfy
the predicate. And in our speech, both formally and informally, we use both of
those relationships. In describing a TBox, we are actually saying things about
BOTH the nature of the intension (the connotation) and the properties of each
individual in its extension (the denotation). That is, the term 'term' is not
"slippery"; it is understood to have two simultaneous notions of 'referent'. (06)
In natural language, we use other linguistic elements to distinguish the
connotation from the denotation. If we say "the concept 'snark'" or "the class
'snark'", or the like, we are clearly restricting the term to its
connotation. And when we quantify it, "two snarks walked into a bar...", we
are restricting the term to its denotation. And in cases that appear to be
definitional: "A snark is a boojum", we are probably using the term with both
referents: the class 'snark' is a subclassOf the class 'boojum', and "a" (any
given) snark is also an instanceOf 'boojum'. (07)
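Rendered as triples (again with invented ex: names), those two readings of "A
snark is a boojum" look roughly like this:

    @prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix ex:   <http://example.org/> .

    # Connotation reading: the class 'snark' is a subclass of the class 'boojum'.
    ex:Snark  rdfs:subClassOf  ex:Boojum .

    # Denotation reading: any individual that is a snark is thereby a boojum.
    ex:snark42  rdf:type  ex:Snark .
    # From the two triples above an RDFS reasoner infers:
    #   ex:snark42  rdf:type  ex:Boojum .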
In formal languages, if a 'term' can only refer to one 'thing', then you may
have to decide whether that 'thing' is an individual or a class, as in OWL, but
not in RDF. Further, formal logic (and other) languages may have issues about
'namespace segregation': whether everything that can be referenced by a 'term'
has to be in the UoD (which includes the question of whether a "class" can BE
an "individual"), and whether the occurrence of an expression in a given
syntactic position is a term in the 'class namespace' or a term in the
'individual namespace', etc. So we have to be careful to distinguish the
library science view of 'term' from the computational linguistics view of
'term'. (08)
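One concrete instance of the 'can a "class" BE an "individual"' question is
OWL 2 punning, sketched below with made-up ex: names (and eliding the ancillary
declarations an OWL 2 DL tool would also want). Under the Direct Semantics the
two uses of ex:Eagle are treated as referring to different things; under plain
RDF (and the RDF-Based Semantics) there is just one resource:

    @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
    @prefix owl: <http://www.w3.org/2002/07/owl#> .
    @prefix ex:  <http://example.org/> .

    # ex:Eagle used in the 'class namespace':
    ex:Eagle  rdf:type  owl:Class .
    ex:harry  rdf:type  ex:Eagle .

    # ex:Eagle used in the 'individual namespace':
    ex:Eagle  rdf:type  ex:Species .
    ex:Eagle  ex:conservationStatus  ex:Endangered .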
> So, what does the duality of denotation and reference mean when it comes to
>re-using existing ontologies? (09)
By "reference" here, Mike means 'connotation'. The whole point of the above is
that 'reference' could be either or both. And in LOD land, which thinks in
RDF, there is no distinction between an "individual" and a "class" -- they are
all "things" and an RDF 'term' refers to exactly one of them, whether it
"exists" or not. It is the ultimate forgiving Universe of Reference. This
makes it possible to say all kinds of things, without many formal restrictions.
In RDFS and OWL, you can formally interpret a 'term' as a signifier for a
"class", which is itself a subclassOf "thing". But imposing RDFS and OWL and
N3 and ... (the Semantic Web technologies) on Linked Open Data is still very
much the research issue we are discussing. (010)
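For concreteness, a tiny RDF-only sketch of that forgiving Universe of
Reference (all ex: names invented):

    @prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix ex:   <http://example.org/> .

    # Nothing requires the referent to "exist" before we describe it:
    ex:Snark  ex:color     "invisible" ;
              ex:huntedBy  ex:TheBellman .

    # And nothing segregates class-talk from individual-talk: the same
    # IRI can be typed as a class and described like any other thing.
    ex:Snark  rdf:type     rdfs:Class ;
              ex:coinedBy  ex:LewisCarroll .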
> Are OWL ontologies which have been optimized for reasoning applications, more
>or less useful as a point of reference for linked data? (011)
That is exactly the research question, though probably not best stated so
generally. The question is whether OWL ontologies optimized for reasoning are
(have proved) useful in SOME LOD applications, and if so, whether we have
learned anything of value to the overall discipline FROM those applications. (012)
> For instance do you find that those ontologies end up with fewer assertions
>that can be linked directly to some ABox term? (013)
Sorry, Michael, but I can't figure out what this is a measure of (?) or a usage
recommendation for (?). (014)
> ...
> Horses for courses - so, what would it take to put some useful language
>around the parameters of those courses and how to pick the right horse? That
>would move things along from the "A is right!" "No, B is right therefore A is
>wrong!" type of conversations that we sometimes see. (015)
+1! I think what I said above following "the question is..." is the
generalized version of 'picking the right horse', but Mike is narrowing it to
recommendations for OWL ontology development practice. (016)
> Best regards, (017)
> Mike (018)
Best,
-Ed (019)