(Did you mean to take this off-list? I presume not, given the first line.) (01)
>Two points that Pat Hayes may have overlooked in his response:
>
>The original comment:
>> >[PC]
>> > Interoperability creates its own additional imperatives,
>> yes. I am
>> >convinced that **accurate** interoperability (getting the same
>> >inferences from the same data) requires use of a common foundation
>> >ontology
>>
>
>(1) [PH]
>> As (some variation of) this view is widely held or simply assumed as
>> obvious, allow me to present the case against it. Because if I'm
>> right, our future may be a lot easier and less contentious, and we
>> might be able to support serious interoperability without needing to
>> persuade an agency to fork out $20M and then trying to persuade the
>> entire planet to agree to use the result (which will never happen.)
>>
>
> The whole point that continues to be missed over and over again is
>that an effective standard does not have to have universal agreement,
>just a large enough user base so that **some** third-party vendors will
>find it profitable to build utilities to make the standard easier to
>use, and will implement it in their products. (02)
Point taken. But consider. Any kind of adoption requires some payback
or advantage for the adoptees. What is the reward in adopting such a
large ontological framework? If an entire community does, they
presumably get advantages of being able to communicate better. There
is a very good chance, however, that this advantage would be
increased by their publishing the ontology for everyone to use, so
that they can also communicate with others. Now, how does this differ
overmuch from the vision I was putting forward, the SWeb idea? If
they publish their giant ontology in small pieces, it doesn't differ
at all. Suppose they publish their best attempt at a spatial
sub-ontology, but an EEC group publishes a clearly better one.
Wouldn't they be smart to simply re-use the concepts from it and
deprecate theirs? It's lower cost than rewriting and maintaining a
rival ontology. (This kind of thing is already happening on the
SWeb.) And there is a real danger, if they adopt a 'closed group'
attitude, of rival groups forming and the interoperational problems
being made worse rather than better (think Blu-ray versus HD DVD.) (03)
> Getting a large enough
>group of participants and a proper design of the project would assure
>that the result would get a lot of testing. (04)
There are powerful forces working against you, however. The larger
the group, the larger the technical committee. If this gets to be
more than about 40 participants, it is dead in the water: it is
almost impossible for a technical group this size to come to a
consensus even on something as dry and narrow as a notation, let
alone an ontology. The RDF WG was about 15 active folk, and got its
job done in two years. WebOnt was more like 30 and never resolved
some of its issues, so there are three OWL dialects and another one
on the way. The Rules WG is more like 50 people and will never come
to a consensus except by wearing people out by attrition. It's not as
though we might discover that there aren't serious, deep,
irreconcilable divergences of opinion in writing ontologies. We KNOW
there are: we've all been arguing about them for decades. There are
published standards which take opposing views on some of these
issues. What makes you think that a consensus can possibly be
reached, even if a group is picked at random? (05)
>Perhaps that will not
>suffice, but it is a strategy that has not been tried and should be. (06)
It has been tried, I would say. Cyc and IEEE SUMO are both attempts
at a super-overarching ontology. (07)
>
>(2) [PH] >
>> Of these, we have already created (1) and (2), at least in a beta
>> form, in the current state of the semantic web standards suite. This
>> cost a lot less than $20M as it was largely done by volunteers (who
>> were in many cases 'lent' by their commercial employers for good
> > commercial reasons.)
>
>The credible estimates I have seen have placed the cost nationwide of
>lack of semantic interoperability at about $100 billion per year -
>which at <1% of GNP is plausible in an "information-driven economy".
>
>If the effort at agreeing on a common foundation ontology that was
>organized by Bob Spillers in 1994 had been approved by DARPA we might
>well have saved over a trillion dollars by now. (08)
Not a hope in hell. I was at that summit meeting in Heidelberg, and I
tried for about a year afterwards to host an email discussion to
resolve some of the issues. We had thrashed out an overall breakdown
into main topics. But we never did reach any kind of consensus, and I
don't think we would have reached one even now if we had formed a
group to do it. It would have been terrific fun, but it wouldn't have
produced a
standard. (09)
>Does it not seem reasonable to make some plausible efforts to gain wide
>agreement? If we are losing $300 million per day in lost efficiency,
>why does $20 million seem like such a large amount to make a serious
>effort to solve the problem? (010)
Who are you appealing to? "The economy" isn't an agency. DARPA is
quite firm in its view that writing standards is not its business.
I've been trying to get $$ for years from other US agencies, with
little piddling results every now and then. The EEC just spent a huge
amount on Sem Web and ontologies, and it's my guess that pretty soon
there will be widespread disillusionment with the lack of substantial
progress, followed by an ontology funding winter (like the AI winter
of the 80s when the military discovered that they weren't going to
get intelligent guns in the near future). But look at how the
internet and the Web came into being: nobody mandated them, there
were no Manhattan-Project-style setups needed. Like Topsy, they just
grew, because they could and because people found them increasingly
useful, basically to make money. (011)
> Every lost day is a ten-fold loss over
>the cost of a direct project to solve the problem!
>
>The IKRIS project was, from what I have seen, excellent and the results
>reusable, but limited. (012)
Not all problems solved, of course, but how limited? I now know how
to map between context/modal axiomatic styles, state-based temporal
descriptions, continuant/occurrent talk, and 4-d modelling. The ideas
involved are basically quite simple, and can be summarized as recipes
plus user guides which explain the different 'ways of thinking'
involved. This covers quite a lot, I think most, of the temporal
ontology axiom styles that have ever been used. This feels to me like
serious progress. (013)
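To give a flavor of the kind of recipe involved (this is my own
illustrative rendering of the state-based/4-d contrast, not a
quotation of the actual IKRIS mapping rules): a state-based style
asserts that a proposition holds at a time, while a 4-d style
predicates the property directly of a temporal slice of the thing.

```latex
% State-based (fluent) style: the door d is open at time t.
\mathit{Holds}(\mathit{Open}(d),\, t)

% 4-d style: the temporal slice of d during t is open, where
% d@t denotes the part of d's four-dimensional extent that is
% temporally located at t.
\mathit{Open}(d\,@\,t)

% The mapping recipe is then essentially a rewrite schema:
\forall x\, \forall t.\; \mathit{Holds}(P(x),\, t) \;\leftrightarrow\; P(x\,@\,t)
```

Most of the work in such a mapping is not the schema itself but the
user-guide part: explaining what the slices and fluents are taken to
mean, so that people writing in one style can read the other.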
Pat (014)
>
>Pat
>
>Patrick Cassidy
>CNTR-MITRE
>260 Industrial Way West
>Eatontown NJ 07724
>Eatontown: 732-578-6340
>Cell: 908-565-4053
>pcassidy@xxxxxxxxx
> (015)
--
---------------------------------------------------------------------
IHMC (850)434 8903 or (650)494 3973 home
40 South Alcaniz St. (850)202 4416 office
Pensacola (850)202 4440 fax
FL 32502 (850)291 0667 cell
phayesAT-SIGNihmc.us http://www.ihmc.us/users/phayes (016)
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx (017)