Rick, (01)
Thanks for the comments. In my slides I put an extremely high
priority on supporting legacy data -- which, unfortunately,
includes RDF. I agree with the following point in your
citation [3] ( http://bit.ly/c3nxTa ): (02)
RM> Linked Data is THE platform for internet-scale collaboration. (03)
But in practice, what that implies is URLs (not even URIs)
intermixed with raw, zero-semantics data. RDF is just a
cumbersome kludge for expressing Comma Separated Values.
Even Tim B-L now uses the term 'Web Science' without the
word 'Semantics'. (04)
Google supports RDF in the same way that they support all
legacy formats. But for their own tool kit, they use JSON.
The basic JSON format is CSV enclosed in brackets: (05)
[A, B, C, D] (06)
But they can also add semantic tags to the items: (07)
Tag1: {Tag2:A, Tag3:B, Tag4:C, Tag5:D} (08)
That's still a low level of semantics, but it's a step up
from what most LOD systems are using. And it's vastly more
readable for humans and more efficient for computers than RDF. (09)
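The contrast between an untagged row and a tagged object can be sketched in a few lines of Python. (The tag names below are purely illustrative, not from any real schema.)

```python
import csv
import io
import json

# The same record as raw CSV: position is the only clue to meaning.
row = next(csv.reader(io.StringIO("A,B,C,D")))

# The same record with semantic tags, serialized as a JSON object.
tagged = {"Tag1": {"Tag2": row[0], "Tag3": row[1],
                   "Tag4": row[2], "Tag5": row[3]}}
print(json.dumps(tagged))
```

The tags carry only a low level of semantics, but a human or a program can now find a field by name instead of by position.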
Common Logic supports JSON, RDF, OWL, SQL, or any other popular
notation in an upward compatible way. It also accepts URIs
(or any other kind of identifiers) as names. (010)
RM> I still regularly attend Semantic Web community events. Over the
> past few years, as the community has grown the experience level
> of members has gone WAY down. The expanded market is generally
> good news and the folks showing up recently are smart and eager
> to learn, but are just getting their first exposure to semantics. (011)
Yes, but that's what you get when you expand the market before
you have tools that can demonstrate an advantage from semantics. (012)
Even the academic conferences have suffered. Many people have
observed that the quality of papers presented in the 1990s was
superior to the quality in the 2000s. Some academics get the
impression that any garbage written in OWL is an ontology. (013)
RM> It looks like RDF is getting some uptake. (014)
RDF is just a smoke screen, and the LOD servers accept absolutely
anything to package the URLs + zero-semantics data. They even
support spreadsheets and CSV on the same basis as RDF. That
implies that semantics has reached the lowest common denominator.
(Actually 0 is only acceptable as a numerator.) (015)
RM> ... Recently, the material that most presenters cover does
> not even include RDFS inferencing. That would be considered
> too advanced for today's mainstream audience. (016)
I would contrast that with relational DBs. I used to call SQL
the world's worst notation for first-order logic, but that was
before I saw RDF. But at least the SQL WHERE clause can represent
full FOL. Most programmers only learned a minimal amount of SQL
to extract whatever data they wanted, but the ones who needed
more gradually learned a much larger subset. (017)
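As an illustration of that point, a universally quantified query ("suppliers that supply every part") can be written in the WHERE clause as a double NOT EXISTS, which is just the FOL equivalence "for all x, P(x)" = "there is no x such that not P(x)". This sketch uses Python's sqlite3 with an invented toy schema:

```python
import sqlite3

# Toy schema (invented for illustration only).
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE supplier(sid TEXT);
    CREATE TABLE part(pid TEXT);
    CREATE TABLE supplies(sid TEXT, pid TEXT);
    INSERT INTO supplier VALUES ('s1'), ('s2');
    INSERT INTO part VALUES ('p1'), ('p2');
    INSERT INTO supplies VALUES ('s1','p1'), ('s1','p2'), ('s2','p1');
""")

# "Suppliers s such that FOR ALL parts p, s supplies p":
# the universal quantifier is encoded as a double NOT EXISTS.
rows = db.execute("""
    SELECT s.sid FROM supplier s
    WHERE NOT EXISTS (
        SELECT 1 FROM part p
        WHERE NOT EXISTS (
            SELECT 1 FROM supplies sp
            WHERE sp.sid = s.sid AND sp.pid = p.pid))
""").fetchall()
print(rows)  # only s1 supplies every part
```

Nothing at the RDF/CSV level of semantics can express that query directly; it requires the full quantificational power of FOL.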
In that sense, SQL succeeded in making FOL the world's most
widely used version of logic. The Semantic Web, by contrast,
took us back to the semantic level of CSV in the 1950s. (018)
RM> I understand that HETs is operationalized through a second
> order polymorphic lambda calculus with type equality conversions
> based on the Girard-Reynolds isomorphism. Its deductive system,
> at least in part, relies strongly on intuitionism and the Curry-
> Howard-Lambek correspondence. That sounds quite a bit different
> than FOL. (019)
It's a huge superset, most of which is irrelevant. I consider CL
(plus IKL for some purposes) to be a realistic superset that is
already an ISO standard. I have no objection to using tools that
are even more powerful, but I don't expect that extra power to
be used, at least not in the first few iterations. (020)
(And by the way, implementations that claim to use HOL really use
HOL in the same way as Common Logic: they use a first-order model
theory that permits quantification over functions and relations.) (021)
RM> Slide 81 says "Enable subject-matter experts to review, update,
> and extend their knowledge bases with little or no assistance from
> IT specialists." I'm not seeing much evidence that this will ever
> work. Maybe the SMEs I know aren't very smart. (022)
Some SMEs are very smart about their own subject, but they need
good tools. In fact, they're much better qualified to produce
a decent ontology about their subject than a typical programmer
who just learned OWL. (023)
In the 1990s, Doug Skuce at the U. of Ottawa got good results
with his ClearTalk system (search Google for "skuce cleartalk").
His daughter used it for a project in her 6th-grade class; he built
a Unix help facility with it, which his students who were learning
Unix both used and extended; and a couple of professors at the
U. of Ottawa built fairly nice KBs for their classes. But
such front ends only catch on when they're connected to back ends
that the users need to do their day jobs. (024)
At VivoMind, we've had very good results in getting our clients
to extend our basic ontology for applications that process raw,
untagged English texts. We have some tools for extracting the
key words and phrases for a domain-dependent ontology, and the
SMEs use CLCE to correct, refine, and extend that ontology. (025)
Getting the SMEs to extend their ontology is much easier than
requiring humans to add semantic tags to text, which is far more
onerous and much less likely to work. (026)
By the way, did you try that URL for getting FCA lattices out
of Roget's Thesaurus? Such automated tools are more likely to
work than anything that requires programmers to write ontologies.
See slide 73 or go to http://www.ketlab.org.uk/roget.html (027)
John (028)
_________________________________________________________________
Msg Archives: http://ontolog.cim3.net/forum/sio-dev/
Join Community: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/sio-dev/
Unsubscribe: mailto:sio-dev-leave@xxxxxxxxxxxxxxxx
Community Shared Files: http://ontolog.cim3.net/file/work/SIO/
Community Wiki:
http://ontolog.cim3.net/cgi-bin/wiki.pl?SharingIntegratingOntologies (029)