(where I have reddened the parameter
list). This is already processed nicely by all kinds of services, but the
various services are running structured object definitions in their own
vocabulary, as implemented by the HTTP CGI components, ActiveX, or even
custom processes tailored to specialty representations used in specific domains.
So why is the extra structure necessary?
It is all based on the (doubtful) premise that one URI references one resource,
and that everyone uses that same set of URIs. That could just as well be
implemented now, within the current structure, without inventing new layers of
processing and without adding new capabilities: just get everyone to agree to use
the exact same URI for every object, limiting the set of URIs to an
enumerable set that DNS servers can process. Why deepen the syntax of HTTP-based
communications? What benefit does that deepening bring that is so persuasive
to its adherents? And why would a business make its service available through the
SW when it could do so with its own server at no extra complexity or cost?
I’ve only seen the arguments in
favor of goodness (happy semantic example stories), but no persuasive arguments
have been made, AFAIAA, as to why the change in direction to RDF, OWL, etc.,
gets us there any faster than the existing XML practices.
Cost Tradeoff Justification URLs
Rich AT EnglishLogicKernel DOT com
9 4 9 \ 5 2 5 - 5 7 1 2
On Behalf Of Peter Yim
Sent: Wednesday, October 27, 2010 4:38 PM
To: [ontolog-forum]
Subject: Re: [ontolog-forum] Webs
Well done, Kingsley ...
thank you! =ppy
-- [RC] I agree
(bla bla, compete with database technology with simplicity..)
TimBL discussed how you publish (actually inject) Linked Data into the
Web by constructing hypermedia resources in the manner prescribed above. This
is a subtle, but extremely important point re. Linked Data comprehension,
which starts by understanding its place in a broader technology innovation
continuum that covers -- data access, data representation, data integration,
and data management.
I prefer to define Linked Data as hypermedia-based structured data.
Meaning: you represent a calendar using a structured data representation, e.g.,
using iCalendar syntax that produces a ".ics" resource
which may or may not be HTTP addressable, referenceable, or accessible. As
already outlined re. iCalendar, you could do the same using one of the syntaxes
associated with RDF, e.g., HTML+RDFa, RDF/XML, Turtle, N3, NTriples,
etc. You could also do the same thing using OData (an Atom+XML based markup
from Microsoft), GData (Atom+XML based markup from Google), or even a CSV file
(where you simply stick to 3-tuples plus use of "<" and
">" to indicate reference data, i.e., NTriples-like), etc.
2. What does this do more than a SQL query does on a
database? Semantically nothing, other than the fact that a SQL query
accesses database objects and this accesses web objects or stuff..
Here is what a SQL based RDBMS won't deliver, implicitly:
1. Reference values -- most relational DBMS engines don't support reference values
2. References that resolve to a structured data representation -- a Foreign Key
doesn't implicitly resolve to a relational table (or view) that projects a
union of all its Referents (primary keys and dependent columns); of course you can
code such functionality or implement an RDBMS hybrid, e.g. an Object-Relational
engine that delivers this feature via ref and deref functions as SQL extensions
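A small sketch of point 2, with an invented schema, using Python's stdlib sqlite3: in a plain SQL RDBMS a foreign key value is just an opaque value, and resolving it to the referent row is a join the programmer must write explicitly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Hypothetical schema, invented for this sketch
cur.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE event (id INTEGER PRIMARY KEY, summary TEXT, "
            "organizer INTEGER REFERENCES person(id))")
cur.execute("INSERT INTO person VALUES (1, 'Peter Yim')")
cur.execute("INSERT INTO event VALUES (10, 'Ontolog call', 1)")

# Selecting the foreign key gives only the bare value, not the referent
fk = cur.execute("SELECT organizer FROM event WHERE id = 10").fetchone()[0]

# Dereferencing it to the referent's data must be coded as an explicit join
name = cur.execute(
    "SELECT p.name FROM event e JOIN person p ON p.id = e.organizer "
    "WHERE e.id = 10").fetchone()[0]
print(fk, name)
```

Nothing in the engine turns `organizer` into the person it names; that binding lives only in the queries you write, which is the contrast being drawn with URI-based references below.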
Here is what Linked Data adds to the mix, implicitly:
1. Reference values -- via URIs (#2 of TimBL's meme with HTTP specificity, but
this can apply to any URI scheme if you have a custom resolver)
2. References that resolve to structured data representation(s) -- #3 of
TimBL's meme, which is biased towards RDF as the W3C standard syntax for
structured data representation and SPARQL as the mechanism for implicitly
binding HTTP URI based Entity/Object Names to Resource Addresses (URLs) that
resolve to Structured Data Representation(s).
The elegance of HTTP makes the representation of structured data negotiable.
A SQL query is a data manipulation language (DML) expression...
3. My question is: Is that all you want to do with the Semantic
Web? If so, maybe we are done.. TimBL told us
We want to Refer to Entities/Objects across a global InterWeb space. We want
these References to emulate pointers, which have existed since the inception of
computing.
When we de-reference a pointer, we want to have the option to choose (via
negotiation) how the structured data is represented. Our base (not sole) data
model is an Entity-Attribute-Value or Subject-Predicate-Object graph -- which
ultimately boils down to an FOL based conceptual schema.
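The pointer analogy can be sketched in pure Python with invented names: a dict plays the role of the global address space, URIs act as pointers into it, and dereferencing returns the structured record, including further references that can themselves be followed:

```python
# Hypothetical 'InterWeb' address space, invented for this sketch: URIs act
# as pointers into it, and deref looks the referent up instead of following
# a memory address.
space = {
    "http://example.org/people/ppy": {
        "type": "Person",
        "name": "Peter Yim",
        "knows": "http://example.org/people/kidehen",
    },
    "http://example.org/people/kidehen": {
        "type": "Person",
        "name": "Kingsley Idehen",
    },
}

def deref(uri: str) -> dict:
    """Dereference a URI-as-pointer to its structured representation."""
    return space[uri]

# Follow a reference from one entity to another, pointer-style
ppy = deref("http://example.org/people/ppy")
friend = deref(ppy["knows"])
print(friend["name"])
```

The records here are attribute-value maps keyed by an entity identifier, i.e., exactly the Entity-Attribute-Value shape named above, just without negotiation over how each record is rendered.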
Thus, the InterWeb (many resolvable URI schemes rather than HTTP solely)
becomes a distributed database (that plugs in data spaces hosting
heterogeneously shaped data). A database system that also ultimately exploits
and demonstrates the fact that Relational Tables and Relational Graphs all sit
atop the same conceptual schema (FOL based). A global space where I can browse
data, realize my own limitations, and then (if need be) send agents out on my
behalf to continue navigation and discovery, bearing in mind their understanding
of my preferences and human limitations.
But if you want to actually do something more
like processing and computing data in a complex manner, does this allow
us to do that? Nothing I have read so far tells me that it is
capable of doing so.
SPARQL can do that, while SPASQL (SPARQL + SQL hybrid) can do more.
When the world-view is known, a SQL RDBMS (at the current time) will trump a
pure SPARQL + RDF DBMS. There are benchmarks that have proven that.
When dealing with reality though, where world views manifest unpredictably and
data shape is volatile, a SPARQL + RDF DBMS will run rings around a SQL DBMS.
Thus, the optimal solution (as far as I know) is a hybrid solution, especially
one that combines RDBMS and Graph Model DBMS advances.
Let me tell a story. Oracle supported a Data Manipulation Language,
a Data Definition Language and a Data Control Language. But all the
processing code had to be written by the front end application, middleware,
etc. Later on they supported some event driven processing
using the concept of triggers and stored procedures.. This is a
procedural / event driven concept implemented using database
technology. (a little convoluted, but it competed with procedural approaches)
Yes, and the end product no matter how you cut it only works within the database.
There are procedural languages that actually allow complex computation.
Java is one of them.. Either one can explore the
concept of procedural languages that are already in use for web
development, or use the linked data concept and add on baggage later (like
triggers and stored procedures).
Yes, and the end product no matter what is language locked.
I think it is simplicity with sagging, heavy baggage attached.. that will
come later on..
No. Let's revisit TimBL's meme re. hypermedia-based structured data,
plus my subtle tweaks that put Linked Data back into a palatable innovation
continuum that reflects pre-Web reality and history:
1. Objects have Identity via URIs
2. URIs Resolve to Representations of their Referents
3. Data Object Representation is negotiable (so RDF and many other approaches,
rather than RDF solely) -- HTTP facilitates this elegantly
4. Data Representation is separate from its underlying Conceptual Schema
5. Hypermedia Resource construction should leverage the expanse of the InterWeb
via URIs when referring to related data.
1-5 will give us a dense web that lends itself to precision find (rather
than fuzzy search) and serendipitous discovery of relevant things based on
individual context preferences. This is what Linked Data truly enables, IMHO.
On 10/26/10 12:38 PM,
John F. Sowa wrote:
> There's an interesting new language and system designed for secure,
> distributed computing. The language, called Jif (Java + Information
> Flow), extends Java with "policies", and the system is called Fabric
> because "Fabric is more useful and more tightly connected than
> the Web". See below for references to Fabric, Jif, and related articles.
> But the main point I want to make in this note is the contrast
> between the methods for developing Fabric and the Semantic Web:
> 1. The SemWeb began with an inspiring, but rather vague speech
> by Tim B-L about adding semantics to the URIs of the WWW.
> At that level of detail, nobody could object.
> 2. The W3C, which met for the first time at the 1994 conference
> where Tim gave that speech, took charge of the design and
> development of the SemWeb.
> 3. Like any design by committee (cf. Fred Brooks' book),
> the SemWeb was pulled in different directions by experts with
> competing visions of the goals, technology, and methodology.
> 4. As a result, the only consensus on architecture was
> the familiar layer cake, which emphasized syntax over semantics.
> 5. The most widely used technology that came out of the effort
> was the lowest common denominator with the barest minimum of
> semantics: RDF.
> 6. The components above the RDF level have not been integrated
> with each other or with the mainstream of IT software, and
> very few IT developers have found any reason to use them.
> I don't know whether Jif and Fabric are going to be more successful,
> but their approach is the best way to develop a major new design:
> a small group doing focused research with prototype implementations
> to check how and whether the ideas work in practice.
> Doing advanced R&D in a small group (or "skunk works") has always
> been far more successful than design by committee. As a classic
> example that succeeded beyond the designers' wildest dreams, see
> the Oak project at Sun, which became Java:
> As Yogi B. said, "Prediction is very hard, especially about the future."
> But I don't believe that any of the current components of the SemWeb
> are going to survive without a total overhaul or complete replacement.
> Instead, we can expect some small group working in skunk-works mode
> to produce a truly Semantic Fabric.
I agree with the general "skunk works" theme 100%, but Java isn't a
great example today IMHO. Lots of bloat in Java land (codebase and ecosystem).
Linked Data (hypermedia based structured data) and the Linked Open Data
community are "skunk works" examples that emerged from the larger
somewhat maligned Semantic Web project :-)