Re: [ontolog-forum] Current Semantic Web Layer Cake

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Frank Guerino <Frank.Guerino@xxxxxxxxxxxxxx>
Date: Sat, 04 Aug 2007 20:57:17 -0400
Message-id: <C2DA98AD.100AA%Frank.Guerino@xxxxxxxxxxxxxx>
Hello All,

I’ve been watching this thread for a while and figured I’d throw in some extra info.

My background is that I used to run Architecture and Engineering in some of the largest and best-known financial firms in the world.  I now run a firm that builds Software as a Service (SaaS) solutions that wrap up Web 2.0, Semantic Web (Web 3.0), SOA, etc. to deliver solutions to enterprises through the Internet.  In light of this, I hope my contributions prove useful.

The Definition of Semantic Web

First, for those who may not be familiar with what the Semantic Web (a.k.a. Web 3.0) is: it is the term coined by Tim Berners-Lee, the man who invented the World Wide Web.

The Semantic Web (or Web 3.0) is a place where machines can read Web pages, very much the same way humans read web pages (processing data and information, interpreting context, and creating knowledge). The Semantic Web is a place where search engines and software agents can more effectively crawl, index, understand, and interact with the Net, and find, manipulate, and understand what we're looking for. It's based on a set of standards, such as the Resource Description Framework (RDF) and the Web Ontology Language (OWL). Fundamentally, it implies that the web, itself, will act as an endless database that is linked by many systems that know how to leverage this structure and interact with each other.
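To make the "endless database" idea concrete, here is a minimal sketch of the triple model that RDF is built on, using plain Python tuples rather than any RDF library (the URIs and predicates are invented for illustration):

```python
# RDF's core idea: all data reduces to (subject, predicate, object) triples.
# Example triples with made-up URIs; a real system would use an RDF store.
triples = [
    ("http://example.org/alice", "knows",   "http://example.org/bob"),
    ("http://example.org/alice", "worksAt", "http://example.org/acme"),
    ("http://example.org/bob",   "worksAt", "http://example.org/acme"),
]

def match(triples, s=None, p=None, o=None):
    """Return every triple matching the given pattern (None = wildcard)."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Who works at Acme?" -- the kind of query a crawler or agent could run
# across many sites if they all exposed data this way.
workers = match(triples, p="worksAt", o="http://example.org/acme")
print([t[0] for t in workers])
# → ['http://example.org/alice', 'http://example.org/bob']
```

Because every site's data reduces to the same triple shape, independent systems can merge and query each other's data without agreeing on table schemas up front, which is exactly the "web as one linked database" idea above.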

NOTE: It is also assumed that Web 3.0 will incorporate all the traits of Web 2.0, as defined by O'Reilly in the article: http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html

How this all applies to the so-called “Layer Cake”...

Depending on which Chief Architects, Enterprise Architects, CIOs, and CTOs you talk to, there are mixed feelings about whether or not such a vision (Web 3.0/Semantic Web) will actually come to fruition in our lifetime.  The “layer cake” is a very simplified description of a vast and extremely complicated environment that is composed of endless permutations of standards, best practices, protocols, and custom solutions.  The reality is that it is arguably oversimplified.  In order for the Semantic Web vision to come to fruition, one of two things must occur:

  1. The world must find a way to leverage all of the above simultaneously (highly improbable), or
  2. The world must consolidate to a single set of standards, protocols, etc. that are adopted by all (also highly improbable, but slightly more probable than #1).

#1 is highly improbable simply because most vendors and individuals will continue to build new solutions and “evolve”, making the landscape even more complicated and cluttered than it already is today.  Also, the existing HTML landscape that makes up the majority of the web is composed of content that is far too “unstructured” to serve, in its raw form, as an effective data backbone for inter-system communications.

#2 is also highly improbable because, for consolidation to occur, the standards, protocols, etc. that supersede “everything” in place today will have to find a way to penetrate the landscape in an almost viral manner, as HTML and HTTP did to dominate the landscape.  Since history shows that it has happened before, there is a foundation for the belief that it can happen again.


In my own opinion, and based on our experiences (mine and those of many of the peers I interact with), I think #2 can happen, but I highly doubt it will happen with the current Semantic Web standards that are in place today.  I say this because we use these standards (and/or pieces of them) in what we offer through our own firm.  We can say that they are both incomplete and somewhat flawed (yet better than nothing).  We also see the reality that most vendors don’t work and play nice with each other, and we see that most people trying to solve the problem are trying to solve a non-traditional problem using traditional methodologies.

A very interesting thing to me is that most people are off trying to solve things like the relationship description formats, the definition of what an ontological structure should be, the protocols, etc., but we see no one actually building the base dictionary that will be the foundation for the language.  It’s almost as if people believe the dictionary will “magically” show up one day, or that someone will create a system that will synthesize the dictionary from scratch.  Based on our own research and implementation, I believe that the dictionary is the very foundation for success.
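The role such a base dictionary would play can be sketched very simply: it maps the many predicate names that different systems invent onto one canonical concept, so that independent data sources agree on meaning before exchanging data.  Everything below (the vocabulary URIs, the term names) is a hypothetical illustration, not an existing standard:

```python
# Hypothetical "base dictionary": many system-specific terms map to one
# canonical concept URI. Without this shared layer, two systems can
# exchange syntactically valid triples and still not understand each other.
DICTIONARY = {
    "employer": "http://example.org/vocab/worksAt",
    "works_at": "http://example.org/vocab/worksAt",
    "worksFor": "http://example.org/vocab/worksAt",
    "knows":    "http://example.org/vocab/knows",
    "friendOf": "http://example.org/vocab/knows",
}

def normalize(predicate):
    """Resolve a system-specific predicate to its canonical concept URI,
    passing unknown terms through unchanged."""
    return DICTIONARY.get(predicate, predicate)

# Two systems using different terms are now talking about the same thing.
print(normalize("works_at") == normalize("worksFor"))
# → True
```

The hard part, of course, is not the lookup table but agreeing on its contents, which is exactly the point being made above: nobody is building that shared foundation.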

We ourselves, at TraverseIT, implement pieces of the Semantic Web in our own platform, and it definitely adds some value (or we wouldn't do it). However, how much value it adds is still far short of the master vision.  We do see “potential” for a grander version of what we jokingly call “SkyNet” (for those of you who remember “The Terminator”), where we can tie independent instances of our implementation together to form a unified and dynamic “net”, where data and communications can be shared across independent instances.  If interested, you can see some of the things we do, like a few of our advanced visualizations using RDF as the baseline, at: http://www.TraverseIT.com/TraverseITUIHome.html.

Will other companies jump in? I don't know for sure. I can say that if you follow the definition of the Semantic Web, you will find very few enterprises formally adopting the standards. I think you will find, over time, that more companies building solutions on the web will follow Web 3.0 in spirit than to the letter. I believe the biggest issue is that there are just so many systems in the world that are not built to be interoperable, and it is highly unlikely they will ever be converted. A bigger problem is that there is no structure to the content on web pages themselves, so for them to be useful data sources, they have to be "understood" by the systems that leverage them. Realistically, the ability to understand the data/information/context/etc. in web pages will have to come either from proactive structuring of pages "before" publishing, which I don't see humans doing, or from natural language tools that can determine such things from what they read, and it doesn't look like the software industry is anywhere close to this yet.
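The "proactive structuring before publishing" point can be illustrated with a small stdlib-only sketch.  The two HTML snippets below describe the same fact; the second carries machine-readable annotations in the spirit of RDFa/microformats (the `data-*` attribute names are invented here, not a real standard).  A simple parser can extract a triple from the annotated page but gets nothing from the plain one:

```python
from html.parser import HTMLParser

# Two hypothetical snippets stating the same fact. The first is typical
# unstructured HTML; the second is annotated for machines (attribute
# names invented for this illustration).
UNSTRUCTURED = "<p>Alice has worked at Acme since 2005.</p>"
ANNOTATED = ('<p data-subject="Alice" data-predicate="worksAt" '
             'data-object="Acme">Alice has worked at Acme since 2005.</p>')

class FactExtractor(HTMLParser):
    """Collect (subject, predicate, object) triples from annotated tags."""
    def __init__(self):
        super().__init__()
        self.facts = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if {"data-subject", "data-predicate", "data-object"} <= a.keys():
            self.facts.append(
                (a["data-subject"], a["data-predicate"], a["data-object"]))

for page in (UNSTRUCTURED, ANNOTATED):
    extractor = FactExtractor()
    extractor.feed(page)
    print(extractor.facts)
# → []  (nothing machine-readable in the unstructured page)
# → [('Alice', 'worksAt', 'Acme')]
```

Extracting the same fact from the unstructured version would require natural language understanding, which is the much harder road the paragraph above is pointing at.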

Anyhow, I hope this helps.

My Best,

Frank Guerino, CEO
908-294-5191 Cell

Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx
