
Re: [ontolog-forum] AJAX vs. the Giant Global Graph

To: edbark@xxxxxxxx, "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Kingsley Idehen <kidehen@xxxxxxxxxxxxxx>
Date: Tue, 30 Mar 2010 19:10:15 -0400
Message-id: <4BB284D7.4040901@xxxxxxxxxxxxxx>
Edward Barkmeyer wrote:
> John,
>
> I think we are largely in agreement.  Multiple technologies for 
> aggregating data of various kinds and creating information for a 
> specific viewpoint are a good thing.  AND... when we have multiple such 
> technologies, we have to be able to integrate their outputs or their 
> agents, to create the next level of information.
>
> Some minor interjections...
>
> you wrote:
>   
>> EB> The Semantic Web approach is to capture the referential knowledge
>>  > formally and derive it in a trusted way.
>>
>> JS>I fully approve of that goal, but there are many more issues involved,
>> than just the use of XML and URIs by themselves.  The fundamental issues
>> are semantic and pragmatic.  Syntactic mechanisms, by themselves, can be
>> more of an obstacle than a foundation.
>
> Semantic and pragmatic issues are common to both approaches.  The formal 
> capture of semantics depends on the primitives, and John has long been a 
> proponent of tested upper ontologies.  OTOH, the AJAX conversion of raw 
> data to useful information in a view also depends on the assignment of 
> semantics to the raw data elements, and that rarely benefits from any 
> kind of clear specification.  (NIST has a whole missionary effort with 
> respect to the semantics of measurements.)
>
> Syntactic mechanisms, however, are critical to both methods -- syntax 
> regulates the form of expression, and expression is the only means of 
> conveying information.  I don't think of syntax as being foundational in 
> any sense, but it is only an obstacle when it is inadequate to the task 
> of conveying the intended information.  I think what John has in mind is 
> that syntax is often constrained to be simple enough to support a given 
> processing algorithm, and that objective conflicts with its ability to 
> convey the intended information in some, perhaps most, uses.  Natural 
> languages, OTOH, have very complex and perhaps ill-defined syntax, which 
> makes the problem of interpreting them much more difficult.  Put another 
> way, the lack of well-defined syntax is always an obstacle.
>
>   
>> EB> I fully agree with the idea that we need to "combine [RDF-annotated
>>  > information] with other kinds of information", but there are two ways
>>  > to do that -- derive the semantic markup for the other kinds, or link
>>  > them by statistical association.
>>
>> JS> Depending on how you count and what you count, there could be many more 
>> ways.
>
> Agreed. I overstated that.  We were discussing only two approaches -- 
> AJAX and GGG.  And upon reflection, I don't think the AJAX approach is 
> intrinsically bound to the Google mechanism for making linkages.  In 
> fact, GoogleMaps is probably a counterexample -- there is a conceptual 
> schema that is used by the interpreters to mark up the data.
>
>   
>> EB> All the data that is used by AJAX methods is provided by specific
>>  > HTTP-accessible services on the servers.  The data is web-accessible.
>>
>> JS> That is trivially true for that part of the processing that is done
>> in JavaScript on the client side.  But there is no such restriction
>> on the server, which can do anything with any resource it owns.
>
> My point was that this is not only true of the "server". 
> We have a general architecture for the AJAX process as a set of functions:
>  - obtain the source data sets
>  - convert each source data set to a reference form
>  - convert the reference forms to a working repository of 'integrated' 
> information
>  - develop the view from the repository information
>  - display the view in the browser
>
> We can deploy these functions in any number of 'component' 
> configurations.  The functions don't necessarily map 1-to-1 to 
> components.  I think there are only two constraints:
>  - any data set (including schemas) that is not local to the 
> 'integrator' must be web-accessible
>  - the view display must occur at the client node
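The five functions above can be sketched as a minimal pipeline. This is only an illustration of how the stages compose; all function names and sample data here are hypothetical, not part of any standard AJAX API.

```javascript
// A minimal sketch of the five AJAX-style functions, composed as a
// pipeline. All names and data are hypothetical illustrations.

// 1. Obtain the source data sets (stubbed; a real client would use
//    XMLHttpRequest/fetch against HTTP-accessible services).
function obtainSources() {
  return [
    { format: "csv", body: "id,temp\n1,20\n2,25" },
    { format: "json", body: '[{"id": 3, "temp": 30}]' },
  ];
}

// 2. Convert each source data set to a common reference form.
function toReferenceForm(source) {
  if (source.format === "json") return JSON.parse(source.body);
  // naive CSV parsing, just enough for the sketch
  const [header, ...rows] = source.body.split("\n");
  const keys = header.split(",");
  return rows.map((row) =>
    Object.fromEntries(row.split(",").map((v, i) => [keys[i], Number(v)]))
  );
}

// 3. Merge the reference forms into a working repository of
//    'integrated' information.
function integrate(referenceForms) {
  return referenceForms.flat();
}

// 4. Develop a view from the repository (here: average temperature).
function developView(repository) {
  const temps = repository.map((r) => Number(r.temp));
  return {
    count: temps.length,
    avgTemp: temps.reduce((a, b) => a + b, 0) / temps.length,
  };
}

// 5. Display the view (in a browser this step would update the DOM).
const view = developView(integrate(obtainSources().map(toReferenceForm)));
console.log(view);
```

Nothing in the sketch forces a 1-to-1 mapping of functions to components; steps 2-4 could run on a server, a middle-tier integrator, or in client-side JavaScript.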
>
> As best I can tell, the first four functions apply equally well to 
> Semantic Web architectures.  The only problem is that the SemWeb 
> apostles assume that the 'integrator' and 'view development' functions 
> are implemented over a few well-defined reasoning technologies, notably 
> the OWL extended description logics.  And the current state of the art 
> is that the conversion of source data to reference form requires static 
> markup.  Thus:
>
>   
>> EB> The idea of the Semantic Web technologies is that they are
>>  > supporting technologies for any of several such architectures.
>>  > They require some agent to mark up the...
>>
>> JS> Yes.  They sweep all the hard stuff under the rug -- i.e., they
>> leave it to some external "agent", whose semantics is outside
>> the specifications and recommendations of the W3C.
>
> But AJAX does the same.  It tells us nothing whatever about how to build 
> an adapter -- neither what the source syntax might be, nor what the 
> reference form might be, beyond that it is XML, which is only slightly 
> better than defining a character set. 
>
> The AJAX idea is that adapters can do the conversion or markup on the 
> fly, and that usually requires a fairly simple source syntax and no 
> requirement for the adapter to process the source data as a body.  That 
> is, "the hard stuff" isn't that hard. John's point is that there is a 
> lot of web-accessible data like that (and a lot more that could be), and 
> the adapters can also filter for a target application.  The Sem Web 
> technology is really designed for situations in which the source syntax 
> is complex (like natural language) and may need to be processed as a 
> body (a text corpus) in order to get proper markup.
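The on-the-fly adapter described above can be made concrete with a small sketch. The source syntax, vocabulary, and filter below are hypothetical; the point is only that each record is handled independently (no need to process the source as a body) and that the adapter can also filter for a target application.

```javascript
// Hypothetical sketch of an on-the-fly adapter for a simple source
// syntax ("key=value; key=value" lines).

// Parse one record into an object; each line stands alone.
function parseLine(line) {
  return Object.fromEntries(
    line.split(";").map((pair) => pair.split("=").map((s) => s.trim()))
  );
}

// Mark up a record with a target vocabulary (names are illustrative).
function markup(record) {
  return {
    "@type": "Observation",
    station: record.station,
    celsius: Number(record.temp),
  };
}

// Adapter = parse + filter + markup, one record at a time.
function adapt(lines, predicate) {
  return lines.map(parseLine).filter(predicate).map(markup);
}

const source = ["station=A; temp=21", "station=B; temp=-3", "station=C; temp=18"];
// Filter for the target application: only above-freezing observations.
const warm = adapt(source, (r) => Number(r.temp) > 0);
console.log(warm);
```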
>
> What I was trying to do, however clumsily, was to tease out the 
> requirements, the assumptions, and the net architectural differences in 
> what John sees as 'complementary technologies'.  I think we are finally 
> getting there.
>
>   
>> As I've said many times, the SemWeb is too provincial.  Other
>> people have said that it suffers from a Not-Invented-Here syndrome.
>> They've carved out a little niche and ignored what goes on in
>> all the hardware and software that pump out web pages.
>
> That is all true.  But at the same time, they are trying to do "the hard 
> stuff", and that is something new.  It is not just recycling 1980s
> distributed data technologies using Java and XML.  There is a lot to be 
> said for reengineering technologies that work.  They aren't 
> breakthroughs, but they can have enormous impact.  As Jared Diamond 
> observed, it was the _re_invention of the wheel by people who had 
> domesticated draft animals that made the difference. 
>
> The SemWeb vision is not 'provincial' -- there is even more text out 
> there than simply structured data, and the SemWeb is a means of 
> improving its accessibility as information, either as markup or as an 
> interpretable rendering.  The problem with the SemWeb is that it is an 
> early wheel, and the draft animals are human, so all we can make is 
> wheelbarrows.  But we are beginning to see rickshaws, and perhaps we 
> will have the breakthrough that bypasses the horse for the electric motor.
>
>   
>> Bottom line:  The semantics of the Web is intimately connected
>> with the semantics of every system connected to the Web.  You
>> can't have a web-only semantics or a web-only science.
>
> Amen. 
>
> But the converse is not true.  The semantics of systems connected to the 
> Web is not necessarily intimately connected to the "semantics of the 
> Web"; many looser couplings are possible.  I would in fact argue that 
> Microsoft's effort to make the system-to-Internet relationship seamless 
> was a conceptual mistake (not just a technical mess).  I don't come from 
> New England, and I know Robert Frost used the adage in irony, but I 
> believe that "good fences make good neighbors".
>
> -Ed
>
>   
John / Ed,    (01)

Maybe we can summarize as follows:    (02)

1. We want smart, localized data-processing capability as a feature 
of Web User Agents such as browsers.
2. We want multiple representations of structured data -- a need on 
the rise now that the EAV graph-model underpinnings of OData, GData, and 
RDF-based Linked Data are becoming clearer.
3. Network-oriented Data Objects should have resolvable Identifiers; via 
these Identifiers we can de-reference their structured representations.    (03)
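Points 2 and 3 can be sketched together: an EAV (entity-attribute-value) store in which every data object is named by an HTTP URI, plus a stub showing that de-referencing such an identifier is, conceptually, an HTTP GET with content negotiation. The URIs, attributes, and media types below are illustrative assumptions, and the de-reference step only builds the request rather than performing real network I/O.

```javascript
// A tiny EAV (triple) store: each row is (entity, attribute, value),
// and entities are named by resolvable HTTP URIs. All data is made up.
const triples = [
  ["http://example.org/person/1", "name", "Ada"],
  ["http://example.org/person/1", "knows", "http://example.org/person/2"],
  ["http://example.org/person/2", "name", "Grace"],
];

// Collapse the triples about one identifier into a structured
// representation of that data object.
function describe(entity) {
  const out = { "@id": entity };
  for (const [e, a, v] of triples) if (e === entity) out[a] = v;
  return out;
}

// De-referencing the identifier is conceptually an HTTP GET with
// content negotiation; this offline sketch only builds the request.
function dereferenceRequest(uri) {
  return {
    method: "GET",
    url: uri,
    headers: { Accept: "text/turtle, application/ld+json" },
  };
}

console.log(describe("http://example.org/person/1"));
console.log(dereferenceRequest("http://example.org/person/2"));
```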


--     (04)

Regards,    (05)

Kingsley Idehen       
President & CEO 
OpenLink Software     
Web: http://www.openlinksw.com
Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca: kidehen     (06)






_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx    (07)
