
Re: [ontolog-forum] How de facto standards are created

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Melvin Carvalho <melvincarvalho@xxxxxxxxx>
Date: Tue, 18 Jun 2013 17:13:12 +0200
Message-id: <CAKaEYhLmYPaJhBHMNZJRpmCTJNqx0j-qgeYdUJ3SVjDxTLKBBw@xxxxxxxxxxxxxx>



On 17 June 2013 15:32, John F Sowa <sowa@xxxxxxxxxxx> wrote:
One of the problems of the Semantic Web and of ontology projects
in general is that the official standards have an extremely slow
adoption rate.  By comparison,

  1. As soon as Tim B-L and his small group of implementers developed
     the WWW as a means of sharing research papers, physicists at
     every university and R & D center in the world adopted it.
     Academics in other fields of science and engineering followed.

  2. When the Mosaic project at the U. of Illinois implemented a browser
     that integrated pictures with text, it became an instant hit. Early
     adopters told their friends, and everybody who was connected to the
     Internet downloaded it.  Commercial companies saw the adoption
     rate and followed quickly.

  3. The incompatibilities of JavaScript among vendors meant that
     developers could not design complex code that would run on multiple
     browsers -- even on different versions from the same vendor.  Then
     ECMAScript harmonized the many versions, and the vendors adopted it.
     But very few developers chose to use the more complex features.

  4. Then Google developed a dynamic way of using JavaScript in Gmail
     and Google Maps, and Jesse James Garrett gave it the catchy name
     AJAX (Asynchronous JavaScript And XML) in 2005:
     http://www.adaptivepath.com/ideas/ajax-new-approach-web-applications
     Then the adoption rate by developers grew exponentially.
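
For readers who never used the technique: the essence of AJAX is a
background HTTP request issued from script, with the response spliced
into the live page instead of forcing a full reload.  A minimal sketch
in TypeScript -- the URL "/quote" and the element id "quote" are
invented for illustration, not taken from Gmail or Google Maps:

    // AJAX in one function: request data asynchronously, then patch
    // a single element in place while the rest of the page stays put.
    function loadQuote(): void {
      const xhr = new XMLHttpRequest();
      xhr.open("GET", "/quote");   // asynchronous; no page reload
      xhr.onload = () => {
        if (xhr.status === 200) {
          const el = document.getElementById("quote");
          if (el) el.textContent = xhr.responseText;
        }
      };
      xhr.send();
    }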

Points #1 and #2 show that de facto standards result from "killer apps"
that are rapidly adopted and imitated.  The W3C was created four years
*after* Tim B-L released his original software.  Point #3 shows that
official organizations have an important role to play.  But point #4
confirms that a "killer app" is necessary to get attention.

Last week, an article discussed the role that Apple is playing in
getting attention for or against proposed standards:

    http://techcrunch.com/2013/06/12/nfc/

Some excerpts:

> Near Field Communications’ evangelists have been trying to get smartphone
> owners to share stuff by bumping and grinding their phones for years. And
> progress has been painful, to put it mildly.

That result is typical for a "proactive" standard that is not based on
an earlier de facto standard.

> The latest setback for the NFC-pushers’ cause comes courtesy of Apple.
> During Monday’s WWDC keynote, Tim Cook & Co. were cracking jokes at the
> tech’s expense as they previewed a feature coming in iOS 7 that does
> the job of NFC without any of the awkwardness of NFC...
>
> Instead, it’s adding AirDrop to iOS 7, which uses peer-to-peer Wi-Fi
> to allow content to be shared to nearby iOS 7 devices without having
> to physically tap anything together...
> “No need to wander around the room bumping your phone.”

Summary:

> Apple often talks about how the things it chooses *not* to do are as
> defining as the things it does. Well Apple doesn’t do NFC. And that
> speaks volumes. Don’t forget, NFC is not new. It’s been kicking around
> in phones since forever. And Apple still reckons it sucks.

Historical note:  After leaving Cyc, Guha went to Apple, where he
designed the first version of what became RDF.  But Apple did not
adopt it for any products.  Then Guha went to Netscape, where he
worked with Tim Bray to develop the XML-based version, which the
W3C adopted.

During the 2000s, Nokia poured millions of euros into R & D for RDF,
OWL, and other technology based on Semantic Web standards.  But Apple
ignored the SW.  So did Google, Microsoft, etc.


I just came across these nuggets:

RFC 1958        Architectural Principles of the Internet       June 1996

3. General Design Issues

   3.1 Heterogeneity is inevitable and must be supported by design.
   Multiple types of hardware must be allowed for, e.g. transmission
   speeds differing by at least 7 orders of magnitude, various computer
   word lengths, and hosts ranging from memory-starved microprocessors
   up to massively parallel supercomputers. Multiple types of
   application protocol must be allowed for, ranging from the simplest
   such as remote login up to the most complex such as distributed
   databases.

   3.2 If there are several ways of doing the same thing, choose one.
   If a previous design, in the Internet context or elsewhere, has
   successfully solved the same problem, choose the same solution unless
   there is a good technical reason not to.  Duplication of the same
   protocol functionality should be avoided as far as possible, without
   of course using this argument to reject improvements.

   3.3 All designs must scale readily to very many nodes per site and to
   many millions of sites.

   3.4 Performance and cost must be considered as well as functionality.

   3.5 Keep it simple. When in doubt during design, choose the simplest
   solution.

   3.6 Modularity is good. If you can keep things separate, do so.

   3.7 In many cases it is better to adopt an almost complete solution
   now, rather than to wait until a perfect solution can be found.

   3.8 Avoid options and parameters whenever possible.  Any options and
   parameters should be configured or negotiated dynamically rather than
   manually.

   3.9 Be strict when sending and tolerant when receiving.
   Implementations must follow specifications precisely when sending to
   the network, and tolerate faulty input from the network. When in
   doubt, discard faulty input silently, without returning an error
   message unless this is required by the specification.

   3.10 Be parsimonious with unsolicited packets, especially multicasts
   and broadcasts.

   3.11 Circular dependencies must be avoided.

      For example, routing must not depend on look-ups in the Domain
      Name System (DNS), since the updating of DNS servers depends on
      successful routing.

   3.12 Objects should be self describing (include type and size), within
   reasonable limits. Only type codes and other magic numbers assigned
   by the Internet Assigned Numbers Authority (IANA) may be used.

   3.13 All specifications should use the same terminology and notation,
   and the same bit- and byte-order convention.

   3.14 And perhaps most important: Nothing gets standardised until
   there are multiple instances of running code.
ftp://ftp.isi.edu/in-notes/rfc1958.txt
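
Two of these principles translate directly into code.  Principle 3.9
is Postel's robustness principle, and 3.12 is the classic
type-length-value idea.  Minimal sketches in TypeScript follow; the
key:value line format and the type codes are invented purely for
illustration:

    // 3.9, sending side -- be strict: emit exactly one canonical
    // form (lowercase keys, a single colon, one field per line).
    function send(fields: Map<string, string>): string {
      return [...fields]
        .map(([k, v]) => `${k.toLowerCase()}:${v}`)
        .join("\n") + "\n";
    }

    // 3.9, receiving side -- be tolerant: accept odd case, stray
    // spaces, and blank lines, and silently drop lines that cannot
    // be parsed rather than rejecting the whole message (no error
    // message unless the specification demands one).
    function receive(wire: string): Map<string, string> {
      const fields = new Map<string, string>();
      for (const line of wire.split(/\r?\n/)) {
        const m = line.match(/^\s*([A-Za-z-]+)\s*:\s*(.*?)\s*$/);
        if (m) fields.set(m[1].toLowerCase(), m[2]);
      }
      return fields;
    }

    // 3.12 -- a self-describing object carries its own type code and
    // size, so a receiver can skip unknown types safely.  The 1-byte
    // type and length fields are a toy layout (payloads < 256 bytes);
    // real type codes would come from an IANA-style registry.
    function encodeTLV(type: number, value: Uint8Array): Uint8Array {
      const out = new Uint8Array(2 + value.length);
      out[0] = type;           // type code
      out[1] = value.length;   // size
      out.set(value, 2);       // payload
      return out;
    }

Note that the sender never exploits the receiver's leniency: strict
output plus tolerant input is what lets heterogeneous implementations
(principle 3.1) interoperate without coordinated upgrades.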
 

John

_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J


