ontolog-forum

Re: [ontolog-forum] Current Semantic Web Layer pizza (was ckae)

To: "[ontolog-forum] " <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Dennis Thomas <dlthomas@xxxxxxxxxxxx>
Date: Sun, 16 Sep 2007 16:00:39 -0700
Message-id: <D51FEE5E-7D20-40BC-B69E-07237DC26684@xxxxxxxxxxxx>
Hello Bill,

Yes, I am aware that precomputation is being used.  It makes sense.  I am not aware of the full extent of its use, however, or of the various situations where it is being applied.  I appreciate the link to memoization and will take a look.

I brought up the concepts of declarative and pre-computation to help distinguish our approach from others, but I by no means believe that those two attributes by themselves make our product unique.

I appreciate the feedback.  

Dennis  


On Sep 16, 2007, at 3:20 PM, Bill Andersen wrote:




[DLT] When a command is executed, an algorithm conducts a search of a data source generated by a database, application, utility, etc.  It then pulls the requested data into memory for processing.  The CPU grabs the data and compiles the required response, then outputs the result to be displayed.  Every time a command is executed, the same process is repeated and the same data is retrieved every time it is requested - over and over again.  You know, like garlic pizza (formerly garlic cake).  Add RDF.  Add OWL.  Add, add, add, and you have a multi-crust pizza with built-in layers of inefficiency.

Dennis,

You are aware, aren't you, that many of the inference engines implemented for semantic web languages do precomputation exactly as you claim your unique product does?  Databases and other systems do this as well, and whether or not one does it often comes down to an engineering tradeoff between the workload the system is under and the required query / update performance.  Typically, if your queries need to go really fast, you pay a higher price at update time to precompute the things your expected queries are likely to need.  Contrariwise, if your updates need to be fast (lots of real-time data, say), then you can make them fast, but not with a bunch of precomputation.
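
To make that tradeoff concrete, here is a toy sketch in Python (the class and method names are made up purely for illustration), contrasting paying at update time against paying at query time:

# Toy illustration of the update/query tradeoff (hypothetical names).

class EagerTotal:
    """Pays at update time: keeps a precomputed total, so queries are O(1)."""
    def __init__(self):
        self.values = []
        self.total = 0            # materialized result, maintained on every update

    def update(self, x):
        self.values.append(x)
        self.total += x           # extra work now ...

    def query_total(self):
        return self.total         # ... so this answer is instant

class LazyTotal:
    """Pays at query time: updates are cheap, queries recompute from scratch."""
    def __init__(self):
        self.values = []

    def update(self, x):
        self.values.append(x)     # nothing precomputed

    def query_total(self):
        return sum(self.values)   # full scan every time it is asked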

As I mentioned in my last note, there is no groundbreaking discovery here.  It is called memoization / caching / precomputation and any undergraduate in computer science would be expected to know it.  
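
For anyone who wants the two-minute version, here is the textbook memoization pattern in a few lines of Python (Fibonacci is just the usual classroom example, not anything taken from your product):

_cache = {}

def fib(n):
    """Naive recursive Fibonacci, memoized by hand with a dictionary."""
    if n in _cache:
        return _cache[n]          # already computed: just look it up
    result = n if n < 2 else fib(n - 1) + fib(n - 2)
    _cache[n] = result            # remember it for next time
    return result

print(fib(100))   # 354224848179261915075, computed once and then reused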

Here's an excerpt from http://en.wikipedia.org/wiki/Memoization ...

The term "memoization" was coined by Donald Michie in 1968 [1] and is derived from the Latin word memorandum (to be remembered), and thus carries the meaning of turning [the results of] a function into something to be remembered. While memoization might be confused with memorization (because of the shared cognate), memoization has a specialized meaning in computing.

See also

http://en.wikipedia.org/wiki/Caching



_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx
