
Re: [ontolog-forum] Current Semantic Web Layer pizza (was ckae)

To: "[ontolog-forum] " <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Randall R Schulz <rschulz@xxxxxxxxx>
Date: Sun, 16 Sep 2007 15:44:37 -0700
Message-id: <200709161544.37142.rschulz@xxxxxxxxx>
On Sunday 16 September 2007 15:20, Bill Andersen wrote:
> > [DLT] When a command is executed, an algorithm conducts a search of
> > a data source generated by a database, application, utility, etc.
> > It then pulls the requested data into memory for processing. ...
>
> Dennis,
>
> You are aware, aren't you, that many of the inference engines
> implemented for semantic web languages do precomputation exactly as
> you claim that your unique product does?   Database and other systems
> also do this and whether or not one does it often involves an
> engineering tradeoff between the workload the system is under and
> required query / update performance.  Typically, if your queries need
> to go really fast, then you pay a higher price at update to
> precompute things likely to be needed by expected queries.
> Contrariwise, if your updates need to be fast (lots of real time
> data, say) then you can do that fast, but not with a bunch of
> precomputation.    (01)

In Cyc, implicational axioms are marked "forward" or "backward." Forward 
rules are used to pre-compute the ground consequences inferable via each 
such rule. These consequences are updated incrementally whenever other KB 
content that may interact with a forward axiom is changed (added, removed, 
or modified); Cyc's content indexing is extensive enough to make this 
practical. Such pre-computed consequence atoms are distinctly flagged in 
the Cyc UI.    (02)
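
A minimal sketch of the general idea (this is only an illustration in 
Python, not Cyc's actual machinery or API): forward rules are run to a 
fixpoint whenever a fact is asserted, so their ground consequences are 
already sitting in the KB and are flagged as derived, while backward rules 
are consulted only at query time.

    class KB:
        def __init__(self):
            self.facts = set()     # asserted ground atoms
            self.derived = set()   # pre-computed (forward) consequences
            self.forward = []      # rules applied at assert time
            self.backward = []     # rules applied at query time

        def assert_fact(self, fact):
            self.facts.add(fact)
            self._run_forward()    # incrementally update consequences

        def _run_forward(self):
            changed = True
            while changed:         # apply forward rules to a fixpoint
                changed = False
                known = self.facts | self.derived
                for rule in self.forward:
                    for atom in rule(known) - known:
                        self.derived.add(atom)   # flagged as derived
                        changed = True

        def ask(self, goal):
            if goal in self.facts | self.derived:
                return True        # answered from pre-computed content
            # otherwise fall back to query-driven (backward) inference
            return any(rule(goal, self.facts | self.derived)
                       for rule in self.backward)

Here each forward rule is assumed to be a callable that maps the known 
atoms to the consequences it licenses, and each backward rule a callable 
that tries to establish a single goal.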

Backward axioms are only active during ordinary query-driven inference.    (03)

Other more conventional theorem provers have "lemmaization" mechanisms 
that retain intermediate inferred formulas (typically in clausal form) 
to prevent expensive re-derivation of those formulas.    (04)
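
Roughly, lemmaization is the same move applied inside proof search: once a 
subgoal has been derived, the result is retained so later proof attempts 
reuse it rather than re-derive it. A hand-wavy sketch, where prove_subgoal 
is a hypothetical stand-in for the prover's expensive search:

    lemmas = {}        # subgoal -> retained derivation (lemma)

    def prove(goal, axioms, prove_subgoal):
        if goal in lemmas:
            return lemmas[goal]               # reuse the retained lemma
        result = prove_subgoal(goal, axioms)  # expensive derivation
        if result is not None:
            lemmas[goal] = result             # retain it for later proofs
        return result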


> As I mentioned in my last note, there is no groundbreaking discovery
> here.  It is called memoization / caching / precomputation and any
> undergraduate in computer science would be expected to know it.    (05)

It's got to be one of the most widely used general techniques for 
improving performance in systems where the values of expensive 
computations are requested repeatedly. It generally comes with a 
concomitant space tradeoff, to record the cached / memoized / 
lemmaized / etc. results.    (06)
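
The textbook illustration (my own example, not specific to any of the 
systems above) is as simple as memoizing a recursive function: the cache 
spends memory so that repeatedly requested values are computed only once.

    from functools import lru_cache

    @lru_cache(maxsize=None)   # unbounded here; a finite maxsize caps the space cost
    def fib(n):
        # without the cache this recomputes the same values exponentially often
        return n if n < 2 else fib(n - 1) + fib(n - 2)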

Operating system buffer pools are another form of this. Given the huge 
disparity between disk and RAM speeds (and between CPU and RAM speeds), 
these various caches are all worth the complexity and additional 
resource costs they introduce.    (07)
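
In that spirit, a toy buffer pool might look like the following (read_block 
stands in for the slow disk read; this is not any particular OS's 
implementation): a bounded set of recently used blocks is kept in RAM, and 
the least recently used block is evicted when the pool is full.

    from collections import OrderedDict

    class BufferPool:
        def __init__(self, read_block, capacity=128):
            self.read_block = read_block      # slow path: actual disk read
            self.capacity = capacity
            self.pool = OrderedDict()         # block number -> block contents

        def get(self, blockno):
            if blockno in self.pool:
                self.pool.move_to_end(blockno)    # mark as recently used
                return self.pool[blockno]
            data = self.read_block(blockno)       # miss: go to disk
            self.pool[blockno] = data
            if len(self.pool) > self.capacity:
                self.pool.popitem(last=False)     # evict least recently used
            return data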

The technique also shows up in computing hardware designs: there may be 
multiple levels of RAM cache between the CPU and the main store, virtual 
memory address translation is sped up by a "translation lookaside buffer" 
(TLB) that caches recently used page-table entries, and disk drives cache 
whole tracks.    (08)


And on and on!    (09)


> ...    (010)


Randall Schulz    (011)

