
Re: [ontolog-forum] Current Semantic Web Layer pizza (was ckae)

To: "[ontolog-forum] " <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Bill Andersen <andersen@xxxxxxxxxxxxxxxxx>
Date: Sun, 16 Sep 2007 19:28:50 -0400
Message-id: <B15D4357-42FA-4BE4-AF2A-78CF2CEC9EF9@xxxxxxxxxxxxxxxxx>
+1

I forgot "lemmaization" in my list.

On Sep 16, 2007, at 18:44, Randall R Schulz wrote:

On Sunday 16 September 2007 15:20, Bill Andersen wrote:
[DLT] When a command is executed, an algorithm conducts a search of
a data source generated by a database, application, utility, etc.
It then pulls the requested data into memory for processing. ...

Dennis,

You are aware, aren't you, that many of the inference engines
implemented for semantic web languages do precomputation exactly as
you claim your unique product does? Databases and other systems do
this too, and whether one does it is usually an engineering tradeoff
between the workload the system is under and the required query /
update performance. Typically, if your queries need to go really
fast, you pay a higher price at update time to precompute things
likely to be needed by expected queries. Contrariwise, if your
updates need to be fast (lots of real-time data, say), you can keep
them fast, but not with a bunch of precomputation.
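
To make that tradeoff concrete, here is a minimal Python sketch
(purely illustrative; the EagerIndex / LazyIndex classes are invented
for this example, not taken from any real engine), showing the same
derived value paid for eagerly at update time or lazily at query time:

    class EagerIndex:
        """Precompute at update: writes are slower, reads are O(1)."""
        def __init__(self):
            self.rows = []
            self.total = 0          # derived value, maintained eagerly

        def insert(self, x):
            self.rows.append(x)
            self.total += x         # pay the derivation cost now

        def query_total(self):
            return self.total       # already computed

    class LazyIndex:
        """Compute at query: writes are O(1), reads pay the full cost."""
        def __init__(self):
            self.rows = []

        def insert(self, x):
            self.rows.append(x)     # nothing derived; fast for streams

        def query_total(self):
            return sum(self.rows)   # pay the derivation cost here

The eager variant is the fast-query regime described above; the lazy
variant is the fast-update one.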

In Cyc, implicational axioms are marked "forward" or "backward."
Forward rules are used to pre-compute ground consequences inferable
via the rule. These consequences are updated incrementally when other
KB content that may interact with a forward axiom is added, removed,
or modified (Cyc's content indexing is extensive). Such pre-computed
consequence atoms are distinctly flagged in the Cyc UI.

Backward axioms are active only during ordinary query-driven inference.
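
For illustration only (a toy propositional sketch, not Cyc's actual
machinery or API; the rules and fact names are invented), the
forward/backward split might look like this: forward consequences are
computed and flagged at assert time, backward rules are consulted only
when a query arrives.

    facts = set()       # asserted ground atoms
    derived = set()     # forward-rule consequences, flagged separately

    # Propositional Horn rules as (premise, conclusion) pairs.
    forward_rules  = [("bird(tweety)", "has_feathers(tweety)")]
    backward_rules = [("has_feathers(tweety)", "can_preen(tweety)")]

    def assert_fact(fact):
        """Update path: add the fact, run forward rules to a fixpoint."""
        facts.add(fact)
        changed = True
        while changed:
            changed = False
            known = facts | derived
            for premise, conclusion in forward_rules:
                if premise in known and conclusion not in known:
                    derived.add(conclusion)   # precomputed, flagged
                    changed = True

    def ask(goal, seen=frozenset()):
        """Query path: stored atoms first, backward rules on demand."""
        if goal in facts or goal in derived:
            return True
        if goal in seen:                      # guard against cycles
            return False
        return any(ask(premise, seen | {goal})
                   for premise, conclusion in backward_rules
                   if conclusion == goal)

    assert_fact("bird(tweety)")               # fires the forward rule now
    print(ask("can_preen(tweety)"))           # True, via one backward step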

Other, more conventional theorem provers have "lemmaization" mechanisms 
that retain intermediate inferred formulas (typically in clausal form) 
to avoid expensive re-derivation of those formulas.
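
A generic sketch of the idea (hypothetical code, assuming an acyclic
Horn rule set; this is not any particular prover's lemma mechanism):
once a subgoal is settled, record the result so later proofs reuse it
instead of re-deriving it.

    lemmas = {}   # subgoal -> result; the retained intermediate facts

    def prove(goal, facts, rules):
        """rules: (premises, conclusion) Horn clauses, assumed acyclic."""
        if goal in lemmas:
            return lemmas[goal]        # reuse the lemma; no re-derivation
        if goal in facts:
            result = True
        else:
            result = any(all(prove(p, facts, rules) for p in premises)
                         for premises, conclusion in rules
                         if conclusion == goal)
        lemmas[goal] = result          # retain for later queries
        return result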


As I mentioned in my last note, there is no groundbreaking discovery
here. It is called memoization / caching / precomputation, and any
undergraduate in computer science would be expected to know it.

It's got to be one of the most widely used general concepts in 
performance improvement for systems that include expensive computations 
of values that are repeatedly requested. It generally comes with a 
concomitant space tradeoff, to record the cached / memoized / 
lemmaized / etc. results.
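
That space-for-time tradeoff in its textbook form, using Python's
standard-library memoizer (functools.lru_cache):

    from functools import lru_cache

    @lru_cache(maxsize=None)    # cache grows with distinct arguments
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    fib(100)   # a few hundred calls with the cache; an astronomical
               # (exponential) number of calls without it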

Operating system buffer pools are another form of this. Given the huge 
disparity between disk and RAM speeds (and between CPU and RAM speeds), 
these various caches are all worth the complexity and additional 
resource costs they introduce.
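
A toy buffer pool in the same spirit (illustrative only; real pools
also handle dirty pages, pinning, and write-back): an LRU cache of disk
pages, trading RAM for avoided disk reads.

    from collections import OrderedDict

    class BufferPool:
        def __init__(self, capacity, read_page_from_disk):
            self.capacity = capacity
            self.read_page = read_page_from_disk  # expensive fallback
            self.pages = OrderedDict()            # page_id -> bytes, LRU

        def get(self, page_id):
            if page_id in self.pages:
                self.pages.move_to_end(page_id)   # hit: mark most recent
                return self.pages[page_id]
            data = self.read_page(page_id)        # miss: pay the disk cost
            self.pages[page_id] = data
            if len(self.pages) > self.capacity:
                self.pages.popitem(last=False)    # evict least recent
            return data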

The technique also shows up in computing hardware designs: there may 
be multiple levels of RAM cache between the CPU and the main store, 
virtual memory hardware may include a "translation lookaside buffer" 
(TLB) to cache page-table entries, and disk drives may cache whole 
tracks.


And on and on!


...


Randall Schulz

_________________________________________________________________


Bill Andersen (andersen@xxxxxxxxxxxxxxxxx)

Chief Scientist

Ontology Works, Inc. (www.ontologyworks.com)

3600 O'Donnell Street, Suite 600

Baltimore, MD 21224

Office: 410-675-1201

Cell: 443-858-6444




