On Sunday 16 September 2007 15:20, Bill Andersen wrote:
[DLT] When a command is executed, an algorithm conducts a search of
a data source generated by a database, application, utility, etc.
It then pulls the requested data into memory for processing. ...
Dennis,
You are aware, aren't you, that many of the inference engines
implemented for semantic web languages do precomputation exactly as
you claim your unique product does? Database and other systems
also do this, and whether or not one does it often involves an
engineering tradeoff between the workload the system is under and the
required query / update performance. Typically, if your queries need
to go really fast, then you pay a higher price at update time to
precompute things likely to be needed by expected queries.
Contrariwise, if your updates need to be fast (lots of real-time
data, say), then you keep updates cheap by skipping most of that
precomputation, and your queries pay the price instead.
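To make the tradeoff concrete, here is a minimal Python sketch (purely
illustrative; the classes and the running total are invented for this
note) contrasting the two strategies for one derived value:

    # Two toy stores for a derived value: the sum of all inserted numbers.
    # PrecomputedStore pays at update time; LazyStore pays at query time.

    class PrecomputedStore:
        """Eager: maintain the derived value on every update (fast queries)."""
        def __init__(self):
            self.items = []
            self.total = 0          # precomputed result, kept current

        def add(self, x):
            self.items.append(x)
            self.total += x         # extra work here, at update time

        def query_total(self):
            return self.total       # O(1) query

    class LazyStore:
        """Lazy: cheap updates, derived value recomputed on demand."""
        def __init__(self):
            self.items = []

        def add(self, x):
            self.items.append(x)    # nothing extra at update time

        def query_total(self):
            return sum(self.items)  # O(n) query, paid on every call

Which one wins depends entirely on the ratio of updates to queries and
on how expensive the derived computation is.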
In Cyc, implicational axioms are marked "forward" or "backward." Forward
rules are used to pre-compute ground consequences inferable via that
rule. Updates are done incrementally when other KB content is changed
(added, removed, modified) that may interact with any forward axioms
(Cyc's content indexing is extensive). Such pre-computed consequence
atoms are distinctly flagged in the Cyc UI.
Backward axioms are only active during ordinary query-driven inference.
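A toy sketch of that distinction in Python (only an illustration of
forward vs. backward rule use; the parent/ancestor rule and the data
structures are invented here and are nothing like Cyc's actual
machinery):

    # One implicational rule:  parent(x, y)  =>  ancestor(x, y)
    # Used "forward", it fires at assert time and its ground consequences
    # are materialized; used "backward", it is consulted only at query time.

    facts = set()      # asserted ground atoms, e.g. ("parent", "ann", "bob")
    derived = set()    # precomputed consequences of forward rules

    def assert_fact(atom):
        facts.add(atom)
        pred, x, y = atom
        if pred == "parent":
            # Forward use: record the consequence now, flagged as derived.
            derived.add(("ancestor", x, y))

    def query(atom):
        # Precomputed consequences make the query a simple lookup.
        if atom in facts or atom in derived:
            return True
        # Backward use of the same rule: derive on demand at query time.
        pred, x, y = atom
        return pred == "ancestor" and ("parent", x, y) in facts

    assert_fact(("parent", "ann", "bob"))
    print(query(("ancestor", "ann", "bob")))   # True, via the derived set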
Other more conventional theorem provers have "lemmaization" mechanisms
that retain intermediate inferred formulas (typically in clausal form)
to prevent expensive re-derivation of those formulas.
As I mentioned in my last note, there is no groundbreaking discovery
here. It is called memoization / caching / precomputation, and any
undergraduate in computer science would be expected to know it.
It's got to be one of the most widely used general concepts in
performance improvement in systems that include expensive computations
of values that are repeatedly requested. It generally comes with a
concomitant space tradeoff, to record the cached / memoized /
lemmaized / etc. results.
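In Python, for instance, the standard library ships the idea as a
decorator; a trivially small sketch:

    from functools import lru_cache

    # Memoization in one line: results of calls already made are cached,
    # trading memory for the cost of repeated recomputation.
    @lru_cache(maxsize=None)
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(200))   # returns immediately; the uncached recursion would not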
Operating system buffer pools are another form of this. Given the huge
disparity between disk and RAM speeds (and between CPU and RAM speeds),
these various caches are all worth the complexity and additional
resource costs they introduce.
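A buffer pool is the same pattern with a bounded size plus an eviction
policy. A minimal LRU-style sketch in Python (the capacity, the block
numbers, and the read_block_from_disk helper are all made up for
illustration):

    from collections import OrderedDict

    CAPACITY = 4                      # illustrative pool size, in blocks
    pool = OrderedDict()              # block number -> cached block contents

    def read_block_from_disk(block_no):
        # Stand-in for a slow device read (hypothetical helper).
        return "<contents of block %d>" % block_no

    def read_block(block_no):
        if block_no in pool:
            pool.move_to_end(block_no)        # hit: mark most recently used
            return pool[block_no]
        data = read_block_from_disk(block_no) # miss: pay the slow path
        pool[block_no] = data
        if len(pool) > CAPACITY:
            pool.popitem(last=False)          # evict least recently used
        return data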
The technique also shows up in computing hardware designs: there may
be multiple levels of cache between the CPU and the main store, the
address-translation hardware caches page-table entries in a
"translation lookaside buffer" (TLB), and disk drives cache whole
tracks.
And on and on!
...
Randall Schulz