
Re: [ontolog-forum] Current Semantic Web Layer pizza (was ckae)

To: "[ontolog-forum] " <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Mills Davis <lmd@xxxxxxxxxxxxxx>
Date: Sun, 16 Sep 2007 19:32:07 -0400
Message-id: <B976819E-DE1D-45A5-97D3-D33E58C82823@xxxxxxxxxxxxxx>
Randall, 

1. In rules engine design, aren't compilation techniques being employed to accelerate reasoning by combining and pre-compiling conditionals to create single expressions for evaluation (rules firing) that minimize system synchronization and state changes? 
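
By way of illustration, here is the kind of thing I have in mind, in Python (the eval-based folding and all names are just my sketch, not any particular engine's design):

    # Hypothetical sketch: several rule conditions are folded into one
    # compiled predicate, so each fact is tested in a single evaluation
    # with no intermediate rule-by-rule state changes.
    def compile_rule(conditions):
        source = " and ".join("({})".format(c) for c in conditions)
        code = compile(source, "<rule>", "eval")   # compiled once, up front
        return lambda fact: bool(eval(code, {}, fact))

    rule = compile_rule(["age >= 18", "country == 'US'", "opted_in"])
    print(rule({"age": 21, "country": "US", "opted_in": True}))  # True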

2. One of the points Dennis seems to be making is that the cost trade-off between algorithmic (re)execution and pre-computation now favors more extensive pre-computation, so that results can subsequently be retrieved by declarative look-up across n-ary expressions. I'd be interested in Cyc's view of such trade-offs. If a key issue is processing information, knowledge, and logic at scale (i.e., massive amounts, intricate organization, and forms of logic that include conflict, uncertainty, and value-based reasoning) with good performance, then the strategy of maximizing the amount of knowledge held in declarative form (accessible via n-ary lookup, as Dennis says) seems well worth exploring, and could probably lead to new categories of hardware.
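
To make the intuition concrete, here is a toy Python sketch (mine, not Cyc's) in which a derived n-ary relation is materialized at update time so that queries become pure declarative lookups:

    # parent is the asserted base relation; grandparent is precomputed.
    parent = {("alice", "bob"), ("bob", "carol")}
    grandparent = set()

    def rematerialize():
        # Pay the join cost once, at update time...
        grandparent.clear()
        for (a, b) in parent:
            for (c, d) in parent:
                if b == c:
                    grandparent.add((a, d))

    rematerialize()
    # ...so the query is a constant-time lookup, not a re-derivation.
    print(("alice", "carol") in grandparent)  # True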

3. Another issue that is nagging me concerns the advantages of logical symmetry and sequence neutrality. Why is a processing distinction made between forward and backward axioms? Wouldn't the impact of all possible informational variables be (pre)computable at the time the backward/forward axiom was formulated? I guess what I'm probing is the notion of symmetry of expression. If we view knowledge as a constraint space of conditional and unconditional expressions (rational paths) containing both dependent and independent variables, then isn't it possible to reason over the same mesh by changing which variables we consider independent and which dependent, and thus address different types of questions? For example, I see this idea coming up recently in discussions of "continuous search" and "perpetual analytics."


Mills


On Sep 16, 2007, at 6:44 PM, Randall R Schulz wrote:

On Sunday 16 September 2007 15:20, Bill Andersen wrote:
[DLT] When a command is executed, an algorithm conducts a search of
a data source generated by a database, application, utility, etc.
It then pulls the requested data into memory for processing. ...

Dennis,

You are aware, aren't you, that many of the inference engines
implemented for semantic web languages do precomputation exactly as
you claim your unique product does? Database and other systems do
this as well, and whether one does it is often an engineering
trade-off between the workload the system is under and the required
query/update performance. Typically, if your queries need to go
really fast, you pay a higher price at update time to precompute
things likely to be needed by expected queries. Contrariwise, if your
updates need to be fast (lots of real-time data, say), then updates
can be quick, but not with a bunch of precomputation.
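
A minimal sketch of that trade-off in Python (the class names and toy API are mine, purely for illustration):

    class EagerStore:
        """Pays at update time; queries are plain lookups."""
        def __init__(self):
            self.by_tag = {}
        def add(self, tag, value):              # slower: maintains an index
            self.by_tag.setdefault(tag, []).append(value)
        def query(self, tag):                   # fast: precomputed lookup
            return self.by_tag.get(tag, [])

    class LazyStore:
        """Pays at query time; updates are cheap appends."""
        def __init__(self):
            self.rows = []
        def add(self, tag, value):              # fast: just append
            self.rows.append((tag, value))
        def query(self, tag):                   # slower: full scan
            return [v for (t, v) in self.rows if t == tag]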

In Cyc, implicational axioms are marked "forward" or "backward." Forward 
rules are used to pre-compute ground consequences inferable via that 
rule. These are updated incrementally when other KB content that may 
interact with a forward axiom is changed (added, removed, or modified); 
Cyc's content indexing is extensive. Such pre-computed consequence 
atoms are distinctly flagged in the Cyc UI.

Backward axioms are only active during ordinary query-driven inference.
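
A toy reading of the distinction in Python (this is my own sketch of the general pattern, not Cyc's implementation):

    facts = set()
    forward_rules = []   # run at assert time, consequences materialized
    backward_rules = []  # consulted only at query time

    def assert_fact(fact):
        facts.add(fact)
        for rule in forward_rules:              # incremental forward update
            for consequence in rule(fact):
                if consequence not in facts:
                    assert_fact(consequence)

    def ask(goal):
        # Precomputed atoms answer by lookup; otherwise try backward rules.
        return goal in facts or any(rule(goal) for rule in backward_rules)

    # Forward: man(x) => mortal(x), precomputed when man(x) is asserted.
    forward_rules.append(lambda f: [("mortal", f[1])] if f[0] == "man" else [])
    # Backward: greek(x) proves european(x), derived only on demand.
    backward_rules.append(lambda g: g[0] == "european" and ("greek", g[1]) in facts)

    assert_fact(("man", "socrates"))
    assert_fact(("greek", "socrates"))
    print(ask(("mortal", "socrates")))    # True, by precomputed lookup
    print(ask(("european", "socrates")))  # True, derived at query time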

Other more conventional theorem provers have "lemmaization" mechanisms 
that retain intermediate inferred formulas (typically in clausal form) 
to prevent expensive re-derivation of those formulas.


As I mentioned in my last note, there is no groundbreaking discovery
here.  It is called memoization / caching / precomputation and any
undergraduate in computer science would be expected to know it.

It's got to be one of the most widely used general concepts for 
improving performance in systems that include expensive computations 
of values that are repeatedly requested. It generally comes with a 
concomitant space trade-off, to record the cached / memoized / 
lemmaized / etc. results.
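
The canonical ten-line version, for anyone who hasn't seen it (this is just the textbook pattern, using Python's standard library):

    from functools import lru_cache

    @lru_cache(maxsize=None)        # the concomitant space cost lives here
    def fib(n):
        # Exponential-time if recomputed; linear once results are memoized.
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(200))  # returns immediately; hopeless without the cache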

Operating system buffer pools are another form of this. Given the huge 
disparity between disk and RAM speeds (and between CPU and RAM speeds), 
these various caches are all worth the complexity and additional 
resource costs they introduce.

The technique also shows up in computing hardware designs: there may be 
multiple levels of cache between the CPU and the main store, the memory 
management unit may keep recently used page-table entries in a 
"translation lookaside buffer" (TLB), and disk drives may cache whole 
tracks.


And on and on!


...


Randall Schulz

_________________________________________________________________



Mills Davis
Managing Director
Project10X
202-667-6400
202-255-6655 cell
1-800-713-8049 fax




