[DLT] When a command is executed, an algorithm searches a data source generated by a database, application, utility, etc., and pulls the requested data into memory for processing. The CPU takes the data, compiles the required response, and outputs the result to be displayed. Every time a command is executed the same process is repeated, and the same data is retrieved every time it is requested, over and over again. You know, like garlic pizza (formerly garlic cake). Add RDF. Add OWL. Add, add, add, and you have a multi-crust pizza with built-in layers of inefficiency.
You are aware, aren't you, that many of the inference engines implemented for semantic web languages do precomputation exactly as you claim your unique product does? Databases and other systems also do this, and whether to do it is usually an engineering tradeoff between the workload the system is under and the required query/update performance. Typically, if your queries need to go really fast, you pay a higher price at update time to precompute the things your expected queries are likely to need. Contrariwise, if your updates need to be fast (lots of real-time data, say), you can have that, but not with a bunch of precomputation on top.
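To make the tradeoff concrete, here is a minimal sketch (class and method names are my own invention, purely illustrative): the same running total can be paid for on every update, which makes queries a constant-time lookup, or on every query, which keeps updates cheap.

```python
class PrecomputedStore:
    """Pays at update time: queries become O(1) lookups."""
    def __init__(self):
        self.values = []
        self.total = 0           # maintained eagerly on every update

    def update(self, v):
        self.values.append(v)
        self.total += v          # extra work here at update time ...

    def query_total(self):
        return self.total        # ... so the query is a cheap lookup


class OnDemandStore:
    """Pays at query time: updates stay O(1)."""
    def __init__(self):
        self.values = []

    def update(self, v):
        self.values.append(v)    # nothing precomputed; updates are cheap

    def query_total(self):
        return sum(self.values)  # full scan on every query
```

Both return the same answers; the only difference is where the work is spent, which is exactly the workload-dependent engineering decision described above.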
As I mentioned in my last note, there is no groundbreaking discovery here. It is called memoization / caching / precomputation and any undergraduate in computer science would be expected to know it.
The term "memoization" was coined by Donald Michie in 1968 and is derived from the Latin word memorandum (to be remembered), and thus carries the meaning of turning [the results of] a function into something to be remembered. While memoization might be confused with memorization (the two words are cognates), memoization has a specialized meaning in computing.
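And indeed it is undergraduate material: Python ships memoization in the standard library. A sketch with the textbook example, naive recursive Fibonacci, where a call counter shows that each argument is computed only once instead of exponentially many times:

```python
from functools import lru_cache

call_count = 0  # counts actual function executions, not cache hits

@lru_cache(maxsize=None)   # standard-library memoization decorator
def fib(n):
    global call_count
    call_count += 1
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

result = fib(30)
# Without the cache, fib(30) triggers over a million calls;
# with it, each of the 31 distinct arguments (0..30) is computed once.
```

Same function, same results; the decorator just remembers what has already been computed, which is the whole "groundbreaking discovery" in a nutshell.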