On Sep 16, 2007, at 11:16 AM, Chris Menzel wrote:
On Sat, Sep 15, 2007 at 09:28:03PM -0700, Dennis L. Thomas wrote:
I think current technologies will be with us for a few more decades, but
I am certain that will change whether we want it to or not. There are
too many knowledgeable minds pounding away on the complexity issue,
making it probable that a new paradigm of computing is just over the
horizon.
What "complexity issue" do you have in mind? You suggest that the
"complexity issue" is some sort of open problem waiting to be solved,
like Fermat's Last Theorem prior to Andrew Wiles or the structure of DNA
before Crick and Watson, and moreover that it (whatever it is) is on the
verge of a solution. Again, I don't know what you mean by "the
complexity issue", but I don't know of anything that fits the bill save
perhaps the P=NP problem -- which most theoreticians agree is likely to
be solved (if it ever is) in a way that simply confirms the intrinsic
intractability of automated reasoning suggested by decades of pure and
applied research. The fact is that most interesting reasoning problems
are at least NP-complete, if they are decidable at all. The game, in
that regard, is over. It is not a problem to be solved, it is an
intrinsic, insurmountable limitation on digital computation. No one is
*ever* going to get past it (short of the development of quantum
computing, perhaps), and anyone suggesting otherwise either doesn't
understand basic computer science or is selling snake oil.
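To make the intractability point concrete, here is a rough sketch, a toy brute-force satisfiability checker and not any particular reasoner: in the worst case it examines all 2^n truth assignments over n variables, so each added variable can double the work.

    from itertools import product

    def brute_force_sat(clauses, num_vars):
        # Each clause is a list of ints: +i means variable i is true,
        # -i means variable i is false (a toy DIMACS-like convention).
        # Worst case: all 2**num_vars assignments are examined.
        for assignment in product([False, True], repeat=num_vars):
            if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return True      # found a satisfying assignment
        return False             # exhausted every assignment

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(brute_force_sat([[1, 2], [-1, 3], [-2, -3]], 3))   # True

With 20 variables that loop can visit about a million assignments; with 40, about a trillion.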
[DLT] In my personal experience, I have talked with numerous developers/integrators who confront the complexity problem every day. I believe them when they tell me they wish they had an economically efficient solution to manage tens of thousands of manufacturers, each with tens to hundreds of products, all with different names, product numbers and pricing, who distribute through thousands of distributors with their own names and product numbers, who sell to tens of thousands of doctors, hospitals and clinics that use their own terms and product numbers, and who distribute these products to millions of patients. Not to mention the insurance companies that require their "authorized" codes to be processed with the invoices before the doctors, hospitals and clinics are paid. Did I mention sold-out, discontinued, new and discounted requirements? I will be sure to advise these people that the complexity problem has been solved the next time I talk with them.
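As a hypothetical sketch of that cross-referencing burden (every name, code and field below is invented for illustration), each party identifies the same physical product under its own code, and an integrator ends up maintaining and searching a cross-reference record for every product:

    # Hypothetical cross-reference record: one product, many party-specific codes.
    product_xref = {
        "canonical:0001": {
            "manufacturer": {"Acme Medical": "AM-4417"},
            "distributors": {"MedSupply Co": "MS-99-221", "HealthDist": "HD/5531"},
            "providers":    {"St. Mary's Clinic": "SM-CATH-02"},
            "payer_codes":  {"BlueCo Insurance": "AUTH-C1751"},
            "status":       "discontinued",   # or sold out, new, discounted
        },
        # ... repeated for tens of thousands of products
    }

    def resolve(party_kind, party_name, party_code):
        # Map one party's local code back to the canonical product record.
        for canonical_id, record in product_xref.items():
            if record.get(party_kind, {}).get(party_name) == party_code:
                return canonical_id, record
        return None, None

    print(resolve("distributors", "HealthDist", "HD/5531")[0])   # canonical:0001

Multiply the parties by the products and the number of mappings to maintain grows multiplicatively, which is the practical face of the complexity these integrators describe.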
We think that new paradigm is knowledge computing. It's declarative
As is every logic-based paradigm.
(pre-computed).
Huh? You mean simply Really Big knowledge repositories? Isn't that
what ontologies are supposed to be?
[DLT] I fully expect that the first knowledgebase products will be modest, geared mostly to job functions, projects and retail knowledge products. Thereafter, I think the major publishers will recognize that they can package the knowledge they own into a new form that has interactive, problem solving, and economic value. They are under siege by the Internet. Knowledge simulation offers a means to not only regain lost market share, but to actually control the ontologies used in the marketplace. We call this knowledge dominance. Once developers have paid the initial cost of developing their knowledge products, they can license those products to others just as software is licensed and sold today. This is an open market process.
It will co-exist with the "information-based" procedural world.
What is that supposed to mean? There is no intrinsic connection between
information and the procedural paradigm.
[DLT] From our perspective, the existing procedural world is all about information (who, what, when, where and how much), not about knowledge. This, of course, is in the process of changing. It could have been worded better.
When I refer to structural limitations, I am referring to the
limitations of tables and fields (relational or object oriented), and
to the quadric complexity
Quadratic?
[DLT] As defined by Michael Lamont, a software engineer: "Algorithmic complexity is generally written in a form known as Big-O notation, where the O represents the complexity of the algorithm and a value n represents the size of the set the algorithm is run against. For example, O(n) means that an algorithm has a linear complexity. In other words, it takes ten times longer to operate on a set of 100 items than it does on a set of 10 items. If the complexity was O(n^2) (quadratic complexity), then it would take 100 times longer to operate on a set of 100 items than it does on a set of 10 items."
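To make Lamont's distinction concrete, here is a small sketch of a linear-time operation next to a quadratic-time one (toy code, not drawn from any particular system); on ten times more items the first does ten times the work and the second a hundred times.

    def total_price(items):
        # O(n): touches each item exactly once
        return sum(item["price"] for item in items)

    def duplicate_skus(items):
        # O(n^2): compares every item against every other item
        dupes = []
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i]["sku"] == items[j]["sku"]:
                    dupes.append((items[i]["sku"], i, j))
        return dupes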
resulting from indexing - the primary reasons systems cannot scale.
The work-around is the Semantic Web Layer Pizza (was cake), which adds
more layers of structure, complexity and burden on an already costly
procedural process. Every time a command is executed, the system
computes the same data over and over, ad infinitum.
I don't understand. When a command is executed, well, the command is
executed and that is that. What data are computed over and over again
"every time a command is executed"?
[DLT] When a command is executed, an algorithm conducts a search of a data source generated by a database, application, utility, etc. It then pulls the requested data into memory for processing. The CPU grabs the data and compiles the required response, then outputs the result to be displayed. Every time a command is executed, the same process is repeated, and the same data is retrieved every time it is requested - over and over again. You know, like garlic pizza (formerly garlic cake). Add RDF. Add OWL. Add, add, add, and you have a multi-crust pizza with built-in layers of inefficiency.
A declarative system, as discussed in a recent email, does not require this type of processing because the result is computed once, stored, then looked up. For many purposes, the efficiency of declarative processing is superior to procedural processing. You can bet your last dollar that Intel and other chip makers are looking into declarative chips right now. They may not yet know how they will function, but they know that they are coming.
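A rough sketch of the contrast being drawn here (the function names are invented for illustration): the procedural path re-runs the same search and computation on every request, while the pre-computed path computes the answer once, stores it, and thereafter only looks it up.

    # Procedural style: every request re-runs the same search and computation.
    def answer_procedural(query, database):
        rows = [row for row in database if row["key"] == query]   # search again
        return sum(row["value"] for row in rows)                  # recompute again

    # Pre-computed ("declarative") style: compute once, store, then look up.
    precomputed = {}

    def build_answers(database):
        for row in database:
            precomputed[row["key"]] = precomputed.get(row["key"], 0) + row["value"]

    def answer_declarative(query):
        return precomputed.get(query, 0)   # constant-time lookup, no recomputation

    db = [{"key": "2+2", "value": 4}, {"key": "2+3", "value": 5}]
    build_answers(db)
    print(answer_declarative("2+2"))   # 4, looked up rather than re-derived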
I am not a computer scientist or programmer. I am sure there are others out there who can explain this with far greater clarity than I have.
The cost is time, redundancy, excess equipment, facilities, utilities,
inefficiency and so on. After more than seven decades, do we
really need our computer systems to tell us that 2 + 2 = 4? We know
that, and we know a lot more. Our computer systems should be able to
give us precisely the "knowledge" we need, when we need it, wherever
we are.
Right -- again, that's the idea with ontologies. You create large
knowledge repositories on the web; if what you need is already
explicitly there, great; if not, you draw upon information that *is*
there and use automated reasoning to (hopefully) derive the knowledge
you need. You aren't suggesting that the reasoning component of the
picture is somehow unnecessary, are you? Is that what you mean when you
talk about knowledge that is "pre-computed"? That's the only sense I
can make out of your vague and titillating suggestions. There is also
not the remotest chance that this suggestion is correct.
[DLT] Yes, I am suggesting that the reasoning component is not necessary, because every rational path is organized according to the well-justified theory that gives every concept, idea and thought pattern within the knowledgebase its meaning. As to there being not "the remotest chance that this suggestion is correct": here we stand.
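One charitable way to picture "no reasoning component at query time" (an illustrative sketch only, not Ballard's actual formalism) is to materialize every derivable conclusion up front, so that answering becomes a lookup:

    # Sketch: materialize all derivable facts once, then answer queries by lookup.
    facts = {("isa", "aspirin", "analgesic"), ("isa", "analgesic", "drug")}

    def derive(fs):
        # One toy rule: transitivity of "isa".
        return {("isa", a, c)
                for (_, a, b1) in fs for (_, b2, c) in fs if b1 == b2}

    while True:                 # forward-chain to a fixed point, once, up front
        new = derive(facts) - facts
        if not new:
            break
        facts |= new

    # Query time: no inference, just lookup of the pre-computed closure.
    print(("isa", "aspirin", "drug") in facts)   # True

Whether that approach scales to "every rational path" is exactly the point at issue, but it shows the intended trade: more storage and up-front computation in exchange for no reasoning at query time.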
This includes making very complex decisions about most strategic,
mission trade-off or legally defensible situations. Or perhaps, the
simulation of all of civilization's knowledge.
Sounds to me like the sort of talk that defunded AI in the 80s.
[DLT] Richard Ballard quickly determined that AI was DOA at the onset of the Star Wars Project. 100% logic-based systems do not reason with uncertainty. The question then became, what will?
...
I appreciate your working explanation of how the brain functions. It
is our understanding that human DNA has an estimated 5 billion neural
bits of power, each bit with decision capacity - albeit yes/no. When
the storage limits of DNA reached the point of diminishing returns
(higher life forms could not evolve), nature gave us the brain. The
human brain is estimated to have a storage capacity of 10^14 or
100,000,000,000,000 neural bits of memory and decision power. If
computers are bit-oriented like the human brain (granted, machines
lack that certain chemical something), how can they be taught to KNOW
like humans and later, how to LEARN like people to solve problems on
their own?
So your idea is to build a big brain simulation?
[DLT] We are a tool company, not a service company. We expect that on the merits of the technology, others will do this on their own, piece by piece. We are hopeful that you and A&M will take on the challenge of simulating the knowledge of philosophy.
It is interesting to us that after more than 17 years, the Knowledge
Management industry has yet to define what knowledge is.
You appear to be completely unfamiliar with the huge literature on
knowledge and belief in AI, automated reasoning, cognitive science,
learning theory, etc that has accumulated over the past 50 years.
[DLT] Not completely, but certainly to a large extent. I willingly admit to many academic shortcomings. That has been one of the draws of this forum.
Richard Ballard has worked on this problem for more than two decades.
He and several thousand others.
[DLT] Yes, that is so. Thankfully, he has given up on forum writing and is fully focused on architecture and programming. As an experimental physicist, he expects to produce tangible results. As a businessman, my expectation is that he will.
His answer is: KNOWLEDGE = THEORY + INFORMATION.
Well, the *start* of an answer, maybe; as is, it's just a catchy slogan.
[DLT] I would be interested in your "personal" idea of what knowledge is. No quotes, no references, just original thought.
Theory gives meaning to data (information); without it, there is no
meaning. Information (data/reality) is the who, what, when, where and
how much facts of situations and circumstances. Theory answers our
how, why and what if questions about those situations and
circumstances. Theory is predictive and lasts for decades, centuries
and millennia. No theory, no meaning. We learn theory through
enculturation, education, life experience and analytical thinking.
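Read charitably, and purely as an illustration of the slogan rather than Ballard's own formalism, the formula can be rendered as: information is the bare who/what/when/how-much facts, theory is a predictive rule that answers the how/why/what-if questions, and knowledge is the one applied to the other.

    # Toy rendering of KNOWLEDGE = THEORY + INFORMATION (illustration only).

    # Information: bare who/what/where/how-much facts.
    information = {"object": "ball", "drop_height_m": 20.0}

    # Theory: a small, long-lived predictive rule ("how long will it fall?").
    def fall_time(height_m, g=9.81):
        return (2 * height_m / g) ** 0.5   # t = sqrt(2h / g)

    # Knowledge: the theory applied to the information.
    knowledge = dict(information,
                     fall_time_s=round(fall_time(information["drop_height_m"]), 2))
    print(knowledge)   # {'object': 'ball', 'drop_height_m': 20.0, 'fall_time_s': 2.02}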
The explanatory, predictive, and semantic roles of theory have been
analyzed, argued, and discussed in great depth and detail by scientists
and philosophers of science since at least the late 19th century,
beginning notably with the work of Pierre Duhem. You might start here
for some background:
A seminal work on the topic is the collection _The Structure of Scientific Theories_, edited by Frederick Suppe.
Also recommended along these lines (picking more or less randomly among
many good possibilities) is Frederick Suppe's _The Semantic Conception of Theories and Scientific Realism_.
It is our contention, based on the knowledge formula, that machines
can simulate every form of human knowledge and reason with that
knowledge like people.
That's great. If you can refer us to refereed publications, open source
projects, and free and downloadable work in progress, it would be truly
useful to see how all of these grand claims are actually fleshed out.
Chris Menzel
Dennis Thomas