On Sat, Sep 15, 2007 at 09:28:03PM -0700, Dennis L. Thomas wrote:
> I think current technologies will be with us for a few more decades, but
> I am certain it will change whether we want it to or not. There are
> too many knowledgeable minds pounding away on the complexity issue,
> making it probable that a new paradigm of computing is just over the
> horizon. (01)
What "complexity issue" do you have in mind? You suggest that the
"complexity issue" is some sort of open problem waiting to be solved,
like Fermat's Last Theorem prior to Andrew Wiles or the structure of DNA
before Crick and Watson, and moreover that it (whatever it is) is on the
verge of a solution. Again, I don't know what you mean by "the
complexity issue", but I don't know of anything that fits the bill save
perhaps the P=NP problem -- which most theoreticians agree is likely to
be solved (if it ever is) in a way that simply confirms the intrinsic
intractability of automated reasoning suggested by decades of pure and
applied research. The fact is that most interesting reasoning problems
are at least NP-complete, if they are decidable at all. The game, in
that regard, is over. It is not a problem to be solved, it is an
intrinsic, insurmountable limitation on digital computation. No one is
*ever* going to get past it (short of the development of quantum
computing, perhaps), and anyone suggesting otherwise either doesn't
understand basic computer science or is selling snake oil. (02)
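
To make the intractability point concrete, here is a purely illustrative
Python sketch (not anyone's actual system) of brute-force propositional
satisfiability -- the canonical NP-complete problem sitting under most
automated reasoning tasks. The enumeration doubles with every added
variable, and that exponential blowup is exactly the limitation at issue:

    from itertools import product

    def brute_force_sat(clauses, n_vars):
        """Decide satisfiability of a CNF formula by exhaustive search.

        clauses: list of clauses, each a list of nonzero ints in
                 DIMACS style (k = variable k true, -k = variable k false).
        n_vars:  number of propositional variables.

        The loop tries all 2**n_vars truth assignments, so the running
        time doubles with each additional variable.
        """
        for assignment in product([False, True], repeat=n_vars):
            if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return True    # found a satisfying assignment
        return False           # no assignment works

    # Example: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(brute_force_sat([[1, 2], [-1, 3], [-2, -3]], 3))    # True

Modern SAT solvers prune this search very cleverly and do well on many
practical instances, but their worst-case behavior remains exponential
unless P = NP -- which is precisely the point.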
> We think that new paradigm is knowledge computing. It's declarative (03)
As is every logic-based paradigm. (04)
> (pre-computed). (05)
Huh? You mean simply Really Big knowledge repositories? Isn't that
what ontologies are supposed to be? (06)
> It will co-exist with the "information-based" procedural world. (07)
What is that supposed to mean? There is no intrinsic connection between
information and the procedural paradigm. (08)
> When I refer to structural limitations, I am referring to the
> limitations of tables and fields (relational or object oriented), and
> to the quadric complexity (09)
Quadratic? (010)
> resulting from indexing - the primary reasons systems cannot scale.
> The work-around is the Semantic Web Layer Pizza (was cake), which adds
> more layers of structure, complexity and burden on an already costly
> procedural process. Every time a command is executed, the system
> computes the same data over and over, ad infinitum. (011)
I don't understand. When a command is executed, well, the command is
executed and that is that. What data are computed over and over again
"every time a command is executed"? (012)
> The cost is time, redundancy, excess equipment, facilities, utilities,
> inefficiency and so on. After more than seven decades, do we
> really need our computer systems to tell us that 2 + 2 = 4? We know
> that, and we know a lot more. Our computer systems should be able to
> give us precisely the "knowledge" we need, when we need it, wherever
> we are. (013)
Right -- again, that's the idea with ontologies. You create large
knowledge repositories on the web; if what you need is already
explicitly there, great; if not, you draw upon information that *is*
there and use automated reasoning to (hopefully) derive the knowledge
you need. You aren't suggesting that the reasoning component of the
picture is somehow unnecessary, are you? Is that what you mean when you
talk about knowledge that is "pre-computed"? That's the only sense I
can make out of your vague and titillating suggestions. There is also
not the remotest chance that this suggestion is correct. (014)
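
To be concrete about that picture, here is a toy Python sketch (the
predicates and individuals are invented for illustration; this is not any
particular ontology language or reasoner). Only a few facts are asserted
explicitly; a forward-chaining pass derives the rest:

    # Toy knowledge base: a few explicit facts plus two derivation rules.
    facts = {
        ("subclass_of", "Dog", "Mammal"),
        ("subclass_of", "Mammal", "Animal"),
        ("instance_of", "Fido", "Dog"),
    }

    def saturate(facts):
        """Forward-chain to a fixed point: subclass_of is transitive,
        and instance_of inherits along subclass_of."""
        facts = set(facts)
        while True:
            new = set()
            for (p1, a, b) in facts:
                for (p2, c, d) in facts:
                    if p2 == "subclass_of" and b == c:
                        if p1 == "subclass_of":
                            new.add(("subclass_of", a, d))
                        elif p1 == "instance_of":
                            new.add(("instance_of", a, d))
            if new <= facts:
                return facts
            facts |= new

    closure = saturate(facts)
    # ("instance_of", "Fido", "Animal") was never stored explicitly;
    # the reasoner derives it from what *is* there.
    print(("instance_of", "Fido", "Animal") in closure)    # True

The stored facts answer some queries directly; the reasoner supplies the
rest, which is why the reasoning component cannot simply be declared away.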
> This includes making very complex decisions about most strategic,
> mission trade-off or legally defensible situations. Or perhaps, the
> simulation of all of civilization's knowledge. (015)
Sounds to me like the sort of talk that defunded AI in the 80s. (016)
> ...
> I appreciate your working explanation of how the brain functions. It
> is our understanding that human DNA has an estimated 5 billion neural
> bits of power, each bit with decision capacity - albeit yes/no. When
> the storage limits of DNA reached the point of diminishing returns
> (higher life forms could not evolve), nature gave us the brain. The
> human brain is estimated to have a storage capacity of 10^14 or
> 100,000,000,000,000 neural bits of memory and decision power. If
> computers are bit oriented like the human brain (granted, machines
> lack that certain chemical something), how can they be taught to KNOW
> like humans and later, how to LEARN like people to solve problems on
> their own? (017)
So your idea is to build a big brain simulation? (018)
> It is interesting to us that after more than 17 years, the Knowledge
> Management industry has yet to define what knowledge is. (019)
You appear to be completely unfamiliar with the huge literature on
knowledge and belief in AI, automated reasoning, cognitive science,
learning theory, etc. that has accumulated over the past 50 years. (020)
> Richard Ballard has worked on this problem for more than two decades. (021)
He and several thousand others. (022)
> His answer is: KNOWLEDGE = THEORY + INFORMATION. (023)
Well, the *start* of an answer, maybe; as is, it's just a catchy slogan. (024)
> Theory gives meaning to data (information), without it, there is no
> meaning. Information (data/reality) is the who, what, when, where and
> how much facts of situations and circumstances. Theory answers our
> how, why and what if questions about those situations and
> circumstances. Theory is predictive and lasts for decades, centuries
> and millenniums. No theory, no meaning. We learn theory through
> enculturation, education, life experience and analytical thinking. (025)
The explanatory, predictive, and semantic roles of theory have been
analyzed, argued, and discussed in great depth and detail by scientists
and philosophers of science since at least the late 19th century,
beginning notably with the work of Pierre Duhem. You might start here
for some background: (026)
http://plato.stanford.edu/entries/scientific-explanation (027)
A seminal work on the topic is the collection _The Structure of
Scientific Theories_: http://tinyurl.com/2v7nfg (028)
Also recommended along these lines (picking more or less randomly among
many good possibilities) is Frederick Suppe's _The Semantic Conception
of Theories and Scientific Realism_: http://tinyurl.com/3ahuzg (029)
> It is our contention, based on the knowledge formula, that machines
> can simulate every form of human knowledge and reason with that
> knowledge like people. (030)
That's great. If you can refer us to refereed publications, open source
projects, or freely downloadable work in progress, it would be truly
useful to see how all of these grand claims are actually fleshed out. (031)
Chris Menzel (032)
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx (033)