ontolog-forum

Re: [ontolog-forum] Current Semantic Web Layer pizza (was ckae)

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "Dennis L. Thomas" <DLThomas@xxxxxxxxxxxxxxxxxxxxxxxx>
Date: Sun, 16 Sep 2007 12:08:26 -0700
Message-id: <CF285BD7-E179-4AC0-89FA-7623F500E1DD@xxxxxxxxxxxxxxxxxxxxxxxx>
Bill and Mills,

Declarative is essentially a look-up table with content that was computed in advance.  This is very common in the gaming world, where design and function are pre-computed, stored, then looked up when needed during play.  Declarative is the fastest way to get data from memory to the user's screen - it bypasses the search, compile, display process.   In the case of declarative knowledge systems, the lookup is a semantic web of any level of complexity - a latticework of layers upon layers of rational paths of concepts, ideas and thought patterns, organized according to the theory that justifies their relationships.  These semantic webs are like the huge jungle gyms found on a playground, completely interconnected in every dimension required to represent the desired knowledge.  These webs of content lie waiting in memory, on hard drives or other media, and respond immediately as their associative links are activated.  In this sense, knowledge-based declarative stores embrace situation awareness.
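
A minimal sketch of the look-up idea in Python (the tiny web and the names below are mine, purely illustrative, not the Knowledge Foundations structure): the content and its associative links are built in advance, and answering a query is a dictionary lookup rather than a search-compile-display cycle.

    # Illustrative only: a tiny "declarative" store built in advance.
    knowledge_web = {
        "pizza":   {"is_a": ["food"], "has_part": ["crust", "topping"]},
        "topping": {"example_of": ["cheese", "tomato"]},
        "food":    {"property": ["edible"]},
    }

    def follow(concept, relation):
        """Activate one associative link; the work was done when the web was built."""
        return knowledge_web.get(concept, {}).get(relation, [])

    print(follow("pizza", "has_part"))   # ['crust', 'topping'] -- immediate lookup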

Declarative storage systems are built using at least three methods: physically writing the code, copying and pasting object code, or using a builder.  Since our interest is in building knowledge stores, we use a builder.  From our perspective, the object-oriented world is going in exactly the opposite direction from where we are going.  Visual Studio 2005, Microsoft's latest wonder, has 60,000 objects.  Every time an object is pasted into a program, all the APIs for the most common platforms are loaded with the object, which creates a monstrous pile of code (read: waste).  Intel, AMD and the storage device vendors are happy about this.  The object mindset represents only more of the same infoglut that the semantic web people are trying to solve.

Building a declarative system with a builder offers an alternative approach and methodology.  It requires that knowledge engineers develop EditForms to model the knowledge content of books and documents (two examples of declarative objects), databases, etc., simply by parsing out the concepts, ideas and thought patterns represented in these sources.  The builder then applies specially written algorithms to construct the semantic web.  Since the machine is working only with concepts, relationships and mediating structures, it functions much as the human brain does - declaratively.  We don't squeeze ideas out of our heads; they just pop into our minds (if we have the theory).  Because the system is declarative and not procedural - that is, not a moving target - content can be continually added to the knowledge base to fill in knowledge gaps as required.
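
To make the incremental-building point concrete, here is a hedged sketch (the function and statement names are hypothetical, not the actual EditForm builder): parsed concept-relation-concept statements are simply added to the web, so filling a knowledge gap never requires recompiling what is already stored.

    # Hypothetical builder sketch; structure and names are mine, for illustration.
    from collections import defaultdict

    web = defaultdict(lambda: defaultdict(list))

    def add_statement(subject, relation, obj):
        """Link two concepts; existing links are left untouched."""
        web[subject][relation].append(obj)

    # Statements a knowledge engineer might parse out of a document:
    add_statement("diagnosis", "requires", "theory")
    add_statement("diagnosis", "requires", "patient history")
    add_statement("theory", "answers", "why")

    print(dict(web["diagnosis"]))   # {'requires': ['theory', 'patient history']}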

There is more.

Dennis


On Sep 16, 2007, at 5:25 AM, Mills Davis wrote:

Not sure I could explain it without help. Isn't the connection something about the "cost" of computing, conservation of energy, symmetry, entropy, Shannon limits?

The usual distinction is that declarative is about the "what," while the "how" is left to the computer to sort out.  So there'd be an algorithm that takes the declarative as its input.  Why talk about pre-computing, then?  Ahh, it's about the economics of information versus knowledge: pre-compute everything you can in order to pay the computational cost only once, rather than recomputing again and again. Of course, information is by definition something new that you have to act on computationally to relate it to what you think you know... etc.
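
A generic way to see that economics in code (this is plain memoization, not a claim about any particular knowledge system; it caches on first use rather than eagerly in advance, but the point is the same): the first request pays the computational cost, every later request is a lookup.

    # Pay the computational cost once, then look the answer up thereafter.
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def answer(question: str) -> int:
        print("computing", question)       # printed only on the first call
        return sum(range(1_000_000))       # stand-in for expensive work

    answer("q1")   # computes
    answer("q1")   # retrieved from the cache, nothing recomputed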

Begin forwarded message:

From: Bill Andersen <andersen@xxxxxxxxxxxxxxxxx>
Date: September 16, 2007 1:30:36 AM EDT
To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
Subject: Re: [ontolog-forum] Current Semantic Web Layer pizza (was ckae)
Reply-To: "[ontolog-forum] " <ontolog-forum@xxxxxxxxxxxxxxxx>

What is the connection between 'declarative' and 'pre-computed'?

Bill Andersen
Ontology Works, Inc.
3600 O'Donnell Street, Suite 600
Baltimore, MD  21224
+1.410.675.1204 (w)
+1.410.675.1201 (f)
+1.443.858.6444 (m)


On Sep 16, 2007, at 12:28 AM, "Dennis L. Thomas" <DLThomas@xxxxxxxxxxxxxxxxxxxxxxxx> wrote:

Hello Frank,

I think current technologies will be with us for a few more decades, but I am certain things will change whether we want them to or not.  There are too many knowledgeable minds pounding away on the complexity issue, making it probable that a new paradigm of computing is just over the horizon.  We think that new paradigm is knowledge computing.  It's declarative (pre-computed).  It will co-exist with the "information-based" procedural world.

When I refer to structural limitations, I am referring to the limitations of tables and fields (relational or object-oriented), and to the quadratic complexity resulting from indexing - the primary reasons systems cannot scale.   The work-around is the Semantic Web Layer Pizza (was cake), which adds more layers of structure, complexity and burden to an already costly procedural process. Every time a command is executed, the system computes the same data over and over, ad infinitum.  The cost is time, redundancy, excess equipment, facilities, utilities, inefficiency and so on.   After more than seven decades, do we really need our computer systems to tell us that 2 + 2 = 4?  We know that, and we know a lot more.  Our computer systems should be able to give us precisely the "knowledge" we need, when we need it, wherever we are.  This includes making very complex decisions in strategic, mission trade-off or legally defensible situations - or perhaps simulating all of civilization's knowledge.
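
As a back-of-the-envelope illustration of the quadratic point (generic arithmetic, not a measurement of any particular database or the PKR system): if every pair of n indexed items can be related, the number of potential index entries grows as n(n-1)/2.

    # Pairwise index entries grow quadratically with the number of items.
    for n in (1_000, 10_000, 100_000):
        print(n, n * (n - 1) // 2)
    # 1,000 -> 499,500   10,000 -> ~50 million   100,000 -> ~5 billion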

I completely understand the economics of paid-for infrastructure, trained IT staff and marketing positioning, but change is inevitable.

I appreciate your working explanation of how the brain functions.  It is our understanding that human DNA has an estimated 5 billion neural bits of power, each bit with decision capacity - albeit yes/no.  When the storage limits of DNA reached the point of diminishing returns (higher life forms could not evolve), nature gave us the brain.  The human brain is estimated to have a storage capacity of 10^14, or 100,000,000,000,000, neural bits of memory and decision power.  If computers are bit-oriented like the human brain (granted, machines lack that certain chemical something), how can they be taught to KNOW like humans and, later, how to LEARN like people to solve problems on their own?

It is interesting to us that after more than 17 years, the Knowledge Management industry has yet to define what knowledge is.  Richard Ballard has worked on this problem for more than two decades.  His answer is: KNOWLEDGE = THEORY + INFORMATION.    Theory gives meaning to data (information); without it, there is no meaning.  Information (data/reality) is the who, what, when, where and how-much facts of situations and circumstances.  Theory answers our how, why and what-if questions about those situations and circumstances.  Theory is predictive and lasts for decades, centuries and millennia.   No theory, no meaning.  We learn theory through enculturation, education, life experience and analytical thinking.  It is our contention, based on the knowledge formula, that machines can simulate every form of human knowledge and reason with that knowledge like people.

Best Regards, Dennis   



On Sep 15, 2007, at 6:54 PM, Frank Guerino wrote:

Hi Dennis,

Being a leader of an enterprise that offers pretty advanced semantic web solutions, I figured I’d step in and contribute.  Your statement “From what I understand, the problem has to do with the structural limitations of procedural system applications.” is not something I agree with.

We find that the issue is not the existing systems but the existing data.  People build systems “around” data.  We find that the world is loaded with lots of system designers and implementers but very few data designers and implementers.  The systems that exist are a symptom of bad data.  We, as humans, want to believe that we can throw an essay out on the web and that some system will auto-magically read that essay, break it up into its relevant pieces, categorize things, make associations, file it all away, and allow for recall, at any time, in any way... just like the brain.  However, if you break down how the brain works, it is much smarter than the humans that throw out content.  From the split second that information is brought into the brain, from any and all working senses, the brain “instantly” starts to break it down, categorize it, correlate it, store it, etc.  The brain instantly breaks things down into neatly organized and highly definable “pieces” that fit into spatial and temporal relationships.  This is why you can recall many things that are “red” when you think of the color “red”.  The brain makes the effort to neatly file bits and pieces of the bigger picture, at the time of creation, within the brain.

Humans don’t follow this practice with the data we create.  For example, we write very long bodies of work contained within constructs we call “narratives, stories, essays, etc.”  These very coarse constructs carry very limited descriptive metadata about what is contained within them.  The brain, on the other hand, does not store things in such coarse constructs.  It stores things in very “fine”, “small” constructs that have very precise meanings because of the relationships that are bound to any one construct.
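
A toy contrast, in Python, of the two styles described here (the records are invented for illustration): the coarse narrative hides its facts, while the fine-grained statements can be recalled by any attribute, much as the brain recalls everything “red”.

    # Coarse construct: one blob, little descriptive metadata.
    coarse = {"title": "Trip report",
              "body": "We met Acme in Paris on 3 May to discuss pricing..."}

    # Fine-grained constructs: small statements bound by explicit relationships.
    fine_grained = [
        ("meeting-42", "with",    "Acme"),
        ("meeting-42", "held_in", "Paris"),
        ("meeting-42", "held_on", "2007-05-03"),
        ("meeting-42", "topic",   "pricing"),
    ]

    # Recall by attribute, e.g. "everything that happened in Paris":
    print([s for (s, p, o) in fine_grained if p == "held_in" and o == "Paris"])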

For the semantic web to work properly, humans will have to change the way we think about and work with data/information/knowledge.  Things like natural language processors, correlation engines, etc. are currently being explored to solve problems in very different ways than the brain solves them.  There is a very high probability that they will not solve the “semantic” problem in our lifetimes.  However, there are a few enterprises out there that get what the real problem is.  Even the Semantic Web standards, such as RDF and OWL, cater to a whole new way of dealing with data that is radically different from the way we do so today.

We have proven all of this to ourselves and to our customers, both in our research and in our implementations.  Because we focus on data, we can easily build a system around the data that works far more like the brain than any other system we’ve seen.  We naturally create data, relationships and meaning as people work, which allows the effective and powerful reuse and understanding of data later, when it’s needed by different people, at different times, under different contexts.  This is not to say that we’ve achieved anything close to what the brain does or how it does it.  There are no systems we know of that have.  We’ve simply achieved some pretty different and impressive things, all because of changing how we think of and work with data.

So, in summary, if we want to get to a truly semantic web, my experience tells me that we shouldn’t be focused on changing the systems that exist.  Instead, we really need to be focused on changing how we publish and work with data.  If we do not, the systems that exist will continue to exist and grow to solve the data problems at hand.  They are a symptom of the problem, not the problem itself, nor the solution for that matter.  If we change how we work with data, new systems will evolve to deal appropriately with these new approaches.  These new systems will be very different from those we see and are accustomed to today.

NOTE: I agree very much with your last paragraph.  My post is not to diminish your point but simply to point out that our experience tells us that data is the primary issue, not the systems that work with the data.

Anyhow, I hope this helps.

My Best,

FG

--
Frank Guerino, CEO
TraverseIT
On-Demand Knowledge Management
908-294-5191 Cell
Frank.Guerino@xxxxxxxxxxxxxx
http://www.TraverseIT.com




On 9/15/07 8:44 PM, "Dennis L. Thomas" <DLThomas@xxxxxxxxxxxxxxxxxxxxxxxx> wrote:

Mills,

This is, no doubt, the goal of semantic technologists - to achieve machine representations of every form of human knowledge to include values and beliefs - the stuff that underlies human culture.  This includes the capacity to reason across this knowledge to answer questions and to predict outcomes and consequences.  I found it interesting that Paola and Stephen Williams mentioned in a previous discussion that "We now are increasingly bumping into the limitations of simple triples," stating that "quads" were appearing on the horizon, perhaps as the "next gen semantics?"  

From what I understand, the problem has to do with the structural limitations of procedural system applications.   I think Stephen Williams signals his agreement with this when he brings up the "K-arity" concept ("The K-arity PKR effective structure of knowledge, where K={3-10}, seems to cover it.").  Richard Ballard has long contended that general knowledge representation requires n2-n12 "n-ary" relationships, but that medical diagnosis and other complex situations may require hundreds of conditional relationships.    As noted in your own 2007 report, a physician must know 2,000,000 concepts to practice medicine effectively.  It is not unreasonable to think that it might require a few hundred of these concepts to diagnose a non-specific internal medical problem.
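
For readers following the triples-versus-quads point, here is a rough illustration (the encoding and field names are mine, not PKR's or Ballard's): a bare triple can state a fact, while a reified n-ary statement lets the conditions a diagnosis depends on, plus the sort of context a quad's fourth element often carries, hang off a single node.

    # A bare triple states a fact but has nowhere to put its conditions.
    triple = ("aspirin", "treats", "headache")

    # A reified n-ary statement bundles the conditional participants together.
    nary_statement = {
        "relation":         "treats",
        "drug":             "aspirin",
        "condition":        "headache",
        "dose_mg":          300,
        "contraindication": "bleeding disorder",
        "source":           "clinical handbook",   # the kind of context a quad adds
        "asserted_on":      "2007-09-16",
    }

    print(triple)
    print(len(nary_statement) - 1, "participants/qualifiers on one statement")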

Williams mentions several other requirements for a robust semantic system, such as "statements versioning" or "timestamps," security levels, ownership, etc.   In this regard, Ballard states that all knowledge can be represented when each concept includes metaphysical, physical and time (universal, occurrence, contingent) representations.  In Ballard's world, the uppermost primitives are metaphysics, physical reality and time.

The problem with modeling culture (which generally refers to patterns of human activity and the symbolic structures that give such activity significance) is that conventional software cannot scale to seamlessly integrate all the concepts, their relationships, and the theory behind these concepts to achieve the meaningful points of view required to faithfully represent cultural "patterns of thought" at any level of granularity.  Perhaps with the work of Paola, Williams and others, such a system will become a reality.

In the meantime, we are still confronted with the complexity problem.

Dennis  

Dennis L. Thomas
 
Knowledge Foundations, Inc.
 
dlthomas@xxxxxxxxxxxxxxxxxxxxxxxx

On Sep 14, 2007, at 3:25 PM, Mills Davis wrote:

Gary,

I think this paper aims to articulate the knowledge modeling needs of scholars studying cultures and the history of ideas. Historicity cannot be an afterthought; the model must accommodate the notion that concepts and categories evolve, including the category theories themselves.

The development of UMLS in the life sciences provides an example. It started as keywords, grew to curated taxonomies, then to a synthesis of 80 or so vocabularies and ontologies in a metathesaurus.  Over time, it was found that a concept, e.g. of a disease, may persist through time and across different terminology, but also that both the concept and the names by which it is known may evolve through time and further research.  So they changed their practices and how they modeled things.

Much of the history of IT has been preoccupied with record keeping and current accounts. Data has been shoved into boxcars called fields, retrieved from them, and manipulated algorithmically with fixed logic. That is, system knowledge is fixed at design time and doesn't change until the next version of the system is published, with the next version of knowledge encoded into it. During operation, IT systems haven't learned; they just follow rote procedures.

Historically, there are many good reasons for the information- and algorithm-centric approach. The study of culture, however, calls for a richer palette, both for knowledge representation and for reasoning processes that encompass different axiologies, epistemologies, and research methodologies.

Description logic, plus some overarching notions about logic unification (at the FOL level, I believe), is about where we are with the semantic web. We can expose data.  But today's semantic web standards do not provide an adequate foundation for the sorts of cultural research and knowledge-based computing that this author and other scholars envision and are already engaged in.

Mills










_________________________________________________________________


Mills Davis
Managing Director
Project10X
202-667-6400
202-255-6655 cel
1-800-713-8049 fax







Dennis L. Thomas 
Knowledge Foundations, Inc.
Ofc (714) 890-5984 
Cell (760) 500-9167 
------------------------------------------------
Managing the Complexity of Enterprise Knowledge




_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx
