
Re: [ontolog-forum] Event Ontology

To: "'[ontolog-forum] '" <ontolog-forum@xxxxxxxxxxxxxxxx>
Cc: semantic-web@xxxxxx, 'Dan Brickley' <danbri@xxxxxxxxxx>
From: "Rich Cooper" <rich@xxxxxxxxxxxxxxxxxxxxxx>
Date: Mon, 7 Sep 2009 16:03:25 -0700
Message-id: <20090907230414.AA88E138CF7@xxxxxxxxxxxxxxxxx>

 

 

 

Sincerely,

Rich Cooper

EnglishLogicKernel.com

Rich AT EnglishLogicKernel DOT com

 

You wrote:

 

Those are totally independent ideas, which can be mixed and matched in any way you want:

- Nominalism vs. realism is the issue of whether the laws of nature refer to something real or whether they are arbitrary patterns that somebody has merely given a name to.

 

Exception: names can be given to the unreal, imaginary, constipated, undefined, or responsible, but each of those names implies a structure of common understanding between speaker and hearer.  I'm pretty sure you and each reader understand that the value we associate with each name in our vocabulary isn't in the name (a rose is a rose, etc.) but in the experience of what it designates (smelly, nice, red, etc.).  Whether the common concept we share communicates any element of reality is debatable.  These are the blocks we move in the game, like in the Winograd blocks world.  We talk to each block and listen to the response if we find it interesting.  If you like realism, cool!  I like gritty errors that can be used to put two and two together.  That way, every time a theoretical structure is cracked by inserting such an error, a nice new one is formed without it; it just takes a little time to crack and restructure the solid bits better.  Then the game continues for the rest of the tick.  Tock.  Time to continue.  We still have work to do.

 

- Grouping things in sets is used in every approach.  Nominalists would say that all groupings are more or less arbitrary, and realists would insist on looking for the principles for grouping.

 

But if causality is designatable but not predictable, by your (my) definition, how do you define the sets?  They would have to be defined extensionally, if you are using the system to completely predict the next tick.  If not, the intensional subset hasn't been absorbed into the extensional one yet.  It's just a matter of ticks.  That is induction at work (whoops -- examples of induction: recursion, iteration, problem reduction, structured programming, event scheduling, etc.).
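To make that extensional/intensional contrast concrete, here is a minimal Java sketch (an illustration added here, not anything from the original post): the first set is given purely by enumeration, the second by a principle, and the two only coincide once the principle has been evaluated over some universe.

    import java.util.Set;
    import java.util.stream.Collectors;
    import java.util.stream.IntStream;

    public class ExtensionVsIntension {
        public static void main(String[] args) {
            // Extensional definition: the grouping is just an enumeration of members.
            Set<Integer> extensional = Set.of(2, 4, 6, 8);

            // Intensional definition: a principle (the predicate) determines membership,
            // and the extension only appears after the predicate is applied to a universe.
            Set<Integer> intensional = IntStream.rangeClosed(1, 9)
                    .filter(n -> n % 2 == 0)   // the "principle for grouping"
                    .boxed()
                    .collect(Collectors.toSet());

            // Once evaluated, the intensional grouping has been "absorbed" into an extension.
            System.out.println(extensional.equals(intensional));   // prints: true
        }
    }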

 

- Reductionism is the idea that there is an ultimate foundation that everything else can be reduced to -- e.g., biology can be reduced to chemistry, and chemistry can be reduced to physics.  In effect, belief in foundations is more realist than nominalist.  However, you can have debates between realists and nominalists at each level about the reality of the principles discovered at that level.

 

My personal preference is for realism about the laws of nature.

 

Reductionism is as real as algebra, Lisp, SQL, or any other convenient context-management apparatus, method, representation, or specification.  Concepts are real because we speak their names to each other (us, not them) and hear the responder signal back a unique interpretation.  So a concept is a sign in a time slice located between at least two people with enough interest to discuss it, and each will give it their own unique name, and perhaps definition, if they choose to remember it at all.  Only the terminal nodes are semantic.  Concepts are useful nominal handholds for picking up and carrying the big reality loads that language users sling around nominally.  Anaphora (category error instances) are slips of the mind, convenient compression tactics, or other shortcuts.  Perhaps they arise from not enough information (misnamed, ununifiable, defective context projected...) or from a bad signal-to-noise ratio (lots of extraneous symbols, unrepresentative data...).

 

JS> I prefer to use the word 'set' for a grouping that is neutral with respect to the existence or nonexistence of some principle for grouping.

 

Your intension is not the word.  The word 'set' is itself a nominal signal with substantial baggage (impedimenta) of definition specs to interpret when it gets applied to an instance.  Then the definition changes with time and has to be relearned using empirical error management.  There is no need to solve every problem NOW, certainly not those which haven't occurred YET; predicting the future is itself an error; patience is rewarded with future errors, to be analyzed then.  The analysis improves the play (theory, missive, design, artifact...) for interpretation (emulation, analysis, query...), and the play's the thing.

 

I prefer to use the word 'type' for groups that are determined by some principle -- either a law of nature or a human choice.
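A minimal Java sketch of that set/type distinction, offered purely as an illustration (the Mammal/Whale/Rock names are invented here, not taken from the post): a set is just a grouping, while a type's membership is decided by a stated principle.

    import java.util.Set;

    public class SetVsType {
        interface Mammal {}                      // a 'type': membership follows a stated principle
        record Whale() implements Mammal {}
        record Rock() {}

        public static void main(String[] args) {
            // A 'set': just a grouping, neutral about why these things are together.
            Set<Object> grouping = Set.of(new Whale(), new Rock(), "Tuesday");

            // The type's principle decides membership, independently of any grouping.
            grouping.forEach(x ->
                System.out.println(x + " is a Mammal? " + (x instanceof Mammal)));
        }
    }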

 

Although I agree that the principles of biology are based on chemistry and the principles of chemistry are based on physics, I also believe that there are laws at each level that would be extremely difficult, and probably humanly impossible, to translate directly to the lowest possible level.

 

You convinced me.  Thus the need for recursion, iteration, and replication - a choice among alternative equivalent interpretations.  Each concept has a unique context (possibly) which we haven't YET decoded, other than perhaps as far as naming (nominally) the observable constituents in that same context box (from which we recursed to do the analysis of the constituents).  The error is in our projections of past concepts onto new concepts which don't relate - those are orthogonal context projections, namable perhaps, but with their constituents still TBD, IMHO.  That makes them projection patterns echoed from the scene to the observer: not realities.  The allegory of the cave.

 

By capturing those projections (not the data itself), we design a vocabulary and axiom set in our personal projection of that slice in time.  We have other projections for those other contexts.  Get enough of them, figure out which context belongs to which, specify an analysis (identify concepts, properties, processes), and consider alternative performance improvement tactics (compress, shrink, conserve, deduplicate...) for accuracy and efficiency (process improvement).  After that, you can use reductionism, modeling, math, or medication to do the analysis and performance improvement.

 

In computer system design, we develop many different levels: high-level languages such as Java, a lower-level interface for the Java Virtual Machine (JVM), a still lower level for some machine architecture, such as the X86 or Power, and multiple levels of functional units for supporting a machine interface, which is mapped to some kind of computer circuitry, which is itself mapped to silicon chips.

 

Multiple levels of compilers can "reduce" complex algorithms to chips, but it's humanly impossible to understand more than one level of reduction at a time.  To correct any bugs, it's necessary to go back to the top level and recompile.
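As a small illustration of one such level of reduction (a sketch added here, not part of the original exchange), here is a trivial Java method together with, in comments, approximately what javac reduces it to at the JVM bytecode level; the further reductions to X86 or Power instructions and to circuitry are left unstated.

    public class Levels {
        // One level of "reduction": Java source down to JVM bytecode.
        static int add(int a, int b) {
            return a + b;
            // javac reduces the body above to roughly this JVM-level program:
            //   iload_0   // push argument a onto the operand stack
            //   iload_1   // push argument b
            //   iadd      // integer add
            //   ireturn   // return the top of the stack
            // The JIT then reduces that again to machine instructions, and no one
            // routinely reads all of the levels at once.
        }
    }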

 

What's wrong with that?  Recompile.  The play has to be played at least once to discover its context - remember the halting problem.  From that run come its constituents' own interpreters (concept, context, constituents), with one completely new triple for each defined (not merely named) constituent.

 

In other words, God might be able to implement and understand reductionism, but humans can't.

 

John Sowa

 

But since we can't be God, we don't have to.  We can implement, take measurements, theorize about the results, update the old theories, update both intensional theories and extensional data, and recurse for the next tick.  It keeps you busy.  Sometimes implementation is the only way to get to another cycle.  You can't be told how to ride a bike; you have to experience it.  I couldn't give advice to my kids; they're too smart to follow static instructions.  Implementation provides a working model with flaws and inefficiencies which you can use like a binary search against your theories.  Start at the middle; choose the best direction (earlier or later theory); measure performance so far; repeat from "choose" until the goal is satisfied.  All you need is an enumerator of the structures to visit within the goal definition (initially stored in the DB; updated as necessary at least once in each tick).
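A minimal sketch of that "binary search against your theories" loop, under an assumption the post does not state (that measured performance is roughly unimodal over the ordered theory revisions); the revision list, score function, and goal threshold are all hypothetical names invented for illustration.

    import java.util.List;
    import java.util.function.ToDoubleFunction;

    public class TheorySearch {
        // Start at the middle; choose the best direction (earlier or later theory);
        // measure performance so far; repeat from "choose" until the goal is satisfied.
        static <T> T search(List<T> revisions, ToDoubleFunction<T> score, double goal) {
            int lo = 0, hi = revisions.size() - 1;
            T best = revisions.get((lo + hi) / 2);
            while (lo <= hi) {
                int mid = (lo + hi) / 2;
                best = revisions.get(mid);
                double measured = score.applyAsDouble(best);   // measure performance so far
                if (measured >= goal) {
                    break;                                     // goal satisfied
                }
                // Choose a direction by probing the later neighbor (assumes unimodality).
                int next = Math.min(mid + 1, hi);
                if (next != mid && score.applyAsDouble(revisions.get(next)) > measured) {
                    lo = mid + 1;                              // later theories look better
                } else {
                    hi = mid - 1;                              // fall back to earlier theories
                }
            }
            return best;
        }

        public static void main(String[] args) {
            // Hypothetical usage: five theory revisions scored by how well they fit measurements.
            List<Double> fits = List.of(0.2, 0.5, 0.8, 0.9, 0.7);
            System.out.println("best fit found: " + search(fits, f -> f, 0.85));
        }
    }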

 

 

I would love to see you write more about recursive content management (active and passive both), because I have found your tutorials and expositions so clear and communicable to others.  Thanks!

 

-Rich

 

 


