
Re: [ontolog-forum] Axiomatic ontology

To: "Pat Hayes" <phayes@xxxxxxx>
Cc: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "Rob Freeman" <lists@xxxxxxxxxxxxxxxxxxx>
Date: Sat, 9 Feb 2008 16:08:48 +0800
Message-id: <7616afbc0802090008q5970501dl60de79cf7bd7f275@xxxxxxxxxxxxxx>
Hi Pat,

Thanks for taking the time to ask the questions.

On Feb 9, 2008 1:21 AM, Pat Hayes <phayes@xxxxxxx> wrote:
> ...if you can extract information from it, it isn't in fact random.

With a nod to Wittgenstein, words trip us up in all kinds of ways. We
use the word "random" sometimes to mean "meaningless" and at other
times to mean "unpredictable".

I think you are using "random" in the sense of "meaningless" here, but
drawing conclusions beyond the scope of that meaning. In particular, I
don't think meaningfulness need imply predictability.

But these are slippery concepts. If this discussion shows nothing
else, it shows that randomness is poorly understood. At the very least
we need to think about it more, and should not simply dismiss
information sources (like the Web) because their elements are related
randomly (that is, unpredictably).

> ...You seem here to be talking about something like a hologram (?)

A hologram encodes different information from different perspectives,
so maybe the analogy is apt, yes. It's been a while since I looked at
them, but I remember being fascinated when I did.

It would be interesting to know whether a hologram can be compressed
without losing resolution. If the analogy with randomness holds, it
should not be possible.
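As an aside, the incompressibility intuition is easy to demonstrate on ordinary data, quite apart from holograms. The sketch below (my illustration, not anything from the thread) compares lossless compression of highly structured bytes against bytes drawn from the OS entropy source:

```python
import os
import zlib

# Kolmogorov-style intuition: data with a short description ("non-random")
# compresses well, while data with no shorter description resists
# lossless compression almost entirely.
structured = b"abcdefgh" * 8192      # 64 KB of a repeating 8-byte pattern
random_bytes = os.urandom(65536)     # 64 KB from the OS entropy source

compressed_structured = zlib.compress(structured, 9)
compressed_random = zlib.compress(random_bytes, 9)

print(len(structured), "->", len(compressed_structured))
print(len(random_bytes), "->", len(compressed_random))
```

The structured input shrinks by orders of magnitude; the random input stays at roughly its original size (occasionally a few bytes larger, from format overhead).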

I was also struck by Jakub's comment that he associates Kolmogorov
with the learning complexity of neural networks. Do people in that
field appreciate the power of a network to represent more patterns
than it has elements? I've long felt the problem with neural networks
is that no one has thought to use the network itself as the
representation. In traditional neural network applications, most of
the vast representational space they offer goes unused. Instead we use
their separate robustness property to try to represent problems in
terms of classical classes again, abstracting out (robustly) as few
classes as possible.
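To make the "more patterns than elements" point concrete, here is a rough counting illustration (my example, not from the thread): a network of n units has only n elements, yet the number of distinct subsets of units, each a candidate "pattern" a distributed representation could in principle pick out, grows as 2**n:

```python
# Count the candidate sub-patterns (subsets of units) available to a
# network of n units. The element count grows linearly; the pattern
# count grows exponentially, vastly outstripping it.
for n in (4, 8, 16, 32):
    subsets = 2 ** n
    print(f"{n} units -> {subsets} possible sub-patterns")
```

Even at 32 units the subset count exceeds four billion, which is the sense in which the representational space dwarfs the elements that span it.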

> > BTW This "not being able to form more than one at a time" starts to
> > look a lot like an uncertainty principle, or at least Chaitin's Omega.
> > Which is what I thought you might be alluding to.
>
> No, and indeed I don't know that term. I will go and find out more about
> it, thanks for the pointer.

You should read Chaitin himself. He's an entertaining speaker. Here's
a talk I like:

http://www.cs.auckland.ac.nz/CDMTCS/chaitin/cmu.html

Anyway, it is easy to get lost in the technical detail of this. Just
to put it in context: what I am suggesting is good news. For one
thing, it would mean all that random information on the Web might be a
richer source of information than we imagine, exactly because it is
random. That is, with reference to Sean Barker's posts, the Web may be
able to store more information about him exactly because his name
alone tells us little. In this view, all that would be needed to
access this surfeit of information would be to accept that it can only
be understood as different, contradictory wholes, and to focus on
picking out the patterns which matter to us at any given moment.

In fact, because I'm suggesting all meaning works like this (the
chaotic cognition thing), and so is intrinsically
random/contradictory, something random like the Web (or natural
language) is the only thing that has any chance of representing
knowledge as a whole at all.

-Rob

Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx
