Pat,
I agree with the following statement:
> The definition of a random sequence is that no matter how much
> of it you have, there is no way even in principle to compute
> any information about the next item.
I also agree with the following statement, but I would qualify it:
> This is why their information capacity is as high as it can get,
> because you can't compress them into a smaller package.
You can, of course, do superficial compression. For example, you
might get a random string of 1000 alphabetic characters stored in
1000 bytes, one 8-bit byte per character. But if the string uses only
26 letters, each character carries at most log2(26), about 4.7, bits of
information, so the string can be compressed to roughly log2(26)/8, or
59%, of its original size, about 588 bytes. That removes waste in the
encoding; it tells you nothing about the next character in the sequence.
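To make the arithmetic concrete, here is a small Python sketch (my own
illustration, not part of the original exchange) that packs a
1000-character string over a 26-letter alphabet into about 588 bytes by
treating the whole string as one base-26 integer; the names pack and
unpack are just for this example:

import math
import random
import string

def pack(s: str) -> bytes:
    """Encode a string over 'a'..'z' as one big base-26 integer, then as bytes."""
    n = 0
    for ch in s:
        n = n * 26 + (ord(ch) - ord('a'))
    length = (n.bit_length() + 7) // 8
    return n.to_bytes(length, 'big')

def unpack(data: bytes, num_chars: int) -> str:
    """Inverse of pack: recover the original string of known length."""
    n = int.from_bytes(data, 'big')
    chars = []
    for _ in range(num_chars):
        n, r = divmod(n, 26)
        chars.append(chr(ord('a') + r))
    return ''.join(reversed(chars))

s = ''.join(random.choice(string.ascii_lowercase) for _ in range(1000))
packed = pack(s)
assert unpack(packed, 1000) == s
# Typically prints 588 bytes, the information-theoretic bound of
# ceil(1000 * log2(26) / 8) bytes, versus the original 1000 bytes.
print(len(packed), "bytes; bound =", math.ceil(1000 * math.log2(26) / 8))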
> But this also means that you can't in any sense parse them.
In our work with graphs, we often generate random graphs (actually
the usual pseudo-random stuff you get on digital computers). But
we can parse common computer representations of graphs, even random
graphs, to generate a much more compact representation.
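As a rough illustration of that kind of parsing (a sketch of my own,
assuming sparse graphs stored as dense adjacency matrices, details the
paragraph above does not actually specify), the following Python
re-encodes a pseudo-random graph as an edge list that takes far less
space, without predicting anything about the random choices behind it:

import random

def random_graph(n: int, p: float, seed: int = 0) -> list[list[int]]:
    """Dense n x n adjacency matrix of an undirected pseudo-random graph."""
    rng = random.Random(seed)
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i][j] = adj[j][i] = 1
    return adj

def to_edge_list(adj: list[list[int]]) -> list[tuple[int, int]]:
    """Parse the dense matrix into a compact list of (i, j) edges with i < j."""
    n = len(adj)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if adj[i][j]]

adj = random_graph(n=1000, p=0.01)
edges = to_edge_list(adj)
# Dense matrix: 1,000,000 entries; edge list: about p * n*(n-1)/2, roughly
# 5,000 pairs. The compactness comes from redundancy in the representation,
# not from any structure found in the random choices themselves.
print(len(edges), "edges instead of", len(adj) ** 2, "matrix entries")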
But that doesn't violate the following principle, which I agree with:
> you can't find any structure in them to utilize to say something
> about something else.
John