This mail is publicly posted to a distribution list as part of a process
of public discussion, any automatically generated statements to the
contrary notwithstanding. It is the opinion of the author and does not
represent an official company view.
Sean Barker
BAE SYSTEMS - Advanced Technology Centre
Bristol, UK
+44(0) 117 302 8184
Comments on Pat Hayes' reply to Rob Freeman
Radar signal processing theory treats both signals and noise as random,
but with different probability distributions. Detection of a signal is a
trade-off between two types of error: declaring a detection when there
is no signal (a false alarm), and failing to declare one when a signal
is present (a miss). Given a fixed set of observations, there is no way
of minimising both simultaneously. The only way to improve performance
is to make more observations, which is in practice what is done.
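To make the trade-off concrete, here is a rough Python sketch (my own
illustration, with invented signal and noise levels, not a model of any
real radar). With a fixed threshold, averaging more pulses reduces the
false alarm rate and the miss rate at the same time:

import numpy as np

rng = np.random.default_rng(0)
SIGNAL, SIGMA, TRIALS = 1.0, 1.0, 100_000

def error_rates(n_pulses, threshold):
    # Average n_pulses observations, then compare to the threshold.
    noise_only = rng.normal(0.0, SIGMA, (TRIALS, n_pulses)).mean(axis=1)
    with_signal = SIGNAL + rng.normal(0.0, SIGMA, (TRIALS, n_pulses)).mean(axis=1)
    p_false_alarm = (noise_only > threshold).mean()   # detection, no signal
    p_miss = (with_signal <= threshold).mean()        # signal, no detection
    return p_false_alarm, p_miss

for n in (1, 4, 16):
    pfa, pm = error_rates(n, threshold=0.5)
    print(f"{n:2d} pulse(s): P(false alarm)={pfa:.3f}  P(miss)={pm:.3f}")

Moving the threshold only trades one error against the other; adding
observations is what shrinks both.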
The problem is complicated by the physical processes involved, which
usually show up as a rapidly decaying auto-correlation function (i.e.
the correlation coefficient between one observation and each subsequent
observation of the same phenomenon). There are a number of techniques,
such as changing frequency on each pulse, that have the effect of
more-or-less eliminating the auto-correlation tail - that is, making
the signals appear random (though without changing the distribution
function) - since this makes processing more reliable.
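As an illustration of the point (a sketch, not a radar model - here
shuffling the samples stands in for pulse-to-pulse frequency agility):
the auto-correlation tail of a correlated process vanishes once the
observations are decorrelated, while the distribution of values is
untouched:

import numpy as np

rng = np.random.default_rng(1)

def autocorr(x, max_lag):
    # Correlation coefficient between the series and itself at each lag.
    x = x - x.mean()
    denom = np.dot(x, x)
    return [np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)]

# An AR(1) process: each observation strongly correlated with the last.
n, rho = 10_000, 0.9
x = np.zeros(n)
for i in range(1, n):
    x[i] = rho * x[i - 1] + rng.normal()

print("correlated:  ", [round(c, 2) for c in autocorr(x, 5)])
shuffled = rng.permutation(x)  # same values, ordering destroyed
print("decorrelated:", [round(c, 2) for c in autocorr(shuffled, 5)])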
Also, early signal processing systems used "hard limiting", that is,
reducing the input signal to a stream of 1s and 0s. These systems were
still able to detect targets in the presence of noise, although with
some performance loss arising from the quantisation noise.
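A one-bit sketch of why that still works (my illustration; the signal
level is invented): even after each sample is reduced to its sign,
averaging enough samples still separates signal-plus-noise from noise
alone, just less efficiently than the full samples would:

import numpy as np

rng = np.random.default_rng(2)
N = 1_000
noise = rng.normal(0.0, 1.0, N)
target = 0.3 + rng.normal(0.0, 1.0, N)  # weak signal buried in noise

for name, x in (("noise only ", noise), ("with signal", target)):
    hard = np.sign(x)  # hard limiting: keep only one bit per sample
    print(f"{name}: mean of hard-limited samples = {hard.mean():+.3f}")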
Pat's claim "The definition of a random sequence is that no matter how
much of it you have, there is no way even in principle to compute any
information about the next item." is true only if you exclude
probabilistic estimates (which you might do, depending on how you
interpret "information"). For example, if you encode the tosses of a
coin as a bit stream, then as you continue to observe the stream you
will be able to make increasingly accurate estimates of the probability
that the next bit will be a 1. Given the additional knowledge that this
is the encoding of coin flips, you will also be able to estimate the
probability that it is a fair coin.
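For instance (a sketch assuming SciPy is available; the true bias, the
0.02 "near-fair" tolerance and the uniform prior are all my own choices
for illustration): the running estimate of P(next bit = 1) tightens as
observations accumulate, and a Beta posterior quantifies how likely the
coin is to be near-fair:

import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(3)
P_TRUE = 0.6  # unknown to the observer
flips = rng.random(10_000) < P_TRUE

for n in (10, 100, 1_000, 10_000):
    heads = int(flips[:n].sum())
    estimate = heads / n  # estimate of P(next bit is 1)
    # Posterior Beta(heads+1, tails+1) under a uniform prior;
    # probability that the bias lies within 0.02 of fair.
    p_fair = (beta.cdf(0.52, heads + 1, n - heads + 1)
              - beta.cdf(0.48, heads + 1, n - heads + 1))
    print(f"n={n:5d}  P(1) ~ {estimate:.3f}  P(|p - 0.5| < 0.02) ~ {p_fair:.3f}")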
This certainly leads on to the question, "what is the ability of
statistical techniques to learn things from the web?" Indeed, a
statistically based hypothesis such as "Sean Barker the baseball player
is different from Sean Barker the science fiction character" could
probably be substantiated on the basis of the differences in the
populations of terms found on the different sites.
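A toy version of what "substantiated on term populations" might look
like (the documents and vocabulary here are invented placeholders;
cosine similarity over term counts is one standard technique, not
necessarily the best one):

from collections import Counter
import math

DOC_BASEBALL = "barker pitched innings league season batting runs"
DOC_SCIFI = "barker character novel starship plot author chapter"
QUERY = "barker pitched three innings this season"

def cosine(a, b):
    # Cosine similarity between two term-count vectors.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = lambda v: math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm(a) * norm(b))

q = Counter(QUERY.split())
for name, text in (("baseball", DOC_BASEBALL), ("sci-fi", DOC_SCIFI)):
    print(f"similarity to {name}: {cosine(q, Counter(text.split())):.3f}")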
However, this is a separate question from that of axiomatic ontology.
________________________________
From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx
[mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Pat Hayes
Sent: 08 February 2008 17:21
To: Rob Freeman
Cc: [ontolog-forum]
Subject: Re: [ontolog-forum] Axiomatic ontology
At 6:48 PM +0800 2/8/08, Rob Freeman wrote:

On Feb 8, 2008 2:21 PM, Pat Hayes <phayes@xxxxxxx> wrote:
>
> I take it to be obvious that a random sequence cannot encode
> information about anything other than itself. Right?
I think I understand what you are suggesting. I think you are
suggesting that while a random string is very complex, it can't
actually code anything. If I drop my coffee on the floor the mess might
be complex to describe, but it won't tell me anything.
I think that is intuitive, and the way we normally see randomness. But
I'm not sure that it's true, not for all random systems, anyway.
Maybe it comes down to the way the random pattern is created. Say you
code one signal. Then you code another using the same elements. The
second one will interfere with the first one a little, but it need not
obliterate it. If you push enough signals on, eventually the overall
pattern might appear random.
But if you can extract information from it, it isn't in fact random.
The definition of a random sequence is that no matter how much of it
you have, there is no way even in principle to compute any information
about the next item. This is why the information capacity of random
sequences is as high as it can get: you can't compress them into a
smaller package. But this also means that you can't in any sense parse
them: you can't find any structure in them to utilize to say something
about something else. They are entirely used up being themselves. All
they can do, as it were, by way of communication, is to exhibit
themselves and then stop.
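One practical face of this incompressibility (a sketch; zlib stands in
for any general-purpose compressor): random bytes will not compress,
while structured text collapses to a fraction of its size:

import os
import zlib

random_bytes = os.urandom(100_000)
structured = b"the quick brown fox jumps over the lazy dog " * 2_273

for name, data in (("random    ", random_bytes), ("structured", structured)):
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{name}: compressed to {ratio:.1%} of original size")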
I had better stop this conversation myself, as I am getting to the
point where I have no confidence that I know what I am talking about
:-)
In fact, from the point of view of each individual element, it will be
random, because there will be no way to decide, given only a single
element, which pattern it belongs to (it will belong to many). But
combinations of elements will still reveal patterns. The key is using
combinations to select, when individually the distribution is random.
I think there is an important distinction between appearing random and
actually being random. You seem here to be talking about something like
a hologram (?)
What will happen is you will get a whole lot of patterns using the same
elements. And because they use the same elements you won't be able to
form more than one at a time. But they will all be there. Different
combinations will "resonate". And because you can have many more
combinations of n elements than n, you will be able to have many more
"resonances" for a given number of elements than you have elements (if
the distribution for each element is random).
It is that "more combinations for n elements than n" that gives you the
extra storage space. (Some suggest it is this kind of "larger than
itself" ability which might give the mind its ability to comprehend the
universe, when the mind is itself smaller than the universe.)
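One way to read this superposition idea (my reading, not Rob's code -
as sketched it behaves like a Bloom filter): store each pattern as a
pseudo-random combination of shared elements; any single element looks
random, yet the combinations still pick out what was stored:

import random

POOL, PER_PATTERN = 256, 8
store = [0] * POOL

def elements_of(pattern):
    # Each pattern maps to a fixed pseudo-random combination of elements.
    return random.Random(pattern).sample(range(POOL), PER_PATTERN)

for p in ("signal-A", "signal-B", "signal-C"):
    for e in elements_of(p):
        store[e] = 1  # superimpose the pattern onto the shared elements

for p in ("signal-A", "signal-B", "never-stored"):
    present = all(store[e] for e in elements_of(p))
    print(f"{p}: combination present = {present}")

With 256 elements and 8 per pattern there are vastly more than 256
possible combinations, which is the "larger than itself" storage being
described, at the price of occasional false matches.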
BTW, this "not being able to form more than one at a time" starts to
look a lot like an uncertainty principle, or at least Chaitin's Omega.
Which is what I thought you might be alluding to.
No, and indeed I don't know that term. I will go and find out more
about it, thanks for the pointer.
Pat

-Rob

--
---------------------------------------------------------------------
IHMC                                     (850)434 8903 or (650)494 3973 home
40 South Alcaniz St.                     (850)202 4416 office
Pensacola                                (850)202 4440 fax
FL 32502                                 (850)291 0667 cell
http://www.ihmc.us/users/phayes          phayesAT-SIGNihmc.us
http://www.flickr.com/pathayes/collections