John and Pat,
Sincerely,
Rich Cooper
EnglishLogicKernel.com
Rich AT EnglishLogicKernel DOT com
Your discussion is interesting, but it doesn’t fully reflect the
two-ended nature of communication, IMHO. The “accuracy” or “fuzziness”
of a concept has another aspect. It can also be explained by iteratively communicated
observations from different people; each one in the group perceives the world
slightly differently, yet communicates it with a supposedly shared word in a
supposedly shared language. The approximations can be, at least in part,
due to the wear and tear of concept definitions in communications among the group.
For each participant, the most recently chosen definition is only one in a
sequence of changing definitions as the concept is viewed from subjective
angles. The sequence of facts discovered and deduced changes the possible
world of that person as the conversation progresses.
Engineering the (m,t) pairs just sharpens the confusion. The real
issue is to represent the totality of all the viewpoints in a group so that the
members can be tracked in a single association with the group.
So the possible world is constructed inside each listener’s
expectations. The task of a good speaker is to predict the listener’s
reactions, and to adapt to them, as needed to communicate the concept as
clearly as the speaker can present it. But each listener in a group still
has a different possible world in mind from the next listener, or from the
speaker, by the time the discourse has run its course.
-Rich
JFS> Pat,
I agree with your point, but one could adopt another solution
that does not require a metalevel discussion of possible worlds.
RS>> Suppose we replace these two words, "true" and "real", with a prefix
>> "approximately"; what opens up as additional attributes that are
>> required to complete model theory or a model of whatever?
PH> Good question. There is no definitive answer, but one that can
> (and has) been given, is that to be approximately true is to be
> true (in the original, exact, model-theoretic sense) of an
> approximation, i.e. of a world which is in some appropriate sense
> 'sufficiently like' the actual world (assuming that there is
> a notion of a precise actual world available, of course.) The
> great utility of this is that it keeps the semantic theory intact
> (and unchanged), and simply adds to it a notion of approximation
> or closeness between worlds.
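A minimal sketch of the idea Pat describes might look as follows;
the world representation (a dictionary of numeric facts), the
distance function, and the tolerance value are all illustrative
assumptions, not anything defined in this thread:

    # Sketch: "approximately true" as exact truth in a nearby world.
    # The world representation, distance metric, and tolerance are
    # assumptions made for this example only.

    def distance(w1, w2):
        """Crude closeness measure: largest disagreement on any shared fact."""
        return max(abs(w1[k] - w2[k]) for k in w1.keys() & w2.keys())

    def approximately_true(statement, actual_world, candidate_worlds, tolerance):
        """True iff the statement is exactly true of some world
        'sufficiently like' the actual world."""
        return any(statement(w) for w in candidate_worlds
                   if distance(w, actual_world) <= tolerance)

    # Example: "the car weighs 2 tons" is exactly false of the actual
    # world, but approximately true, because a world within 0.5 tons
    # of it satisfies the statement exactly.
    actual = {"car_weight": 1.8}
    worlds = [{"car_weight": 1.8}, {"car_weight": 2.0}, {"car_weight": 3.0}]
    weighs_two_tons = lambda w: w["car_weight"] == 2.0
    print(approximately_true(weighs_two_tons, actual, worlds, tolerance=0.5))  # True

The semantic theory itself is untouched: truth is still evaluated
exactly, and only the choice of world is relaxed.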
Another way to formalize the problem is to add an extra argument
to physical measurements to specify the expected error range.
For example, a common convention is to assume a default error
range of one half of the last significant digit, unless a
different error range is explicitly specified. Therefore, the
statement "That car weighs 2 tons" would imply that the weight w
is in the range 1.5 < w < 2.5 tons. This approach would require
the ontology of measurements to specify each measurement as a
pair of numbers (m,t), where m is the mean value and t is the
tolerance or error range.
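As a rough illustration of that convention (the class name, the
default-tolerance rule, and the helper below are guesses made for
this example, not something specified in the thread), the (m,t)
pair could be represented as:

    # Sketch: a measurement as a (mean, tolerance) pair, with the default
    # tolerance taken as one half of the last significant digit when no
    # explicit error range is given. Names are assumptions for this example.
    from dataclasses import dataclass

    @dataclass
    class Measurement:
        mean: float       # m: the stated value
        tolerance: float  # t: the error range around m

        def contains(self, value):
            """True if the value falls inside the implied open interval."""
            return self.mean - self.tolerance < value < self.mean + self.tolerance

    def from_statement(value, decimal_places=0):
        """Apply the default convention: half of the last significant digit."""
        return Measurement(mean=value, tolerance=0.5 * 10 ** (-decimal_places))

    # "That car weighs 2 tons" -> (m, t) = (2, 0.5), i.e. 1.5 < w < 2.5 tons.
    car = from_statement(2.0)
    print(car.contains(1.8), car.contains(2.6))  # True False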
Both approaches multiply entities beyond the Ockham limit.
One adds "worlds", and the other adds "error ranges".
It would also be possible to add entities called emotions,
goals, etc.
John