On Oct 12, 2008, at 6:49 AM, Rob Freeman wrote:

> John,
> Thanks for your analysis. That puts it nicely. Anything you can say in any logic, you should be able to say in English. In counting the meanings of English we need to count them all.
>
> I don't know if your observation that the number would be countably infinite is true also. What matters to me is that, even for finite sets, the number of possibly "meaningful" relations could be very large (unless we impose an artificial limitation in some way).
But do you have any reason to suppose that it WILL be large, in fact? Take an example, the set of pixels on my screen. This is finite: it has 1280 x 854 = 1,093,120, roughly a million, elements. The power set of this has 2^1093120 elements, a number too large to even imagine, one which would have over 300,000 digits if it were written in decimal. Each one of these is a possible (black-and-white) image on my screen. But very few of these images would be meaningful. Most of them would look like random 'snow'. Only a vanishingly tiny fraction of this large total would even be recognizable as screenshots in the usual sense. Moreover, for every one that is recognizable, there will be a large 'cloud' of nearby images which are almost exactly like that meaningful image, but have just one tiny change. Even if we only allow, say, five pixels to change (a change so small that it would likely not even be noticed), that means there will be n(n-1)(n-2)(n-3)(n-4)/5!, which in this case works out at around ten to the power 28, images which all carry essentially the same 'meaning', since they are virtually indistinguishable from one another. So just acknowledging the limitations of perceptual acuity reduces the number of effectively different 'meaningful' combinations by startlingly high factors.
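For concreteness, the arithmetic above can be checked in a few lines of Python (assuming the 1280 x 854 screen as stated):

```python
from math import comb, log10

n = 1280 * 854                      # pixels on the screen
print(n)                            # 1093120, roughly a million

# The power set has 2^n members; its decimal representation
# has floor(n * log10(2)) + 1 digits -- over 300,000 of them.
digits = int(n * log10(2)) + 1
print(digits)

# Images differing from a given one in exactly five pixels:
# n(n-1)(n-2)(n-3)(n-4)/5!, i.e. C(n, 5), around 10^28.
cloud = comb(n, 5)
print(f"{cloud:.2e}")
```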
> As Pat H. says:
>
> "This is why for example classical higher-order logic is undecidable, because its semantics requires that its interpretations contain all relations in this mathematical sense (and when the base set is infinite, this makes the set of relations much more infinite.)"
>
> Now, Pat deals with this
No, Pat doesn't "deal with" it. First-order logic deals with it. And 'deals with' is rather tendentious, suggesting a reaction to some problem. But in fact, FOL is older than higher-order logic, and does not "deal with this" because "this" simply does not arise in FO reasoning. Nor, indeed, should it arise; IMO, it's a non-problem, a non-issue. It's not even precisely formulated.
> by limiting the relations he considers
No, FOL does not limit the relations in any way. If you tell me about a relation, you can tell me about it in FOL. There is no restriction of any kind on the nature of the relation you want to describe. The limit is only on the total size of the set of all relations. But you will only have time in one lifetime, indeed in the entire lifetime of the known Universe, to tell me about finitely many relations. Why then would you want to insist (if indeed you do; HO logic does) that there must be an uncountable infinity of them?
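To make the contrast concrete, here is a sketch, using a made-up relation symbol, of how a particular relation is simply named and axiomatized in FOL, with no commitment to the totality of all relations:

```latex
% A hypothetical binary relation symbol Taller, described in FOL:
\forall x\,\forall y\,\big(\mathit{Taller}(x,y) \rightarrow \neg\,\mathit{Taller}(y,x)\big)
% An FO interpretation must supply a denotation for Taller (some
% binary relation on the domain satisfying this axiom), but it is
% not required to contain every set-theoretically possible relation
% on the domain, as a standard higher-order interpretation is.
```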
> "...this doesn't matter because the logic isn't obliged to consider all of these relations, only enough to provide denotations for all the relation-naming terms in the language."
> In short Pat assumes some kind of other semantic model

I'm talking here about FO semantics as opposed to higher-order semantics. Were you then arguing for the use of classical higher-order logic?

> ... and that other model limits the relations he needs to consider, so this explosion of possibly meaningful sets "doesn't matter", QED.
"Doesn't matter" cashes out as the FO completeness theorem.

> Pat doesn't like the explosion of relations. He needs some other system of meaning to tell him which ones are meaningful:
> "Yes, these sets are all distinct. I don't know if they are distinct in "meaningful ways" because I don't know what you count as a 'meaningful way'."
>
> But Pat, nobody knows what to count as a "meaningful way".
Why then did YOU introduce the word into the discussion? Your arguments are all based on the large cardinality of sets of combinations, but you regularly use this terminology of "meaningful". For all we know, combinatorial arguments may not even apply to meaningful sets. For example, is the intersection of two meaningful sets also meaningful? Does one always get a meaningful set by adding an element to another meaningful set? That is what is at issue.
> Why is this an issue?

Why do you think this (underdefined and apparently unanswerable) question is important? I don't think there is even a question here to be answered or an issue to be addressed.

> What right do we have to be throwing out all the combinatorial explosion of possible meaningful sets suggested by set theory?
Set theory does not suggest anything about 'meaningful' sets. It simply talks about sets. Yes, there are a lot of sets. Nothing at all follows about how many 'meaningful' sets there are. The fact that human beings do not seem to be involved in combinatorial explosions when they think or speak rationally suggests that the number of meaningful sets might be a lot less than the number of combinatorially possible sets. And one can give many other reasons to suggest that there are far fewer 'meaningful' sets than sets altogether: for example, the fact that people can do induction quite successfully, or that one-shot learning often works very well. You may hold other opinions, but it falls to you to come up with some reasons why this might be an issue. Simply reiterating elementary mathematical facts doesn't cut it.
> I don't blame you. It is normal to look at this combinatorial explosion as a problem.
Computationally, it IS a problem.

> Perhaps that is because of the "intractability" of such a system from the point of view of logic. But that same "intractability" has a flip side. It can also be seen as a power.
Of course. There is a fundamental tradeoff, much remarked on and studied, between expressive power and computational tractability. More expressive languages pose harder computational problems for detecting validity. To put the matter very crudely, being more expressive means you can say more, which means there are more ways to prove things, which means that searching for proofs gets harder as the search space gets bigger. (This is a caricature, but it gets the idea across.)

> What it tells us is that even for a finite "base set", you might elaborate a very large (countably infinite?) number of meanings.
Whoa. How do you get a countable infinity from a finite set? Not by combinatorics alone. You also need a productive grammar of some kind. It might be as simple as stringing them together in a sequence, of course. BTW, often, what makes one language more expressive than another is having a richer variety of ways of combining its basic symbols together, rather than a change in the actual symbols. FOL and HOL look very similar and use the same basic vocabulary.

> So a finite number of examples can specify an infinite number of categories. Not just infinite strings of bits.
Well, they are all encodable as strings of bits, of course. Finite strings specifying infinite numbers of categories too.
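The point that simple concatenation already turns a finite alphabet into a countably infinite language can be sketched in a few lines of Python (the function name and two-symbol alphabet are illustrative choices, not anything from the discussion):

```python
from itertools import count, product

def all_strings(alphabet):
    """Enumerate every finite string over a finite alphabet,
    shortest first: a countably infinite language generated
    from a finite base set by concatenation alone."""
    for length in count(0):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

gen = all_strings("ab")
first = [next(gen) for _ in range(7)]
print(first)   # ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```

The enumeration never terminates, yet every string it will ever produce is finite and built from just two symbols.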
> Am I the only one to see that as quite powerful?
No, you aren't. You have just summed up one of the central foundational notions of computer science.

> What I don't understand is why we are not using finite sets of examples to specify (very large or countably) infinite sets of categories in this way.
We are, in THIS way. It's called using finite languages. We all do it, all the time. This message is a long text written using only about 30 basic symbols. All formal logics do it as well.
Pat

_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontologforum/
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontologforum/
Unsubscribe: mailto:ontologforumleave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To Post: mailto:ontologforum@xxxxxxxxxxxxxxxx
