Chris,
I completely agree with your explanation:
RV
>> All knowledge is fallible -
CM
> Well, if by this you mean that things that we know can be false,
> knowledge is *not* fallible. We can't *know* things that are false
> -- cf. the traditional definition of knowledge as justified *true*
> belief.
> However, if by "all knowledge is fallible" you mean only that
> our *justification* for things that we know can be undermined,
> that is certainly true. In that case, I'd suggest that the more
> general (if not quite correct) thing to say is that all *beliefs*
> are fallible...
> Note this is not to say that we can always *know* whether or not
> the axioms of a theory are all true. But their being true and our
> knowing that they are true are two different things.
Yes. That is very close to what Peirce said about such issues.
RV
>> Therefore, from one context to another, we need to have a way of dealing
>> with the incommensurability between different linguistic frameworks
>> associated with those contexts.
CM
> Would you provide an example of different, modern day linguistic
> frameworks that are "incommensurable"? Please stick to frameworks
> that have a bearing on ontological engineering.
When I commented on Richard's post, I ignored this point and discussed
the question of using "Human Intelligence" to resolve such issues.
I agree that the term 'incommensurable' requires some definition.
But I suspect that people apply it to vague, confused, or missing
definitions. For such things, I would apply the epigram by Alan Perlis:
You can't translate informal specifications to formal specifications
by any formal algorithm.
In such cases, I agree that no formal algorithm can do the mapping.
But I would also say that human intelligence, by itself, can't do
the mapping either. You would need some additional information
that could be used to fill in the gaps, resolve ambiguities, and
clarify vague points.
I admit that human assistance is necessary to add that information,
but there is no reason in principle why a computer system couldn't
use search engines and reference works to do the same.
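To make that point concrete, here is a minimal sketch of the kind of
lookup I have in mind. It consults WordNet through the NLTK library
(my choice of reference work, purely for illustration; the function
name candidate_senses is likewise mine) to enumerate the possible
senses of an ambiguous term, which is the raw material any system
would need before it could resolve the ambiguity:

    # Minimal sketch: consult a machine-readable reference work to
    # list the possible senses of an ambiguous term from an informal
    # specification.  Requires: pip install nltk; nltk.download('wordnet')
    from nltk.corpus import wordnet as wn

    def candidate_senses(term):
        """Return (sense id, gloss) pairs for every WordNet sense of term."""
        return [(s.name(), s.definition()) for s in wn.synsets(term)]

    # 'bank' is the classic example of lexical ambiguity:
    for sense, gloss in candidate_senses("bank"):
        print(sense, ":", gloss)

Choosing among those senses is where the extra information comes in;
the lookup itself is purely mechanical.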
Another interpretation of the term 'incommensurable' might be
that you have two theories about the same subject for which there
is no 1-to-1 mapping of terms from one to the other.
For this case, I suspect that human intelligence would also run
into difficulties. As a systematic approach to the problem, I would
recommend placing both theories in the Lindenbaum lattice and checking
to see which possible steps of belief revision could lead from one
to the other.
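To illustrate, here is a toy sketch of that search. It treats axioms
as opaque tokens, restricts the search to the finite sublattice spanned
by the two theories, and models the revision steps as single-axiom
contraction and expansion. (The representation and the function name
revision_path are my own; a serious implementation would call a theorem
prover at each step to check consistency and entailment.)

    # Toy sketch: breadth-first search through the finite sublattice
    # spanned by two theories, where each step is a single belief-
    # revision move: contraction (drop an axiom) or expansion (add one).
    from collections import deque

    def revision_path(theory_a, theory_b):
        """Shortest sequence of contractions/expansions from a to b."""
        pool = set(theory_a) | set(theory_b)   # axioms available for expansion
        start, goal = frozenset(theory_a), frozenset(theory_b)
        queue, seen = deque([(start, [])]), {start}
        while queue:
            theory, steps = queue.popleft()
            if theory == goal:
                return steps
            moves = [("contract", ax, theory - {ax}) for ax in theory] \
                  + [("expand", ax, theory | {ax}) for ax in pool - theory]
            for op, ax, nxt in moves:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [(op, ax)]))
        return None   # unreachable within this sublattice

    print(revision_path({"p", "q"}, {"p", "r"}))
    # one shortest path: [('contract', 'q'), ('expand', 'r')]

The sequence of steps it returns is exactly the kind of record that a
semi-automated tool could present to a human for approval, one revision
at a time.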
That exercise might provide some insight to guide human intuition. But
some version of it might also be used in automated or semi-automated
methods of belief or theory revision.
John