To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Pat Hayes <phayes@xxxxxxx>
Date: Fri, 7 Mar 2008 17:54:37 -0600
Message-id: <p06230906c3f77dbe1357@[10.100.0.20]>
At 6:06 PM -0500 3/7/08, Patrick Cassidy wrote:
[PC] ... I do not say that the ontology "defines" terms, I said it has a "logical specification of the meaning" of the terms. I also already described what I meant by that.

Pointer?
[PC] To answer the one question that raised a new point:

[PH] >> There is no such thing as THE necessary conditions. I presume you mean "some necessary conditions". But this is a very weak constraint indeed. Will ANY facts do?

[PC] The necessary conditions that it is my goal to include in any specification of a type are those that are sufficient to distinguish that type from all others in the ontology, and that are consistent with the intended meaning, requiring that they apply to all entities that are intended to be instances of that type. For many types, one could add additional necessary conditions, and ideally all would be included. But the ideal may well be too time-consuming to achieve. So I set a minimum criterion of being able to distinguish one type from all the others. As time permits, additional restrictions can narrow the meaning.

I'm still completely unable to see how this is a useful notion.
It is too vague to support any theory, and too ill-defined to mean
anything in practice (are you referring to properties of the type, or
of the instances of the type? What do you mean by 'ALL necessary
conditions' (my emphasis)?)
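
For concreteness, one possible reading of that minimum criterion can be sketched over a tiny invented ontology; the Python below is purely illustrative, and the type and condition names are assumptions, not anyone's actual proposal:

necessary_conditions = {
    "Chair": {"isArtifact", "hasSeat", "supportsSitter"},
    "Table": {"isArtifact", "hasFlatTop"},
    "Dog":   {"isAnimal", "isCanine"},
}

def undistinguished_pairs(ontology):
    # The minimum criterion, as read here: a type fails against another type if
    # every one of its necessary conditions is also a necessary condition of that
    # other type, so the conditions cannot tell the two types apart.
    problems = []
    for t, conds in ontology.items():
        for other, other_conds in ontology.items():
            if other != t and conds <= other_conds:
                problems.append((t, other))
    return problems

print(undistinguished_pairs(necessary_conditions))  # [] here: every type is distinguished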
[[[2]]] ontological disagreement:

[PH] >> disagreeing heatedly about whether or not a framed picture on a wall was in the office or part of the office

[PC] Would anyone misunderstand if one said that the picture is *in* the office? Probably not.

But that is part of my point: people are tolerant
of (what may appear to them to be) mis-use of words by others,
provided that they are able to extract enough meaning to enable their
current goal to be achieved, whatever that happens to be. But this
gives us a very loose fit between words and meanings, and this
looseness is why we cannot presume, for an agreement expressed in
words, that there is any exactness in the match of the underlying
concepts. We certainly cannot conclude that the mental ontologies are
anything like identical. All the evidence suggests otherwise.
[PC] The use of the word "part" is extremely broad, and there would be no difficulty defining a general sense in which the picture is "part" of the room in the broad sense, and also contained in the room.

Perhaps you had to be there to appreciate the intensity of the
analysis of meanings that took place. We got to issues like this:
bring a can of paint into a room; both agree that the paint in the can
is in the room. Dip a brush into the can, lift it out: is the paint on
the brush in the room? (Yes.) Now apply paint to the wall. Is this
paint on the wall now in the room? (Disagreement.) Allow the paint to
dry: is it now in the room? (Also disagreement, but between different
people.) When the door (which opens internally) is open, is it - the
door - in the room? (Disagreement.) Is the glass in the window in the
room? (Agreement: no.) Is the inner surface of the window glass
in the room? (Disagreement.) And so on. These are not disagreements
about words or phrasings, but about meanings and concepts.
[PC] It seems that no one in that discussion considered the possibility that both views could be represented consistently

That was not the point. The point was to simply list the things
in the room, as a preliminary to another part of the process, and this
disagreement surfaced to everyone's surprise, and was then examined in
detail. Everyone, including the protagonists, was astonished to
discover how extraordinary and 'obviously' false other people's
intuitions seemed to them to be. And that is my point: people's
intuitive ontologies of everyday terms, when examined in depth, do NOT
agree. People do not share a common 'mental model' of the everyday
world they - we - all inhabit. If they did, how could this mismatch of
intuitions have arisen?
[PC] (e.g. by distinguishing 'structuralPart' as a subrelation to distinguish it from the more generic 'part' relation, accommodating both views). The notion of disjoint relations (a part cannot be contained in) is not one that typically occupies someone learning the basic language. Meanings of the general terms do overlap, but are still interpreted correctly when used in specific contexts. I would not classify the picture as a "structuralPart" of the room (the more specific relation), but would have no doubt as to the intended meaning if someone said "pictures, furniture, and all other parts of the room" though it would sound odd.

No doubt, but then the same applies if you heard "Pictures,
furniture and all other shrbldbldsdajhf the room." We humans are
very good at extrapolating intended meanings from partial and faulty
evidence.
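
For what it is worth, the 'structuralPart' suggestion quoted above can be sketched in a few lines; the relation and individual names here are invented for illustration and are not drawn from any actual ontology:

facts = set()

def assert_structural_part(x, y):
    facts.add(("structuralPart", x, y))
    facts.add(("part", x, y))       # subrelation axiom: structuralPart(x,y) implies part(x,y)

def assert_part(x, y):
    facts.add(("part", x, y))       # the broad sense only

def assert_contained_in(x, y):
    facts.add(("containedIn", x, y))

assert_structural_part("door", "office")   # a structural part, hence also a part
assert_part("picture", "office")           # a part only in the broad sense...
assert_contained_in("picture", "office")   # ...and also contained in the office

print(("part", "picture", "office") in facts)             # True
print(("structuralPart", "picture", "office") in facts)   # False
print(("containedIn", "picture", "office") in facts)      # True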
[[[3]]]

[PH] >> HOW? How is it possible for two children in kindergarten to align their mental ontologies? The idea doesn't even make sense: it would require them to be telepathic.

[PC] No, no telepathy needed. They need only both hear a word used in reference to the same objects or events.

There is no evidence, psychological or computational, that such
mere associating of word sounds with objects is enough to learn
language from. How would one get from this the meaning of a word like
'embarrassing'? You are just not talking about anything remotely
real here.
See also [[[7]]] below.

[[[4]]]

[PH] You are however assuming that these 'implied ontologies' are themselves logic-based. (If you deny this, please explain what you mean by an 'inference' when talking of brains.)

[PC] Inferences I refer to here are of the sort: when someone says "I took a train to work" this implies that (among other things) (1) s/he went from home to the train (means unspecified); (2) boarded the train; (3) paid a fare (probably, possibly prepaid); (4) debarked from the train; (5) went from the train station (existence implied) to work. And others that are also understood in hearing that phrase. I do not know the exact neural pathways over which the signals flow, nor in what combination they conjure up images implied by assertions. I do consider these implied facts to be the equivalent (in thinking) of the inferences that are drawn by our logic programs. The 'implied ontologies' include a subsumption lattice and relations, but encoded in the neurons. They are logic-based in the broad sense that people can use if-condition then-condition rules, unconsciously, and do "inferences" by coming to the same conclusions from the same information, as would be arrived at by a formal inferencing procedure.

Well, that is a popular AI conception of the brain, but there is
absolutely no evidence that it is even remotely correct.
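
For reference, the rule-firing picture being described can be sketched in a few lines of Python; the predicate names are invented for illustration, and nothing in the sketch bears on whether brains actually work this way:

facts = {("tookTrainToWork", "she")}

rules = [
    # condition predicate -> conclusions drawn whenever it holds of someone
    ("tookTrainToWork", ["wentFromHomeToTrain",     # (1) means unspecified
                         "boardedTrain",            # (2)
                         "paidFare",                # (3) probably, possibly prepaid
                         "debarkedFromTrain",       # (4)
                         "wentFromStationToWork"]), # (5) station's existence implied
]

changed = True
while changed:              # fire rules until no new conclusions appear
    changed = False
    for pred, conclusions in rules:
        for p, subj in list(facts):
            if p == pred:
                for c in conclusions:
                    if (c, subj) not in facts:
                        facts.add((c, subj))
                        changed = True

print(sorted(facts))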
[PH] We have, in fact, no real idea how the brain does inferences, or even if it really does them at all in any meaningful sense.

[PC] OK, you don't think that the kinds of brain inferences I describe above should be called "inferences".

No, I don't think there is any evidence that the brain does this
at all.
[PC] Fine. Let us disagree as to that term, and then you can forget the comparison of the inferencing speed of a brain and a computer, which is very peripheral to the topic of the discussion.

I agree this is not a very promising line of discussion. I
mentioned it only because you began with it as the first line of your
argument.
[[[5]]]

[PC] >> It's (the word 'dimension') not part of the linguistic defining vocabulary of words we all agree on. It's needed for math, and in that context, describing visualizable two or three-dimensional spaces, is probably uncontroversial. When used by analogy to refer to other concepts, I am not surprised that terminology disputes can arise.

[PH] > We are talking past each other. I'm not interested in terminological disputes. I'm talking about disagreements about the logical/conceptual structure of rival ontologies.

[PC] Well, the way you described the dispute it sure sounded like a terminological issue. One group wants to define "dimension" in a way that excludes time, another group wants to define it so as to include time.

No, the debate is normally not even conducted in terms of
dimensions. I assumed you were familiar with this well-known
distinction, but perhaps you are not. It has been done to death in
various 'upper ontology' forums.
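
For readers who have not followed those forums, the distinction at issue can be caricatured in a few lines; the predicates are invented, and this is a sketch of the two styles, not a serious axiomatization of either position:

# 3D ("continuant") style: the whole car bears its color relative to a time.
facts_3d = {
    ("color", "car1", "red",  "t1"),
    ("color", "car1", "blue", "t2"),
}

# 4D ("temporal parts") style: a time-slice of the car bears its color outright.
facts_4d = {
    ("color", ("car1", "t1"), "red"),
    ("color", ("car1", "t2"), "blue"),
}

# The same repainting, but the two styles disagree about what the property-bearers
# are (the enduring car versus its temporal parts): a logical/conceptual difference,
# not a dispute over the word "dimension".
print(len(facts_3d), len(facts_4d))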
[PC] ... No, I am asserting that a common *basic* ontology (the conceptual defining vocabulary) is possible.

I know you are, but you haven't given us any evidence for that
claim, and there is a great deal of evidence to the contrary. I doubt
if anyone who has tried to actually construct a large-scale upper
ontology would find it even remotely plausible.
[PC] The point above is made as one part of the explanation of why people who try to build ontologies together often disagree. Those theories are not part of the basic ontology; they would be part of an extension that is specified in terms of the basic ontology.

But this debate is about how to formulate temporal descriptions
of change, processes and objects. Surely this must be part of any
basic ontology?
[[[7]]]

[PC] > If a child learned language by itself, why don't the feral children speak the native language of their area fluently?

[PH] > I meant, without adult intervention. If you think this is nonsense, learn something about sociolinguistics and the distinction between pidgin and creole languages. A new 'native' language can evolve in a single generation: the kids really do invent it themselves from talking to one another. None of their parents speak it.

[PC] I am aware of the spontaneous creation of language among children under those circumstances, and of sign languages as well. But now you seem to be contradicting yourself.

Not at all.

[PC] If the kids manage to create a language among themselves, how is that possible except by attaching words to common objects and events that they observe

I have no idea how they do it. If I did, I would be famous in
psycholinguistics. But I do know a fairly long history of failure of
simplistic ideas about how it is done, of which the one you describe
is one of the first, and all of which have been shown to be wrong or
inadequate. The most trenchant observation about yours is that neither
kids nor adults actually speak in the object/word way that would be
necessary for this to work.
[PC] ... an idea that you dismissed sarcastically?

I dismissed it because it is old, tired and has been amply
refuted.
[[[8]]]

[PH] > Your anti-intellectualism is almost palpable

[PC] No, I have great respect for intellectuals, even when they are at their most irritating. I *have* noticed that some of them like to argue more than seems necessary.

Do you mean, when they disagree with you?

Pat
-- ---------------------------------------------------------------------
IHMC
40 South Alcaniz St.
Pensacola FL 32502
http://www.ihmc.us/users/phayes
phayesAT-SIGNihmc.us
http://www.flickr.com/pathayes/collections