Thanks to all for the comments. This is a big subject -- and maybe not always of interest to technical professionals.
But for me -- maybe it's the summer heat out here in Santa Barbara -- I am feeling the push for "big picture integration" -- so the comments and additional citations are helpful.
On the Lakoff .pdf -- hoping that Lakoff might, if he knew about it, find it interesting that some people are taking a close look at his 30-year-old book -- perhaps the crowning fruit of his years of technical analysis. I broke the 631-page document into 100-page pieces, and I'm using my Adobe account to convert them into Word (.doc) format. For me, that makes it possible to seriously study the work and bring its points and features into focus around whatever new ideas might be emerging.
And for me, that might start with a simple survey approach to these various theories and models of classification that are cited by scholars. As it turned out, I first heard about these general categories through John Sowa's Conceptual Structures, where much of the discussion hinges on an influential book of that time by Smith and Medin entitled Categories and Concepts. The first section of that book, which reviews these major areas, is available as a .pdf, and a few months ago I converted it into a Word .docx, which is online here
With my newly emerging Word version of Lakoff, I am starting to gather up the major approaches to categorization. This collection includes the approaches reviewed by Smith and Medin (and John Sowa), but adds some additional categories and approaches – maybe not entirely different, maybe minor sub-sets – but still ones that illuminate interesting points and facets.
For me – this entire field, scattered and divided across these multiple and apparently conflicting approaches, can be seen as ripe for a powerful new integral approach – essentially based on the classical approach, but recognizing its weaknesses (as outlined in many places, including Smith and Medin, and Lakoff) – and framing the entire undertaking in a new light, based on a few simple basic underlying philosophical principles – or, if you prefer – heuristics. This new approach has to assimilate and account for “the data” that appears to substantiate prototypical or “fuzzy” or empirical approaches – but do so in a way that liberates even these very useful perspectives from any limitations or narrowness or incommensurateness inherent in their presumptions. Can all of these perspectives, and all of their observed data, be hooked together into one interpretative framework?
This is not so easy to do. Lakoff comments on how huge his project was, how comprehensively inclusive it had to be, how many voices and contributors were involved, etc. And he was operating in the very nurturing context of UC Berkeley in the 1970s and 1980s, where there was a lot of creativity and interest and support. So, I am a little hesitant to predict anything that might emerge for me. But there are a few big-picture points that I believe significantly reframe this entire discussion – and open explosive new power, in ways that are simply not available within the framework of any of these various alternative approaches described by scholars – due to limitations inherent in all of them, including the classical.
Major themes – that I believe seriously and absolutely reframe the entire discussion – and may help render it much more “scientific”:
1) The concept of “primitive” must be redefined. A “primitive” cannot be a highly composite object. It must be the fundamental element(s) from which these composite objects are constructed. The clear example from computer science is the concept of “bit”, or “0,1”. Most “primitives” that we encounter in “empirical linguistics” are highly composite – embedding many inherent/implicit nested sub-distinctions. Though popular and widely accepted as natural, this is very misleading, and destroys any potential for a powerful universal mathematics of language. To establish the potential for absolute coherence in language, every concept in reality must be built up as a composite assembly of distinctions as simple and primitive as “0,1”. Perhaps this concept of distinction as fundamental was first popularized by G. Spencer Brown in “Laws of Form” (1969). I would argue that “concepts are (or can be seen as) nests of distinctions”. Computer science gives us a soundly “constructivist” approach to defining these abstractions.
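The point above can be made concrete with a small sketch. The following Python is illustrative only, not from any existing library; the names (Distinction, Concept, the example dimensions) are my own hypothetical choices. It models a concept as an ordered nest of binary distinctions built from the single primitive "0,1":

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Distinction:
    """One primitive cut: a named dimension resolved to 0 or 1."""
    dimension: str
    value: int  # 0 or 1 -- the only primitive

class Concept:
    """A concept modeled as an ordered nest (cascade) of distinctions."""
    def __init__(self, *cuts: Distinction):
        self.cuts = cuts

    def refine(self, cut: Distinction) -> "Concept":
        """Add one more distinction, producing a narrower composite concept."""
        return Concept(*self.cuts, cut)

    def subsumes(self, other: "Concept") -> bool:
        """A broader concept subsumes any concept that extends its cuts."""
        return other.cuts[:len(self.cuts)] == self.cuts

# Build a composite "bird" from primitive cuts, then refine to "penguin".
bird = Concept(
    Distinction("alive", 1),
    Distinction("animal", 1),
    Distinction("feathered", 1),
)
penguin = bird.refine(Distinction("flies", 0))

print(bird.subsumes(penguin))   # True
print(penguin.subsumes(bird))   # False
```

Nothing here is empirical: "bird" is not a given natural primitive but an assembly of bits, and "penguin" is the same assembly carried one distinction further.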
2) As John Sowa has emphasized – “reality is continuous, concepts are discrete”. Misunderstanding this principle is a huge source of error in concept formation, and floods of confused claims emerge from assumptions on this subject – including much of the so-called “classical” view as resisted by Lakoff. A universal theory of concepts must be based on a recognition of “potentially continuous variability” in what amounts to a “potentially infinite number of variables” or dimensions. Reality can be parsed in an infinite number of ways, depending on intention and cultural mores – and “cultural diversity” around the world and throughout history seems to illustrate this point very clearly. “Slice it any way you want…”
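"Slice it any way you want" can itself be sketched in a few lines. This is a hedged illustration with made-up cut points and labels, not a claim about any actual culture's categories: the same continuous reading yields different discrete concepts depending on where the boundaries are stipulated.

```python
def categorize(value, boundaries, labels):
    """Map a continuous value to a discrete label via chosen cut points.

    boundaries is an ascending list of n cut points; labels has n+1 entries.
    """
    for boundary, label in zip(boundaries, labels):
        if value < boundary:
            return label
    return labels[-1]

temperature = 21.5  # one continuous reality

# Two speakers (or cultures) slice the same continuum differently:
print(categorize(temperature, [10, 25], ["cold", "mild", "hot"]))       # mild
print(categorize(temperature, [15, 20], ["cool", "warm", "scorching"])) # scorching
```

The continuum does not change; only the parse does.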
3) Within the framework of these basic guiding assumptions – the concept of meaning as an empirically-observable property of human behavior can be blown into the next octave by locating the source of meaning in immediate and local human intention, rather than in some widely accepted or empirical/statistical pool of word meanings (“six million word senses”). Words mean not only what the speaker intends them to mean (see Lewis Carroll - http://www.goodreads.com/quotes/12608-when-i-use-a-word-humpty-dumpty-said-in-rather ) but that particular meaning is absolutely customized in the context of any particular act of communication. This is not meant as an argument against the empirical approach – which is absolutely necessary if human beings are to understand one another – since of course, when using language we must draw on a common pool of shared understanding – in a loose kind of way that gets us “close enough for the current purposes” (“to within acceptable error tolerances”). But this new approach locates absolute intended meaning in “stipulation”, and puts control of all exact word-meaning in the hands of the person using the word, in an immediate local context (‘The question is,’ said Humpty Dumpty, ‘which is to be master — that’s all’).
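One way to picture "stipulation over the shared pool" is a local override of a common lexicon, resolved per act of communication. A minimal sketch, with an invented two-entry pool (Python's standard ChainMap does the layering):

```python
from collections import ChainMap

# The empirical, shared pool of word senses ("close enough" defaults).
shared_pool = {
    "bird": "feathered animal",
    "glory": "fame; great honor",
}

def resolve(word, stipulations=None):
    """Resolve a word: the speaker's local stipulation wins over the pool."""
    context = ChainMap(stipulations or {}, shared_pool)
    return context[word]

# Default: fall back on the common pool.
print(resolve("glory"))  # "fame; great honor"

# Humpty Dumpty stipulates his own sense for this one exchange:
print(resolve("glory", {"glory": "a nice knock-down argument"}))
```

The shared pool is never discarded; it is simply shadowed, locally and immediately, by whatever the speaker stipulates.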
4) All of this is very hierarchical – and must be seen as an “interpretation” of semantic phenomena. It’s a way – a heuristic way – to order and interpret the data of cognitive experience. So, the claim that this approach is “better” or “the best” must be buttressed by claims of simplicity and elegance and practical and inclusive universality. A hierarchy is a human construction. Can such a construction have absolute ontological primacy? Could such a construction refute Lakoff’s argument that mathematics is not inherent in the universe? The best answer might be – “Maybe. Prove it to me….”
I have not yet had a chance to perfect or tune this argument – and maybe a review of Lakoff will help with this – but I would say that this concept – that I tend to call “ad-hoc top-down decomposition” (each word is assigned a stipulated meaning by the speaker in an ad hoc constantly fluctuating locally/immediately specified way) – is the key to overcoming the weaknesses of the classical method – and addressing much or all of the observable cognitive phenomena that Lakoff attributes to “embodied mind”.
Along the path of his Berkeley revolution – he and Rosch did help break us out of inherent and blind categorical rigidity and towards a humanly-configurable immediately-located approach to categories (don’t think in terms of stereotypes when asking “what kind of a thing is this?”) – but in so doing, he tends to enshrine an empiricism that in the end tends to blind the inquiry and limit the implications of the model.
All that empirical phenomena, I want to claim, in all its vast teeming highly-detailed empirical diversity, can be modeled and described in absolute perfection to x-number of decimal places by an ad hoc top-down approach – that is not bound by cultural assumptions regarding “what kind of a thing this is” – or the probabilistic empirical statistics regarding whether a penguin is really a bird.
And the ad-hoc top-down approach, by virtue of its absolute simple primitive construction – where every distinction is a decimal-place value in an intentionally-stipulated dimensional cascade – is absolutely commensurate with the basic techniques and methods of computer science.
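A toy rendering of that "dimensional cascade" idea, with dimensions and entities I have invented purely for illustration: the speaker stipulates an ordered list of distinctions, and each entity reads off as a digit string, one place per distinction.

```python
# The speaker's stipulated cascade: order matters, like place value.
dimensions = ["alive", "animal", "feathered", "flies"]

def encode(entity: dict) -> str:
    """Read an entity along the stipulated cascade; each bit is one 'place'."""
    return "".join(str(int(entity.get(d, False))) for d in dimensions)

robin   = {"alive": True, "animal": True, "feathered": True, "flies": True}
penguin = {"alive": True, "animal": True, "feathered": True, "flies": False}

print(encode(robin))    # "1111"
print(encode(penguin))  # "1110"
```

No probabilistic statistics are consulted: whether a penguin counts as a "bird" depends only on which places the speaker chose to include in the stipulated definition of "bird".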
“What is beauty?”
We don’t need to endlessly and bottomlessly debate it. All we need to do is ask you what you meant by the term when you used it, and you can tell us what you meant in all the precise specificity you can muster. In the context of that immediate local act of communication, you are the authority, and the word means what you intend it to mean.
Ok, big claims, maybe a little wild. Let’s see whether I get anywhere with Lakoff, and thanks for this discussion.
Longer version of the Lewis Carroll quote on “who is to be master”
Subject: Re: [ontolog-forum] George Lakoff - Women, Fire, Dangerous Things - Embodied Reason
Bruce, Rich, and Chris,
> It's a powerful sophisticated highly detailed and substantial book
> -- and the entire 631 pages are available in a pretty good .pdf...
Thanks for the URL. I agree that it's an important book. I bought it shortly after it came out, but I'm glad to have an electronic copy.
General observation about George Lakoff: I have a large overlap of agreement with most of his conclusions, especially on metaphors, word meanings, the relationships between syntax and semantics, and the nature of the embodied mind.
But his history of ideas is almost always *spectacularly* wrong. See the excerpt below from p. 9 of the book. I agree that every one of those points is false or at least misleading. But every one of them was debated and rejected by some Western philosophers since the Greeks.
> I haven’t yet read Lakoff [philosophy] in the flesh...
That's another good book that suffers from the same historical flaws.
I said that in my review: http://www.jfsowa.com/pubs/lakoff.htm
> the book makes a persuasive case that prototype theory is a good model
> for how humans categorize things in their world.
I agree. So did Wittgenstein. Lakoff cited Rosch, and he mentioned Wittgenstein. Rosch wrote her PhD dissertation on using Wittgenstein's theory of family resemblance. But related ideas were very widely proposed, analyzed, and debated since the ancient Greeks.
William Whewell made a strong case for prototypes in biology in 1858 (but he did not use the prefix 'proto'). Kant used the word 'schema', which was widely used in psychology by Selz, Piaget, Bartlett, etc.
Another term is Gestalt. Unfortunately, Lakoff's citation for 'schema' is Rumelhart, 1975.
Peirce had read Whewell and Kant. He said that the notion of schema in Kant was his single most important notion, which Kant should have made the centerpiece of his Critiques. Otto Selz was a psychologist who did make the schema his central focus. Herb Simon cited Selz's notion of schematic anticipation as a predecessor and inspiration for his theory of chunks and pattern-directed search in AI.
> Current methods rely on domain experts or knowledge engineers
> abstracting a variety of observations into a system of axioms that can
> be used downstream for deductive reasoning. This can lead to rigidity,
> bottlenecks, etc.
I agree. Such methods are valuable for solving particular problems.
They correspond to the microtheories in Cyc. But they are far too limited and brittle to put in a top-level ontology. For citations and discussion, see http://www.jfsowa.com/pubs/cogcat.htm
Some corrections to Lakoff's history:
1. Pythagoras and Plato had a theory of a detached or at least
a detachable psyche. Pythagoras had a notion of migration of
souls (which he probably picked up from Eastern philosophy).
Both Heraclitus and Pythagoras lived in Anatolia, where they
undoubtedly got ideas from the gurus who traveled the silk road
from China to the Greek colonies.
2. But Aristotle had a hierarchy of *embodied* psyches, which were
   not detachable. They ranged from a vegetative psyche for plants
to more complex psyches for animals from sponges, to worms, to
mammals, to humans. By the way, Aristotle was the first person
to recognize that sponges were animals, not plants.
3. The great Christian theologian Thomas Aquinas was a good
Aristotelian. He used Aristotle's theory as a basis for
explaining the dogma of the resurrection of the dead at the
   end of the world: the human soul without a body is a pale
shadow (as Homer said in his description of Hades) and the
soul requires the body to support all its faculties.
4. The Greek atomists, starting with Leucippus and Democritus, had
a different view, but it was also embodied. They assumed atoms
of different shapes for the four elements (earth, fire, air,
and water). They assumed that the psyche consisted of spherical
atoms, because they were more penetrating. The atoms of the
psyche swirled around and thereby directed the motions of the
other atoms of the body. (If you relate the psyche atoms to
modern theories of the electron, that's not a bad summary.)
5. The mind-body problem was invented by Descartes. It was a huge
source of confusion that the Greeks never suffered from. Many
philosophers, such as Peirce and Whitehead, had read Aristotle,
and they argued for a continuum of psychological (or mind-like)
phenomena from the lowest level to the human (and perhaps beyond).
6. The theory of prototypes was well established by Aristotle in
his biological writings. His logical writings were the source
of the theory of categories that Lakoff criticized. But in his
   more voluminous biological writings, Aristotle argued for a
bottom-up theory of analysis based on *prototypes* rather than
top-down definitions. He explicitly said that any definition
of species or genera must be based on a detailed description
of a specimen, and that the definitions must *change* when
new discoveries are made. Kant and many others made similar
observations -- but with the term 'schema' rather than prototype.
7. Lakoff's primary opponents are Descartes and Chomsky (who wrote
a book with the title _Cartesian Linguistics_). Many logicians,
such as Frege and Russell, were guilty of the errors cited below.
But Peirce, Whitehead, and others were not. In fact, Whitehead
explicitly disavowed the introduction that Russell had written
in the 1925 revision of the _Principia Mathematica_. ANW wrote
a letter to _Mind_ saying that he had no part in the revision,
and he did not want to have his name associated with it.
If Lakoff had focused his attack on Chomsky, I wouldn't complain.
Marvin Minsky said something similar: Chomsky's contributions from the mid 1950s to mid 1960s were extremely valuable. But linguistics would have progressed much better if Chomsky had stuck with politics instead of returning to linguistics after the Vietnam War.
> A number of familiar ideas will fall by the wayside. Here are some
> that will have to be left behind:
> - Meaning is based on truth and reference; it concerns the relationship
> between symbols and things in the world.
> - Biological species are natural kinds, defined by common essential
>   properties.
> - The mind is separate from, and independent of, the body.
> - Emotion has no conceptual content.
> - Grammar is a matter of pure form.
> - Reason is transcendental, in that it transcends -- goes beyond -- the
> way human beings, or any other kinds of beings, happen to think.
> It concerns the inferential relationships among all possible concepts
> in this universe or any other. Mathematics is a form of transcendental reason.
> - There is a correct, God's eye view of the world -- a single correct way
> of understanding what is and is not true.
> - All people think using the same conceptual system.
> These ideas have been part of the superstructure of Western
> intellectual life for two thousand years. They are tied, in one way or
> another, to the classical concept of a category. When that concept is
> left behind, the others will be too. They need to be replaced by ideas
> that are not only more accurate, but more humane.