
Re: [ontolog-forum] George Lakoff - Women, Fire, Dangerous Things - Embodied Reason

To: "'[ontolog-forum] '" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "Bruce Schuman" <bruceschuman@xxxxxxx>
Date: Thu, 24 Jul 2014 13:15:03 -0700
Message-id: <008a01cfa77b$f46a31d0$dd3e9570$@net>

Dear Dr. Sharma – thanks for your comment and questions –


Yesterday, after I posted my message, I composed another brief message taking a rather radical view, expressed in a rather impudent way…


Following George Lakoff’s description of linguistic categories as involving the question “what kind of a thing is this?” (and that therefore the subject of “kinds” becomes a critical issue) – I wrote the sentence


“There are no kinds”


The rest of my little meditation-note elaborated on that idea – and what I perceive to be the often-ruinous consequences of presuming that categories have some absolute ontological status. “That there IS such a thing as that kind…..”


From my point of view – every “distinguishable object”  in reality (and determining the boundaries of that object might not be so simple – so as to say “this is the object, this is not the object”) is absolutely unique.


Yes, there are “features of similarity” (to cite the title of a famous article in cognitive science) – but are any two “things” absolutely identical – absolutely identical in absolutely all discernible dimensions of measurement?  Or do we settle for some variation, and still call them identical -- ?
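Tversky’s “Features of Similarity” makes this explicit: similarity is a weighted balance of common and distinctive features, never all-or-nothing identity. A minimal sketch of his contrast model (the feature sets and weights here are illustrative assumptions, not taken from the article):

```python
def tversky_similarity(a, b, theta=1.0, alpha=0.5, beta=0.5):
    """Tversky's contrast model: sim(A,B) = theta*f(A&B) - alpha*f(A-B) - beta*f(B-A),
    with f taken here as simple set cardinality."""
    a, b = set(a), set(b)
    return theta * len(a & b) - alpha * len(a - b) - beta * len(b - a)

# Illustrative feature sets -- any two "things" share some features and differ in others.
robin = {"feathers", "wings", "flies", "lays eggs"}
penguin = {"feathers", "wings", "swims", "lays eggs"}
print(tversky_similarity(robin, penguin))  # 3 common - 0.5 - 0.5 = 2.0
```

Notice that nothing in the model requires the two objects to be “identical” for them to count as similar; similarity is a matter of degree, tuned by the weights.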


Objects are described in terms of attributes, and values in those attributes.  Any object can be described in an infinite number of attributes – or as many as our level of analysis or perception enables us to distinguish.  Those attributes are often value ranges that can be described in decimal places of measurement.  The more decimal places, the more accurate the description.


We might be able to derive this entire doctrine from John Sowa’s “reality is continuous, concepts are discrete” postulate.  I like the idea of “bounded ranges” – that objects are defined by an n-dimensional envelope of their attributes.  Objects measured within those bounded ranges are “that kind” of object.  If some attribute exceeds the bounded range, that object is no longer “that kind…”
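This envelope idea can be stated operationally: a “kind” is a set of bounded attribute ranges, and an object is “that kind” exactly when every bounded attribute falls inside its range. A minimal sketch (the attribute names and ranges are illustrative assumptions):

```python
# A "kind" as an n-dimensional envelope of bounded attribute ranges.
CUP = {"height_cm": (5, 15), "volume_ml": (100, 500)}

def is_kind(obj, kind):
    """True when every attribute the kind bounds falls inside its (low, high)
    range.  A missing attribute counts as outside the envelope."""
    return all(attr in obj and low <= obj[attr] <= high
               for attr, (low, high) in kind.items())

print(is_kind({"height_cm": 9.5, "volume_ml": 300}, CUP))   # True: inside the envelope
print(is_kind({"height_cm": 40.0, "volume_ml": 300}, CUP))  # False: height exceeds its range
```

The moment any measured attribute exceeds its bounded range, the function returns False and the object is no longer “that kind”.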


This is all simple, of course – but it points to a gaping source of potential error in human understanding. The higher the level of abstraction – and therefore, the higher the degree of implicit (undefined/non-explicit) dimensionality inherent in the abstraction – the greater the potential for disagreeing on whether the object is “that kind”.


This is the battle of civilization in a nutshell.  We have blood all over our collective windshield directly due to this chasm in our ontology of language.


We have to stop presuming “kinds” – and get into exact high-dimensional particulars.  If we need to construct a theory of kinds (and no doubt we do), that model should be constructed with algebraic determinism and an explicit awareness of “acceptable error tolerance”. 


Reality is continuous, concepts are discrete (with substantial error ranges inherent in their definition), and “you can’t step into the same river twice”.


We need an ontology of language that clarifies this issue.  We need explicit definition of the cascade of interpretation across levels of abstraction, from broad generalizations and “kinds” to the particular instances we believe we have categorized.


All of our civil law takes this form.  “Guilt” and “innocence” are bounded ranges in a system of categories.  Is “Hamas” a “terrorist organization”?




Yes, it is true that “most animals agree on the danger of a forest fire” and run away.  Maybe it is the concrete particular aspects of fire, and not its abstract category, that motivate this common response.


(It took Helen Keller many particular experiences of “water” before she was able to discern and cognize the common abstract category w-a-t-e-r – a huge break-through discovery for her)


I think what I would like to see – would be the emergence of a general model of categorical structure that mapped the relationship of “the one” (the “container of all possible categories”) to the endless diversity of special cases and particular instances.  This connection would take the form of centered (or “accurate”) interpretation across levels of abstraction – so that particular cases and instances could always be categorized entirely on the basis of absolutely local particular data – and not on broad generalizations that presume “kinds” – unless it is absolutely clear that the generalization holds in all cases.


This would be a universal guideline for collective human understanding.  It would provide correction of all interpretive distortion across levels, of all “bias”, of all “stereotype” (inaccurately presuming that generalizations hold in particular cases).


In a world society moving with great evolutionary urgency towards globalization, we need this absolute map, defined in universal (and “potentially continuously variable”) terms, that reflects all possible ways to parse the space of reality into labels and names and categories and distinctions, in a form that “interprets all natural languages” – and guides a correct (because balanced, comprehensively inclusive, and un-biased) interpretation.




And yes, of course, obviously – this is a huge and audacious and entirely “revolutionary” call, no doubt controversial in a thousand ways – and, absent proof of its possibility, its feasibility is very much in question.


The Lakoff book from 30 years ago does provide a rich review of these issues – perhaps a little messy, perhaps blurring the boundaries between a philosophy of pure mathematics and a psychology of human aberrance – but he cites the issues and gets the questions on the table.


Relative to this discussion, I think – is another little text from another angle, this one deeply intuitive and popularizing – the second chapter of integral philosopher Ken Wilber’s 1979 book “No Boundary”.  Chapter 2 is entitled “Half of It”, and deals with duality and the nature of “opposites” – and how bounded categories came into being – and why, wherever those boundaries are drawn, the human battle begins….





And there’s this – the dimensionality of similarity




Bruce Schuman

(805) 966-9515, PO Box 23346, Santa Barbara CA 93101


From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Ravi Sharma
Sent: Wednesday, July 23, 2014 11:45 AM
To: [ontolog-forum]
Subject: Re: [ontolog-forum] George Lakoff - Women, Fire, Dangerous Things - Embodied Reason




Wishing you the best in your journey; we will provide constructive inputs, since it seems you welcome them.


My "primitives"-related question is on 'reality'; my understanding and knowledge are very limited in physics and meta-science or philosophy -


• Reality requires a cognitive entity (such as a person) and another entity (the subject of reality) - and yes, reality is continuous, as John Sowa mentions. It is also relative: dogs perceive sounds differently than humans do, and the extent of cognition depends on the senses.

• Reality is more universal - many or most would agree or concur on it as basically acceptable. Does that imply collective cognition of the species? For example, most physicists might agree on Newton's and Einstein's descriptions of gravity as examples of reality. Most animals agree on the danger of a forest fire and run away.

• Reality is not definable, as what we perceive always depends on the extent of tuning of the senses and sense-mind connectivity at the time of contemplation of the subject of reality. An example from physics: any telescope’s image is not real, since the light from its objects originated at different times, and many of those objects would be no more if observed at today’s earth clock time.




Dr. Ravi Sharma

Senior Enterprise Architect

Germantown, MD 20874

email: ravisharma-at-comcast.net or drravisharma-at-gmail.com

Dr. Ravi Sharma is an industrial entrepreneur, theoretical nuclear and particle physicist, human space systems scientist, fuel cell and hydrogen technologies specialist, satellite remote sensing and image data systems specialist, and an Enterprise Architect with SOA, BPM and modeling expertise, and has several decades of computer systems and peripheral manufacturing and IT systems development experience.

Recently he has been contributing to this Wiki and the ONTOLOG forum and its Open Ontology Repository activities, as well as to standards bodies such as the Ontology Definition Metamodel (ODM) Task Force and the Date-Time Foundation Vocabulary at OMG, and was Co-chair of the Open Ontology Repository Architecture topic during the NIST Ontology Summit 2008.

He has recently been working as System Design Architect for Catapult Technology on the DoD Joint Staff project Joint Lessons Learned Information System (JLLIS), which includes multiple SQL Server databases and a Meaning Based Computing semantic tool (Autonomy), evolving towards a knowledge management and knowledge sharing system across multiple federal agencies.

Earlier he worked at DOE as a Contractor for the Office of Science, and as a contractor Ontologist for the Joint Program Development Office (JPDO), where NASA, FAA, the Air Force (DOD) and NOAA are collaborating on implementing NextGen Airspace Management Systems. He also worked as a Contractor for the Joint Medical Logistics Functional Design Center on the Defense Medical Logistics Information System (DMLSS) for DOD, and at the State of Delaware as a Consultant Contractor and Enterprise Architect for the Dept. of Technology and Information.

He has held significant positions in Academic, Government, Industry and R&D Organizations in US and India. He is a Voting Member of the Fuel Cell Vehicles Technical Standards Committee of SAE and has participated in fuel cell safety and interface standards developments for 5 years.

He has received the NASA Apollo Achievement Award. He was a Member of Technical Staff at Bell Labs AT&T for 5 years, and a GM Professional Fellow and Enterprise Architect for 5 years, including one year in the Capital Planning Group as a credit risk analyst at GMAC. For the NASA Apollo and Skylab programs he helped select, analyze, build and design space and ground systems relating to remote sensing and spacecraft contamination analysis, as well as working on the Space Station, Space Shuttle, Landsat, and EOS (Terra, Aqua, Aura and NPOESS) satellites and ground systems.

He has worked on contracts for NASA (HQ & Goddard Space Flight Center), HUD, BLS, Treasury-BEP, Fannie Mae and the State of Delaware.

He helped shape the Indian Space Program as Scientific Secretary of the Indian Space Research Organization (ISRO), established the National Remote Sensing Agency in India at Hyderabad, developed major program strategies and technically led them, and received numerous ISRO awards. He has led collaboration programs nationally and internationally, including twice being Alternate Delegate from India on the UN Outer Space Committee.

He has been on the Board of Directors of American Society of Engineers of India Origin (ASEI National and Capital Chapter) for 8 years and has been a Distinguished Fellow for Life, IETE and held C-Level positions in industry. He has publications in Nuclear and Space journals and also 50+ Technical Reports, Industry Proposals and Program Documents Strategies and White papers. He is Member, Secretary and also Chair of Souvenir Committee of US Indian American Chamber of Commerce.

He received his B.Sc. from Rajasthan University and M.Sc. from the Royal Institute of Science, Bombay University, India, came to the US in 1963, received his Ph.D. at the University of Florida, and carried out post-doctoral research at Yale University. He has held teaching and research positions at Michigan, Temple, Yale, and Florida universities in the US, and also at the Indian Institute of Science. He is listed in Who's Who in the US.

He is also active in learning Sanskrit and Vedas and science content in them. His future interests include Ontology, Semantic tools and applications, information sciences, energy and fuel cells, astrophysics, science policy and development.


On Wed, Jul 23, 2014 at 10:39 AM, Bruce Schuman <bruceschuman@xxxxxxx> wrote:

Thanks to all for the comments.  This is a big subject -- and maybe not always of interest to technical professionals.

 But for me -- maybe it's the summer heat out here in Santa Barbara -- I am feeling the push for "big picture integration" -- so the comments and additional citations are helpful.

 On the Lakoff .pdf -- hoping that Lakoff might, if he knew about it, find it interesting that some people are taking a close look at his 30-year-old book -- perhaps the crowning fruit of his years of technical analysis -- I broke the 631-page document into 100-page pieces and I'm using my Adobe account to translate them into word.doc format.  For me, that makes it possible to seriously study the work and bring its points and features into focus around whatever new ideas might be emerging.

And for me, that might start with a simple survey approach to these various theories and models of classification that are cited by scholars.  For me, as it turned out, I first heard about these general categories through John Sowa's Conceptual Structures, where much of the discussion hinges on an influential book of that time by Smith and Medin entitled Categories and Concepts.  The first section of that book, which reviews these major areas, is available as a .pdf, and a few months ago, I scanned it into a word.docx, which is online here


 With my newly emerging word.doc version of Lakoff, I am starting to gather up the major approaches to categorization, which includes the approaches reviewed by Smith and Medin (and John Sowa), but adds some additional categories and approaches – maybe not entirely different, maybe minor sub-sets – but still illuminating interesting points and facets.

 For me – this entire field, scattered and divided across these multiple and apparently conflicting approaches, can be seen as ripe for a powerful new integral approach – essentially based on the classical approach, but recognizing its weaknesses (as outlined in many places, including Smith and Medin, and Lakoff) – and framing the entire undertaking in a new light, based on a few simple basic underlying philosophical principles – or, if you prefer – heuristics.  This new approach has to assimilate and account for “the data” that appears to substantiate prototypical or “fuzzy” or empirical approaches – but do so in a way that liberates even these very useful perspectives from any limitations or narrowness or incommensurateness inherent in their presumptions.  Can all of these perspectives, and all of their observed data, be hooked together into one interpretative framework?

This is not so easy to do.  Lakoff comments on how huge his project was, how comprehensively inclusive it had to be, how many voices and contributors were involved, etc.  And he was operating in the very nurturing context of UC Berkeley in the 1970s and 1980s, where there was a lot of creativity and interest and support.  So, I am a little hesitant to predict anything that might emerge for me.  But there are a few big-picture points that I believe significantly reframe this entire discussion – and open explosive new power, in ways that are simply not available within the framework of any of these various alternative approaches described by scholars – due to limitations inherent in all of them, including the classical.

 Major themes – that I believe seriously and absolutely reframe the entire discussion – and may help render it much more “scientific”:

 1)      The concept of “primitive” must be redefined.  A “primitive” cannot be a highly composite object.  It must be the fundamental element(s) from which these composite objects are constructed.  The clear example from computer science is the concept of “bit”, or “0,1”.  Most “primitives” that we encounter in “empirical linguistics” are highly composite – embedding many inherent/implicit nested sub-distinctions.  Though popular and widely accepted as natural, this is very misleading, and destroys any potential for a powerful universal mathematics of language.  To establish the potential for absolute coherence in language, every concept in reality must be built up as a composite assembly of distinctions as simple and primitive as “0,1”.  Perhaps this concept of distinction as fundamental was first popularized by G. Spencer Brown in “Laws of Form” (1969).  I would argue that “concepts are (or can be seen as) nests of distinctions”.  Computer science gives us a soundly “constructivist” approach to defining these abstractions.

2)      As John Sowa has emphasized – “reality is continuous, concepts are discrete”.  Misunderstanding this principle is a huge source of error in concept formation, and floods of confused claims emerge from assumptions on this subject – including much of the so-called “classical” view as resisted by Lakoff.  A universal theory of concepts must be based on a recognition of “potentially continuous variability” in what amounts to a “potentially infinite number of variables” or dimensions.  Reality can be parsed in an infinite number of ways, depending on intention and cultural mores – and “cultural diversity” around the world and throughout history seems to illustrate this point very clearly. “Slice it any way you want…”

3)      Within the framework of these basic guiding assumptions – the concept of meaning as an empirically-observable property of human behavior can be blown into the next octave by locating the source of meaning in immediate and local human intention, rather than in some widely accepted or empirical/statistical pool of word meanings (“six million word senses”).  Words mean not only what the speaker intends them to mean (see Lewis Carroll - http://www.goodreads.com/quotes/12608-when-i-use-a-word-humpty-dumpty-said-in-rather ) but that particular meaning is absolutely customized in the context of any particular act of communication.  This is not meant as an argument against the empirical approach – which is absolutely necessary if human beings are to understand one another – since of course, when using language we must draw on a common pool of shared understanding – in a loose kind of way that gets us “close enough for the current purposes” (“to within acceptable error tolerances”).  But this new approach locates absolute intended meaning in “stipulation”, and puts control of all exact word-meaning in the hands of the person using the word, in an immediate local context (‘The question is,’ said Humpty Dumpty, ‘which is to be master — that’s all’).

4)      All of this is very hierarchical – and must be seen as an “interpretation” of semantic phenomena.  It’s a way – a heuristic way – to order and interpret the data of cognitive experience.  So, the claim that this approach is “better” or “the best” must be buttressed by claims of simplicity and elegance and practical and inclusive universality.  A hierarchy is a human construction.  Can such a construction have absolute ontological primacy?  Could such a construction refute Lakoff’s argument that mathematics is not inherent in the universe?  The best answer might be – “Maybe.  Prove it to me….”
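Point 1 above (concepts as nests of distinctions built up from “0,1” primitives) can be given a toy sketch; the particular distinctions chosen here are illustrative assumptions, not a proposed vocabulary:

```python
# A concept sketched as a nest of primitive 0/1 distinctions: each level is a
# single binary cut, and a "concept" is the path of answers from the first
# distinction down to the last.
DISTINCTIONS = ["animate", "animal", "bird", "flies"]

def concept(answers):
    """Render a tuple of 0/1 answers to the nested distinctions as a path label."""
    assert len(answers) == len(DISTINCTIONS)
    return "/".join(f"{name}={bit}" for name, bit in zip(DISTINCTIONS, answers))

print(concept((1, 1, 1, 0)))  # prints "animate=1/animal=1/bird=1/flies=0"
```

Nothing here is composite in the way a “natural kind” is; every concept is just an explicit assembly of primitive cuts, which is exactly what makes it commensurate with computation.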


I have not yet had a chance to perfect or tune this argument – and maybe a review of Lakoff will help with this – but I would say that this concept – that I tend to call “ad-hoc top-down decomposition” (each word is assigned a stipulated meaning by the speaker in an ad hoc constantly fluctuating locally/immediately specified way) – is the key to overcoming the weaknesses of the classical method – and addressing much or all of the observable cognitive phenomena that Lakoff attributes to “embodied mind”.


Along the path of his Berkeley revolution – he and Rosch did help break us out of inherent and blind categorical rigidity and towards a humanly-configurable immediately-located approach to categories (don’t think in terms of stereotypes when asking “what kind of a thing is this?”) – but in so doing, he tends to enshrine an empiricism that in the end tends to blind the inquiry and limit the implications of the model.


All that empirical phenomena, I want to claim, in all its vast teeming highly-detailed empirical diversity, can be modeled and described in absolute perfection to x-number of decimal places by an ad hoc top-down approach – that is not bound by cultural assumptions regarding “what kind of a thing this is” – or the probabilistic empirical statistics regarding whether a penguin is really a bird.


And the ad-hoc top-down approach, by virtue of its absolute simple primitive construction – where every distinction is a decimal-place value in an intentionally-stipulated dimensional cascade – is absolutely commensurate with the basic techniques and methods of computer science.
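One way to picture the “decimal-place cascade”: each stipulated digit narrows the bounded range by a factor of ten, so an intended meaning can be made as exact as the local context requires. A toy sketch (using exact fractions to avoid floating-point noise; the digits themselves are illustrative):

```python
from fractions import Fraction

def cascade(digits):
    """Each successive decimal digit narrows the stipulated interval tenfold:
    more decimal places, tighter bounds, a more exact 'kind'."""
    lo, width = Fraction(0), Fraction(1)
    for d in digits:
        width /= 10
        lo += d * width
    return lo, lo + width

print(cascade([3]))        # the interval from 0.3 to 0.4
print(cascade([3, 1, 4]))  # the interval from 0.314 to 0.315
```

Stipulating one more digit is the formal analogue of a speaker saying “more precisely, I mean this” – the envelope shrinks, and with it the acceptable error tolerance.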


“What is beauty?”


We don’t need to endlessly and bottomlessly debate it.  All we need to do is ask you what you meant by the term when you used it, and you can tell us what you meant in all the precise specificity you can muster.  In the context of that immediate local act of communication, you are the authority, and the word means what you intend it to mean.




Ok, big claims, maybe a little wild.  Let’s see whether I get anywhere with Lakoff, and thanks for this discussion.


Longer version of the Lewis Carroll quote on “who is to be master”




- Bruce


Bruce Schuman

(805) 966-9515, PO Box 23346, Santa Barbara CA 93101


-----Original Message-----
From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of John F Sowa

Sent: Tuesday, July 22, 2014 8:21 AM
To: ontolog-forum@xxxxxxxxxxxxxxxx

Subject: Re: [ontolog-forum] George Lakoff - Women, Fire, Dangerous Things - Embodied Reason


Bruce, Rich, and Chris,



> It's a powerful sophisticated highly detailed and substantial book

> -- and the entire 631 pages are available in a pretty good .pdf...


Thanks for the URL.  I agree that it's an important book.  I bought it shortly after it came out, but I'm glad to have an electronic copy.


General observation about George Lakoff:  I have a large overlap of agreement with most of his conclusions, especially on metaphors, word meanings, the relationships between syntax and semantics, and the nature of the embodied mind.


But his history of ideas is almost always *spectacularly* wrong.  See the excerpt below from p. 9 of the book.  I agree that every one of those points is false or at least misleading.  But every one of them was debated and rejected by some Western philosophers since the Greeks.



> I haven’t yet read Lakoff [philosophy] in the flesh...


That's another good book that suffers from the same historical flaws.

I said that in my review:  http://www.jfsowa.com/pubs/lakoff.htm



> the book makes a persuasive case that prototype theory is a good model

> for how humans categorize things in their world.


I agree.  So did Wittgenstein.  Lakoff cited Rosch, and he mentioned Wittgenstein.  Rosch wrote her PhD dissertation on using Wittgenstein's theory of family resemblance.  But related ideas were very widely proposed, analyzed, and debated since the ancient Greeks.


William Whewell made a strong case for prototypes in biology in 1858 (but he did not use the prefix 'proto').  Kant used the word 'schema', which was widely used in psychology by Selz, Piaget, Bartlett, etc.

Another term is Gestalt.  Unfortunately, Lakoff's citation for 'schema'

is Rumelhart, 1975.


Peirce had read Whewell and Kant.  He said that the notion of schema in Kant was his single most important notion, which Kant should have made the centerpiece of his Critiques.  Otto Selz was a psychologist who did make the schema his central focus.  Herb Simon cited Selz's notion of schematic anticipation as a predecessor and inspiration for his theory of chunks and pattern-directed search in AI.



> Current methods rely on domain experts or knowledge engineers

> abstracting a variety of observations into a system of axioms that can

> be used downstream for deductive reasoning. This can lead to rigidity,

> bottlenecks, etc.


I agree.  Such methods are valuable for solving particular problems.

They correspond to the microtheories in Cyc.  But they are far too limited and brittle to put in a top-level ontology.  For citations and discussion, see http://www.jfsowa.com/pubs/cogcat.htm


Some corrections to Lakoff's history:


  1. Pythagoras and Plato had a theory of a detached or at least

     a detachable psyche.  Pythagoras had a notion of migration of

     souls (which he probably picked up from Eastern philosophy).

     Both Heraclitus and Pythagoras lived in Anatolia, where they

     undoubtedly got ideas from the gurus who traveled the silk road

     from China to the Greek colonies.


  2. But Aristotle had a hierarchy of *embodied* psyches, which were

     not detachable.  They ranged from a vegetative psyche for plants

     to more complex psyches for animals from sponges, to worms, to

     mammals, to humans.  By the way, Aristotle was the first person

     to recognize that sponges were animals, not plants.


  3. The great Christian theologian Thomas Aquinas was a good

     Aristotelian.  He used Aristotle's theory as a basis for

     explaining the dogma of the resurrection of the dead at the

     end of the world:  the human soul without a body is a pale

     shadow (as Homer said in his description of Hades) and the

     soul requires the body to support all its faculties.


  4. The Greek atomists, starting with Leucippus and Democritus, had

     a different view, but it was also embodied.  They assumed atoms

     of different shapes for the four elements (earth, fire, air,

     and water).  They assumed that the psyche consisted of spherical

     atoms, because they were more penetrating.  The atoms of the

     psyche swirled around and thereby directed the motions of the

     other atoms of the body.  (If you relate the psyche atoms to

     modern theories of the electron, that's not a bad summary.)


  5. The mind-body problem was invented by Descartes.  It was a huge

     source of confusion that the Greeks never suffered from.  Many

     philosophers, such as Peirce and Whitehead, had read Aristotle,

     and they argued for a continuum of psychological (or mind-like)

     phenomena from the lowest level to the human (and perhaps beyond).


  6. The theory of prototypes was well established by Aristotle in

     his biological writings.  His logical writings were the source

     of the theory of categories that Lakoff criticized.  But in his

     more voluminous biological writings, Aristotle argued for a

     bottom-up theory of analysis based on *prototypes* rather than

     top-down definitions.  He explicitly said that any definition

     of species or genera must be based on a detailed description

     of a specimen, and that the definitions must *change* when

     new discoveries are made.  Kant and many others made similar

     observations -- but with the term 'schema' rather than prototype.


  7. Lakoff's primary opponents are Descartes and Chomsky (who wrote

     a book with the title _Cartesian Linguistics_).  Many logicians,

     such as Frege and Russell, were guilty of the errors cited below.

     But Peirce, Whitehead, and others were not.  In fact, Whitehead

     explicitly disavowed the introduction that Russell had written

     in the 1925 revision of the _Principia Mathematica_.  ANW wrote

     a letter to _Mind_ saying that he had no part in the revision,

     and he did not want to have his name associated with it.


If Lakoff had focused his attack on Chomsky, I wouldn't complain.

Marvin Minsky said something similar:  Chomsky's contributions from the mid 1950s to mid 1960s were extremely valuable.  But linguistics would have progressed much better if Chomsky had stuck with politics instead of returning to linguistics after the Vietnam War.





> A number of familiar ideas will fall by the wayside. Here are some

> that will have to be  left behind:

> - Meaning is based on truth and reference; it concerns the relationship

>   between symbols and things in the world.

> - Biological species are natural kinds, defined by common essential

>   properties.

> - The mind is separate from, and independent of, the body.

> - Emotion has no conceptual content.

> - Grammar is a matter of pure form.

> - Reason is transcendental, in that it transcends – goes beyond – the

>   way human beings, or any other kinds of beings, happen to think.

>   It concerns the inferential relationships among all possible concepts

>   in this universe or any other. Mathematics is a form of transcendental

>   reason.

> - There is a correct, God's eye view of the world – a single correct way

>   of understanding what is and is not true.

> - All people think using the same conceptual system.

> These ideas have been part of the superstructure of Western

> intellectual life for two thousand years. They are tied, in one way or

> another, to the classical concept of a category. When that concept is

> left behind, the others will be too. They need to be replaced by ideas

> that are not only more accurate, but more humane.


Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J

(Dr. Ravi Sharma)
313 204 1740 Mobile

Attachment: TverskyFeaturesOfSimilarity.PNG
Description: PNG image

