
Re: [ontolog-forum] Self Interest Ontology

To: "'[ontolog-forum] '" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "Richard Vines" <plessons@xxxxxxxxxxxxxxx>
Date: Sat, 25 Aug 2012 16:28:25 +1000
Message-id: <004701cd828a$d5722de0$805689a0$@netspace.net.au>

Hi Rich,

 

Personally, I think you would be well advised to take your research and interests into the realm of evolutionary biology, rather than progress these ideas through an ontology forum like this.

 

I don’t want to get into any conversation about these matters, but I have outlined a few signposts for your (self) interest, if you want to follow through in your own time.

 

If I can offer suggestions, they would be as follows:

 

First, look into the broad concept of autopoiesis (self-creation):

http://en.wikipedia.org/wiki/Autopoiesis

 

Then I suggest you look at those who have endeavoured to think about this concept beyond the constructivist perspectives of the Chilean biologists Humberto Maturana and Francisco Varela:

 

Bill Hall did a PhD at Harvard on these sorts of matters many years ago. If you want to look into this, I suggest you trawl through Bill’s ideas and publications:

http://www.orgs-evolution-knowledge.net/Index/PapersandPresentations.htm

 

I myself have been interested in these sorts of matters. If you remember, this whole topic started when I posted about my work in the Victorian Community Services sector, which you commented on.

 

You wrote on 8/8/2011:

Dear Richard,

 

Having read your paper, I like the way you formulated the problem to be solved in terms of various groups. In particular, your quote:

 

We use the term ‘ontological’ quite deliberately in that expanded information and meaning frameworks are generated by people. Thus, people use their innate intelligence and sense of being to create relationships, to create meaning, and to solve problems. Such meaning frameworks are not generated by machines but through the use of human interpretative intelligence (Vines and Firestone, forthcoming).

 

This is an interesting formulation, though I am not familiar with the examples from Australian politics you use to illustrate the principles. But it seems to me that it is self interest, widely distributed among the population and often at odds with the commons, that should drive the system rather than regulatory bodies.

 

One of the critical questions underpinning the very notion of what you call a “self interest … ontology” relates to the idea, presented in the TED talk, of the brain performing some sort of regulatory function which creates enough stability for this very notion of “self” to exist as an inter-relationship between mind and body.

 

A question is whether this idea is similar to those being proposed by evolutionary biologists, and whether this basic idea extends into higher-order systems like “organisations” and objects like ontologies. I say objects because these sorts of knowledge (ontologies) emerge as separate from the “knowing entity”. Thus, the stable notion of “self” described in the TED presentation is fundamentally different from the idea of a self interest ontology, because an ontology is an object that emerges as separate from the knowing entity (people). These ideas have not been widely thought about beyond the world of constructivist evolutionary biologists like Maturana and Varela.

 

A few of us have tried to think about these ideas in this piece (which includes a preliminary, quasi-ontology for research knowledge as an appendix). This particular piece does not mention the idea of autopoiesis.

 

http://www.orgs-evolution-knowledge.net/Index/DocumentKMOrgTheoryPapers/VinesEtAl(2010)TextualRepresentationsKnowledgeSupport-SystemsInResearchIntensiveNetworks.pdf

 

And this is another piece that is more explicit about linking these ideas to the notion of autopoietic higher-order systems like organisations:

 

Vines, R., Hall, W.P. 2011. Exploring the foundations of organizational knowledge. Kororoit Institute Working Papers No. 3: 1-39.

 

These are highly contested and speculative ideas, but I have at least had fun trying to think about these matters over the years.

 

Good luck. … I found the TED video very interesting.

 

 

Richard

 

 

From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Rich Cooper
Sent: Saturday, 25 August 2012 2:48 PM
To: '[ontolog-forum] '
Subject: Re: [ontolog-forum] Self Interest Ontology

 

Dear John,

 

I agree: consciousness is not the place to start, which is why I am treating it as an unknown variable, with structure, form and content still to be determined.

 

It will eventually have to be further elaborated, but first I think it might be wise to concentrate on the other term, which he called paralysis.

 

Paralysis, in his rendition, has to do with the loss of specific capabilities that neuroscience SMEs have names for. That use of the words is what I am trying to capture in the Self Interest Ontology.

 

To ultimately define the whole Ontology, we first have to define the self, in the words that specialist SMEs use.  So both consciousness and paralysis eventually have to be refined in an iterative way. 

 

Paralysis has to be refined first, IMHO, because consciousness is created in part by the experiences we associate with actions and objects. 

 

Our experiences are memories of the objects we have experienced in the past, and memories of the actions we have observed or performed, whether by our self or by other selves.

 

Those stored memories are retrieved, and they define what we can think about, and what we experience, in the future.

 

Paralysis refers to the loss of the capabilities enumerated in the vocabulary of those actions and objects which we are no longer able to perform. So they are closely interrelated semantically.

 

Thus I am suggesting a top-down design of the Self Interest Ontology, using words that are mentioned in, and related to, his theory of what maps into or out of what in neurophysiological terms.

 

That way, in future work, we can refine those terms still further. Wherever the need for refinement exists and is justifiable, that’s where we should focus the effort.

 

I think he is saying that consciousness is based on sensory and motor feedback loops, and on the purposes to which they are put, when viewed from the neuroscience perspective.

 

Clearly we can only go so far before reaching a practical limit of what it’s worth to us. Consider this exploratory. Refinement stops when no TBDs are worth the effort of further refinement.

 

That makes it a top-level ontology waiting for further applications.

 

Each application, by contrast, has words that start from the bottom of the ontology. At some point, it maps the words in the upper ontology onto its own application words to render the application.

 

The result is a Self Interest Ontology in refinement. We started by defining Self. Now there is a neuroscientist with a clear and elaborate explanation of what Self actually is, in physiological terms. That is the basis of the Self Interest Ontology, as I have proposed it.
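
As a rough sketch of what I mean by that refinement process (in Python, purely as an illustration; the application words and the TBD labels below are placeholders of my own, not settled vocabulary):

# Upper-level ontology: terms defined top down, with unknowns left as
# explicit TBD placeholders to be refined later.
upper_ontology = {
    "Self": {"Consciousness": "TBD0", "Paralysis": "TBD1"},
}

# An application starts from its own bottom-level words and maps them
# onto the upper ontology only where refinement is justified.
application_words = {
    "loss of grip strength": ("Self", "Paralysis"),
    "awareness of body state": ("Self", "Consciousness"),
}

# Refinement stops when the remaining TBDs are not worth the effort.
unrefined = [
    (term, slot)
    for term, slots in upper_ontology.items()
    for slot, value in slots.items()
    if str(value).startswith("TBD")
]
print("Slots still awaiting refinement:", unrefined)

The point is only the shape of the process: the upper terms stay stable while each application fills in the TBDs it cares about.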

 

Interest comes later. 

 

-Rich

 

Sincerely,

Rich Cooper

EnglishLogicKernel.com

Rich AT EnglishLogicKernel DOT com

9 4 9 \ 5 2 5 - 5 7 1 2


From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of John Bottoms
Sent: Friday, August 24, 2012 6:55 PM
To: ontolog-forum@xxxxxxxxxxxxxxxx
Subject: Re: [ontolog-forum] Self Interest Ontology

 

Rich,

It certainly is an interesting topic for anyone who has passed time thinking about consciousness.

However...I am more than a little suspicious about any neuro*-work that has been turned into a music composition.

Further, consciousness is such a large topic that it touches on every bit of an ontology. It might even define the scope and domain of the ontology. That is too large a work area to attempt. A reduction in scope is in order. Don't get me wrong; I am intrigued but it reminds me of my 7th grade days when I wanted to study calculus. My mother explained that I needed algebra, trig and geometry before that would be possible.

And, as I have said before, it is nigh on impossible to make any progress in defining a useful ontology without a problem statement or a set of use cases. Linguists know that environment defines and shapes a language. It is true of ontologies also.

-John Bottoms
 FirstStar Systems
 Concord, MA

On 8/24/2012 8:33 PM, Rich Cooper wrote:

If there is any interest in this topic, here is a TED talk by Antonio Damasio, a neuroscientist, who correlates the self with conscious awareness of self, locates it in the brain, and describes many related things (structure, fiber pathways, damage to specific regions, …) and how those things interact with the experience of self. 

 

http://www.ted.com/talks/antonio_damasio_the_quest_to_understand_consciousness.html

 

His rendition, from a neuroscience perspective, is that the information sensed by each body sensor, and processed in the cortex (the visual and auditory parts of the brain), is “made available to the motor cortex and the hindbrain”. 

 

That is where he locates the self, in two small regions adjacently spanning the width of the brainstem. 

 

His perspective is literate and informative, and he spells out how, in his view, that perspective is justified. I propose that his rendition of the self is what needs to be nominated as the official

 

            SELF INTEREST ONTOLOGY

 

And I so nominate it. Now the problem will be to codify it into assertions that can be agreed on. It would be useful to transcribe his statements into text. Does anyone agree, disagree, or object to this (or has everyone quit reading)?
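
To make that concrete, a first pass at codifying a few of his statements might look something like this (a minimal sketch in Python, using plain subject-predicate-object triples; the particular term names are my own placeholders, not a transcription of the talk):

# Candidate assertions, expressed as (subject, predicate, object) triples,
# so that each statement can be agreed on, rejected, or refined one at a time.
assertions = [
    ("Self", "locatedIn", "Brainstem"),
    ("SensoryInformation", "madeAvailableTo", "MotorCortex"),
    ("SensoryInformation", "madeAvailableTo", "Hindbrain"),
    ("DamageToRegion1", "affects", "Consciousness"),
    ("DamageToRegion2", "resultsIn", "Paralysis"),
]

for subject, predicate, obj in assertions:
    print(subject, predicate, obj)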

 

Damage to each of those two regions has unique results, which he describes eloquently as “consciousness” on the one hand, and “paralysis” on the other. So I propose this, in my own chosen form of logic:

 

type Self =
    Consciousness : TBD0;
    Paralysis     : TBD1;
;

 

At this point, that’s all I have to contribute about how to refine the Self in each of the ways that the good professor related so well. 

 

Refinements of TBD0 and TBD1 according to his perspective would be welcome, if anyone wants to offer one. Are there any specialists in the crowd?
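
For instance, one shape such refinements could take (a rough Python sketch; FeedbackLoop, Capability, and the example values are hypothetical names of my own, not anything the good professor said):

from dataclasses import dataclass, field
from typing import List

# One possible refinement of TBD0: consciousness as a set of sensory and
# motor feedback loops together with the purposes they serve.
@dataclass
class FeedbackLoop:
    sensor: str
    effector: str
    purpose: str

# One possible refinement of TBD1: paralysis as the loss of specific,
# named capabilities drawn from the SMEs' vocabulary.
@dataclass
class Capability:
    name: str
    lost: bool = False

@dataclass
class Self:
    consciousness: List[FeedbackLoop] = field(default_factory=list)  # refines TBD0
    paralysis: List[Capability] = field(default_factory=list)        # refines TBD1

example = Self(
    consciousness=[FeedbackLoop("proprioception", "motor cortex", "posture")],
    paralysis=[Capability("grasp", lost=True)],
)
print(example)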

 

-Rich

 

 

 

The regions seem to be paired, in parallel with the way the kidneys are paired, and they sit further down in the brainstem than I find comfortable. 

Together, a cross-section of the region would be about the size of your neck bone’s nerve bundle. That’s thinner than I am comfortable with, but who would have known?

 

 

Sincerely,

Rich Cooper

EnglishLogicKernel.com

Rich AT EnglishLogicKernel DOT com

9 4 9 \ 5 2 5 - 5 7 1 2





 

 


