
Re: [ontolog-forum] The "qua-entities" paradigm

To: "'[ontolog-forum] '" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "Bruce Schuman" <bruceschuman@xxxxxxx>
Date: Wed, 17 Jun 2015 08:20:00 -0700
Message-id: <008e01d0a911$1396c310$3ac44930$@net>
Very interesting, thanks all.    (01)

A whole bunch of primary questions emerging here -- very "ontological", if I 
may say...    (02)

So for me, a couple of primary guiding themes:    (03)

1) Is there any "one best way" to view this issue?  Is there a simplest or most 
parsimonious parsing of this process -- some minimal feature set -- that enables 
us to construct this classification/identity/similarity/difference framework?  
Yes, there are clearly "many" ways to do this -- and those ways are not 
necessarily "wrong".  But maybe (?) they are inherently limited, or limiting, or 
"localized" in some way -- as suggested by their relative incommensurability -- 
because their internal mathematics lacks the simple fluency that would allow a 
full and natural expression of all the facets of this process -- of which there 
are clearly many.  So this issue of "simplest" becomes a definition of 
"optimizing" -- and the talk of "mandating" reflects an instinct for optimal 
simplicity and clarification.  Yes, there are many ways.  But is there one best 
way (as defined by various criteria of "best")?    (04)

2) What is the most natural, authentic and well-confirmed model of general 
cognitive processing -- is there a best way to understand what human beings are 
actually doing?  Of course, the psychological literature is rich with various 
approaches.  But maybe there is a generalization of this range of ideas that 
can embrace them all (as surveyed in Smith and Medin's famous "Categories and 
Concepts"), so that we don't have to struggle with an either/or choice among 
psychological models of classification -- but can simply interpret all these 
approaches as alternative facets of a single containing framework.  See 
http://originresearch.com/docs/SmithAndMedin.docx for the first three chapters.    (05)

3) The fundamental tension in this discussion -- I would say -- is the split 
once known as "the holy war between the scruffies and the neats" -- 
https://en.wikipedia.org/wiki/Neats_vs._scruffies -- which I tend to generalize 
simply as the tension between bottom-up, empirically driven "local" approaches 
and top-down, elegance-and-simplicity-driven "global" models, which attempt to 
generalize the empirical frameworks as special cases -- and which, to date, as 
JS will remind us, have not succeeded.  To date, global models do not map 
smoothly to, and "contain all", local models.  Is that fact a property of the 
universe and "the logos", or somehow inherent in any possible language -- or an 
artifact of a mathematical science that is not quite mature?  The Wikipedia 
article says "Much success in AI came from combining neat and scruffy 
approaches."  I think that's the right way to see the issue today, and a 
direction worth pursuing.    (06)

REAL OBJECT - ABSTRACT OBJECT    (07)

My instinct is to insist on a couple of methodological preliminaries -- having 
to do with the nature of "objects" -- meaning "real objects" (which, as JS and 
many others have noted, have blurry and ill-defined or non-existent boundaries 
that can only be defined by context-specific motivation) -- and the "abstract 
objects" that represent those real objects in some computing medium.    (08)

Way back in the day, when I was first driving into this territory, and looking 
for ways to build a comprehensive and algebraically-integrated model of 
epistemology based on a general theory of "concepts", I came across the book 
Programming Languages, Information Structures, and Machine Organization, by 
Peter Wegner (1968), then a textbook in computer science at UC San Diego, and, 
I believe, his PhD thesis.  After banging through that rather amazing and 
comprehensive book as best I could, I became convinced that the most reasonable 
way to understand "concepts" is as "information structures".  That's the right 
way to understand "what a concept is" -- and the right way to construct an 
"abstract object".  So -- everything we are going to do to build a model of the 
world is going to be optimally defined, in the least confusing way, with the 
fewest number of intervening levels and layers -- by defining absolutely every 
element of our definition system as "information structures" -- beginning with 
bits and bytes, and expanding to things like rows and columns ("row vectors") 
-- and always "linearly optimizing" the mapping of this abstraction hierarchy 
to the physical ground of the "machine space" -- the actual electronics.  "Zero 
distance" as the optimizing variable.  "Make a perfect linear map."  Abstract 
concepts like "similarity" or "difference" or "identity" -- or "analogy" or 
"comparison" -- are going to be defined in these terms.  Always, we are 
discussing the properties of abstract models.    (09)
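
Just to make that concrete in the simplest possible terms -- this is only my 
own toy sketch, not anything from Wegner -- a "concept" could be held as 
nothing more than a fixed-width row vector of bits, with "similarity", 
"difference" and "identity" defined directly as operations on those bits:

  # A minimal sketch (mine, purely illustrative): a "concept" as a
  # fixed-width information structure -- a row vector of bits -- with
  # similarity, difference and identity defined directly on the bits.

  WIDTH = 8  # number of binary features in this toy feature set

  def concept(*features):
      """Pack feature indices (0..WIDTH-1) into one integer bit vector."""
      assert all(0 <= f < WIDTH for f in features)
      bits = 0
      for f in features:
          bits |= 1 << f
      return bits

  def difference(a, b):
      """Bitwise XOR: the features on which the two concepts disagree."""
      return a ^ b

  def similarity(a, b):
      """Count of shared 'on' features (bitwise AND, then popcount)."""
      return bin(a & b).count("1")

  def identical(a, b):
      """Identity here is just equality of the underlying bit vectors."""
      return a == b

  robin   = concept(0, 1, 2)   # e.g. has-feathers, flies, sings
  sparrow = concept(0, 1, 3)   # e.g. has-feathers, flies, hops
  print(similarity(robin, sparrow))       # 2 shared features
  print(bin(difference(robin, sparrow)))  # the features that differ
  print(identical(robin, robin))          # True

The point is not this particular encoding -- it is that every one of these 
abstract relations bottoms out in finite operations on bits.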

I had been looking at the entire issue of "what is a modeling language?" -- and 
how can these "bony structured concepts" (these abstract linear objects called 
"concepts") form a smoothly isomorphic map of a continuous reality (for 
example: fluid dynamics and the flow of liquids)?  And what does the "hierarchy 
of computer languages" have to do with this process, if anything (i.e., 
high-level languages are defined in terms of lower-level languages, with 
intervening "macros" that assemble the hierarchy of high-level functions)?  The 
way it looks to me, bottom-level machine code is the "atomic" level of the 
coding hierarchy, and the correct or optimal ground for any attempt to build an 
accurate model of continuous variation in a digital medium.  So, to approach 
continuous variation in the model, its machine representation should maintain a 
smooth mapping directly down to machine code -- to the "zeroes and ones".    (010)
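
A trivial illustration of that last point (again, just my own sketch): even a 
"continuous" quantity in the model is, at the machine level, a finite pattern 
of zeroes and ones -- here the IEEE 754 double-precision encoding that Python 
happens to use:

  # A trivial sketch (mine): a "continuous" quantity in the model is,
  # at the machine level, nothing but a finite bit pattern -- here the
  # IEEE 754 double representation of a flow velocity.
  import struct

  velocity = 3.14159  # some "continuous" physical quantity in the model
  raw = struct.pack(">d", velocity)             # 8 bytes, big-endian double
  bits = "".join(f"{byte:08b}" for byte in raw)
  print(bits)       # the 64 zeroes and ones the abstraction bottoms out in
  print(len(bits))  # 64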

For perfection and minimization, I want to remove any "knots" or weak/confusing 
or ambiguous "homomorphic" maps in the intervening levels of this language 
hierarchy.  I don't want any definitions that are not rigidly/perfectly mapped 
to the lower level language, if possible -- so that the entire cascade of 
abstract representations, from machine code to the highest level composite 
macros, is smoothly and directly mapped.  No bumps along the way and no 
interpretive ambiguity or uncertainty.  Perfect that process, and pull out all 
the weeds.  We want high-level composite abstractions -- with absolutely smooth 
and "near-continuous" maps directly to the ground of finite-state on/off 
machine code.    (011)
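
In code terms, the picture I have in mind looks something like the following 
toy cascade (mine, purely illustrative): each level is defined *only* by 
composition of the level below it, so every high-level operation reduces 
mechanically to the bottom level with no interpretive slack:

  # A toy sketch (mine) of a "smooth cascade": every higher-level operation
  # is a pure composition of lower-level ones, so each level reduces
  # mechanically to level 0 with no interpretive ambiguity.

  # Level 0: "machine-level" primitive on single bits
  def nand(a, b):
      return 1 - (a & b)

  # Level 1: macros defined *only* in terms of level 0
  def not_(a):     return nand(a, a)
  def and_(a, b):  return not_(nand(a, b))
  def or_(a, b):   return nand(not_(a), not_(b))

  # Level 2: macros defined *only* in terms of level 1
  def xor(a, b):   return and_(or_(a, b), not_(and_(a, b)))

  # Level 3: a "high-level composite abstraction" -- one-bit addition
  def half_adder(a, b):
      return xor(a, b), and_(a, b)   # (sum, carry)

  print(half_adder(1, 1))   # (0, 1) -- traces all the way down to NAND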

THE ABSOLUTE PRIMACY OF MACHINE REPRESENTATION    (012)

This approach gives an absolute primacy in the modeling process to machine 
representation.  It says -- yes, we start with observation -- but our target is 
perfect machine representation -- so don't compromise that method, or your 
system will get "dirty" -- and if it does, your logic flow will get jammed up 
-- and you will be forced to build a world of ad-hoc special-case local systems 
that can't talk to each other....    (013)

> It's irrelevant how you represent the properties or what conventions 
> you adopt for storing information about them.
> 
> You still have to observe the patterns before you can *infer* whether 
> or not they determine a unique item.    (014)

Yes -- but your "observed patterns" ARE a "model" -- constructed in terms of a 
presumed underlying language -- so even your "objective perception" is an 
"inference".  So -- I would take an opposite view -- and say it is essential to 
discipline the representations of the perceived properties (carefully choose 
the way you define those properties) and your conventions for storing 
information.  If you introduce muddiness or ambiguity at this point in the 
modeling/representation process, everything that follows risks muddiness...    (015)
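
A small sketch of what I mean by "discipline the representations" (my own 
hypothetical example, with made-up property names): whether two observations 
are *inferred* to be the same item depends entirely on the conventions imposed 
on the stored properties:

  # A sketch (mine, just to illustrate the point above): identity inference
  # succeeds or fails depending on how disciplined the property
  # representation is.

  obs_a = {"name": "Mt. Whitney", "elevation": "14,505 ft"}
  obs_b = {"name": "MOUNT WHITNEY", "elevation": "4421 m"}

  def sloppy_id(obs):
      # "concatenate all the properties" with no conventions at all
      return "|".join(str(v) for v in obs.values())

  def disciplined_id(obs):
      # normalize each property to one agreed convention before concatenating
      name = obs["name"].lower().replace("mt.", "mount").strip()
      value, unit = obs["elevation"].replace(",", "").split()
      meters = round(float(value) * 0.3048) if unit == "ft" else round(float(value))
      return f"{name}|{meters}m"

  print(sloppy_id(obs_a) == sloppy_id(obs_b))            # False -- no identity inferred
  print(disciplined_id(obs_a) == disciplined_id(obs_b))  # True  -- identity inferred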

Bruce Schuman, Santa Barbara
http://networknation.net/global/vision.cfm    (016)




-----Original Message-----
From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx 
[mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Alexander Titov
Sent: Wednesday, June 17, 2015 3:42 AM
To: [ontolog-forum]
Subject: Re: [ontolog-forum] The "qua-entities" paradigm    (017)

An identity - as a specific classification of a thing in such a way that the 
corresponding class has one and only one member?    (018)

And I think that an identity seriously depends on a viewpoint and an observer 
who/which makes that identity ‘classification’.    (019)

Regards,
Alex
> On 17 Jun 2015, at 10:25, Matthew West <dr.matthew.west@xxxxxxxxx> wrote:
> 
> Dear Rich,
> Each grain of sand exists in the real world and has identity, whether 
> or not you are interested in them. That is something entirely 
> different. A handful of sand is also something that exists in the real 
> world (the aggregate of the grains of sand whilst they are in your 
> hand) and whether you care about that is also a different question.
> 
> Regards
> 
> Matthew West                            
> Information  Junction
> Mobile: +44 750 3385279
> Skype: dr.matthew.west
> matthew.west@xxxxxxxxxxxxxxxxxxxxxxxxx
> http://www.informationjunction.co.uk/
> https://www.matthew-west.org.uk/
> This email originates from Information Junction Ltd. Registered in 
> England and Wales No. 6632177.
> Registered office: 8 Ennismore Close, Letchworth Garden City, 
> Hertfordshire,
> SG6 2SU.
> 
> 
> 
> -----Original Message-----
> From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx
> [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Rich 
> Cooper
> Sent: 17 June 2015 06:49
> To: '[ontolog-forum] '
> Subject: Re: [ontolog-forum] The "qua-entities" paradigm
> 
> Are you saying that identity must *always* be *unique*?  I can 
> identify a handful of sand at the beach without assigning an identity to each 
> grain.
> All grains look the same to me, therefore all sand has the same 
> identity, so I treat it as a unitless object, and the best I can do to 
> subdivide it is to organize it into specific volumes, weights and prices.
> 
> Sincerely,
> Rich Cooper,
> 
> Chief Technology Officer,
> MetaSemantics Corporation
> MetaSemantics AT EnglishLogicKernel DOT com ( 9 4 9 ) 5 2 5-5 7 1 2 
> http://www.EnglishLogicKernel.com
> 
> 
> -----Original Message-----
> From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx
> [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of John F 
> Sowa
> Sent: Tuesday, June 16, 2015 10:30 PM
> To: ontolog-forum@xxxxxxxxxxxxxxxx
> Subject: Re: [ontolog-forum] The "qua-entities" paradigm
> 
> On 6/17/2015 1:12 AM, Rich Cooper wrote:
>> you could say that the ID is the concatenated value of all
>> properties
> 
> I was trying to explain that similarity is observable, but identity is 
> always an inference.
> 
> It's irrelevant how you represent the properties or what conventions 
> you adopt for storing information about them.
> 
> You still have to observe the patterns before you can *infer* whether 
> or not they determine a unique item.
> 
> John
> 
>     (020)


_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J    (021)
