
Re: [ontolog-forum] The "qua-entities" paradigm

To: "'[ontolog-forum] '" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "Bruce Schuman" <bruceschuman@xxxxxxx>
Date: Sun, 14 Jun 2015 18:24:14 -0700
Message-id: <000601d0a709$fd6d7e20$f8487a60$@net>

The following is a very ambitious Sunday afternoon meditation and inquiry on something like “the linear optimization of semantic ontology”, and it just seemed to happen.  Give it a skip if it’s too fuzzy or wild for your taste.

 

***

 

Thanks for the message below.  As I continue to saturate myself in these articles -- and continue to work on my own very broadly-motivated project, which I call a "collaborative backbone" (intended to help interconnect disciplines in ways that support broadly informed social-change activism) -- I keep finding myself drawn back to the urgency of simplicity.

 

I've been meditating on the JWS message below for the past two hours, looking at various articles cited, and am feeling a flood of interesting and exciting but still inadequate and confused ideas.

 

I want to somehow derive something very simple and primal, probably based on the point at the bottom of the message:

 

> In science and engineering, identity is *never* observable.

> Similarity is observable, and identity is *always* an inference.

 

But in what way, and how constructed?

 

For me -- the big issue is: what is the relationship between a "real object" and an "abstract object" (a symbolic construction that represents the real object, in selected dimensions/aspects that are thought to be significant in the present context).  The science is: how is that correlation established (how is the reality represented – “in what dimensions?”), and how is it tested and confirmed (how accurate and useful is the model?).  “Identity” is a function of that definition, and “similarity” can/should also be indexed in this way, once dimensions are stipulated and their values established.

 

So, I want to find the simplest and most generic possible way to build up the abstract symbolic model.  And to do this -- I think it makes sense to follow the primal language of computers -- bits and bytes -- and a sense of primal distinction -- on/off, 1/0 -- from which, in composite ways, "everything" is constructed.  So, what is the simplest possible way to do this?
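
Here is one minimal sketch of an answer (Python, with purely illustrative names): a composite symbol cascaded up from nothing but primal 0/1 distinctions.

    # Eight primal distinctions (bits); nothing else is assumed.
    bits = [0, 1, 1, 0, 0, 0, 0, 1]

    def compose(bits):
        """Cascade bits into a composite value by successive doubling."""
        value = 0
        for b in bits:
            value = (value << 1) | b
        return value

    print(compose(bits))         # 97
    print(chr(compose(bits)))    # 'a' -- a symbol built purely from bits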

 

A few days ago, I was looking at the definition of a set -- which for me too often appears confusing and "non-scientific" -- which makes me want to push for a "constructivist" approach to fundamental definitions (supposedly as per Kronecker):

 

http://www.jfsowa.com/logic/math.htm

 

"Elementary or "naive" set theory is used to define basic mathematical structures. A set is an arbitrary collection of elements, which may be real or imaginary, physical or abstract. In mathematics, sets are usually composed of abstract things like numbers and points, but one can also talk about sets of apples, oranges, people, or canaries. In computer science, sets are composed of bits, bytes, pointers, and blocks of storage. In many applications, the elements are never defined, but are left as abstractions that could be represented in many different ways in the human brain, on a piece of paper, or in computer storage."

 

This phrase seems key: "In many applications, the elements are never defined, but are left as abstractions that could be represented in many different ways in the human brain, on a piece of paper, or in computer storage."

 

I get nervous when I see the common acceptance of the notion that set theory makes sense when we do it in our heads or on a piece of paper.  It looks to me like Russell's Paradox (having to do with "sets that are members of themselves") is an artifact of that presumption – probably one of many.  If a set is a "container" -- as one might intuitively suppose and as it is defined in C++ here:  http://www.cplusplus.com/reference/set/set/ -- then it seems the idea of “a container that contains itself” is something bordering on nonsense (“does a sphere contain itself?”).
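
As a quick machine illustration of that intuition (Python here rather than C++, just for brevity), a set instantiated as a concrete machine object simply refuses the construction:

    # A concrete container cannot be made a member of itself:
    s = set()
    try:
        s.add(s)          # attempt Russell's construction directly
    except TypeError as e:
        print(e)          # "unhashable type: 'set'" -- the machine rejects it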

 

So, I am inclined to say: let’s stick to the third method and define sets and similar algebraic objects in “constructivist” terms, based on a direct “machine instantiation” – probably in a language as close to machine code as possible, or maybe identical to it.  No vague, confusing, composite intervening elements (“primitives that are not primitive”) with complex artifactual definitions (confusion-inducing and perhaps unnecessary secondary definitions mandated by an over-complex, “non-primitive” initial choice of symbolic primitives).

 

I haven’t seen this article, but the title is suggestive: Wang, Hao (1960) “Toward mechanical mathematics,” IBM Journal of Research and Development 4, pp. 2-22. http://www.research.ibm.com/journal/rd/041/ibmrd0401B.pdf (cited here: http://www.jfsowa.com/pubs/fflogic.pdf )  Maybe Wolfram has built this kind of math.

 

I’d say that a bit is something like the lowest-level possible distinction in a hierarchy of constructed definitions that builds “everything” represented as an abstraction – and does so “with absolute fluency”.  So I think there is a direct analogy with lowest-level measurement of the continuum: bit-based distinctions somehow pop up out of the continuum, and wrap around the real object to form its abstract definition in symbolic representation.
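
A rough sketch of that "popping out of the continuum" (Python; the range and resolution are stipulated for illustration, not canonical):

    def quantize(x, lo, hi, n_bits):
        """Map a continuous reading in [lo, hi] to an n-bit distinction."""
        levels = 1 << n_bits                  # 2**n_bits distinguishable cells
        cell = int((x - lo) / (hi - lo) * levels)
        cell = min(cell, levels - 1)          # clamp the upper boundary
        return format(cell, '0{}b'.format(n_bits))

    print(quantize(21.7, 0.0, 100.0, 8))      # '00110111' -- bits out of the continuum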

 

What I want to see happen is a constructivist process that builds up every possible description from an absolutely minimalist set of options – like bits – cascaded into increasingly composite and abstract structures, such as the examples suggested above: apples, oranges, people, canaries – or airplanes, passengers, ships, large and small animals.  All epistemological or logical concepts like “comparison”, similarity, difference, or analogy would be strictly defined in these terms, with all of this more or less following the fundamental definition of taxonomy as defined by Linnaeus, in what I think is a simple and potent generalization of Aristotle on genus and differentia:

 

“All the real knowledge which we possess depends on methods by which we distinguish the similar from the dissimilar.  The greater the number of natural distinctions this method comprehends, the clearer becomes our idea of things.  The more numerous the objects which employ our attention, the more difficult it becomes to form such a method, and the more necessary.”

 

So, the instinct for “universal generalization of symbolic representation” goes towards a uniform system for representing distinction, similarity and identity – all defined in the absolute simplest/minimalist way.

 

For me, the reasonable and naturally intuitive way to do this – is to look at all forms of comparison – and define them all in terms of exact dimensional measurement in stipulated dimensions (what is a dimension, what is a value?) – which, I think, can be perfectly represented with no error or complexity or “round-off” through a hierarchical assembly of bits.
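
For instance (a minimal sketch in Python; the dimension and unit are stipulated for illustration), values in a stipulated dimension can be held as exact ratios of integers, themselves hierarchical assemblies of bits, so that comparison involves no round-off at all:

    from fractions import Fraction

    # Two length measurements in a stipulated dimension (metres), exact:
    length_a = Fraction(1250, 1000)     # 1.250 m
    length_b = Fraction(125, 100)       # 1.25 m
    print(length_a == length_b)         # True -- identity of values, no round-off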

 

This is the instinct.  It’s 100% linear, 100% recursive, based on one zero-ambiguity primitive element – distinction or difference as represented by a bit (0/1), with a mapping from the abstract object to the real object.

 

I am attracted to the notion that there is no such thing as “kinds” (or “qualities” or “types”) – not in any absolute way – because these concepts are artifactual – or secondary or derivative – and they introduce incommensurate complexities into a taxonomic cascade that could be better, more accurately, and more reliably constructed simply from 100% linear elements.  “Every quality is artifactual because it is a stipulated abstract composite that can be more accurately defined in quantitative terms – so, for convenience, continue to use its abbreviation, but understand that its internal structure is linearly composite.”
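
By way of a sketch (Python; the threshold region is purely stipulative), a “quality” unpacked into its quantitative internal structure:

    def warm(temp_c):
        """The 'quality' warm as a stipulated region of one dimension."""
        return 20.0 <= temp_c <= 30.0

    print(warm(25.0))   # True -- the label abbreviates a quantitative test
    print(warm(5.0))    # False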

 

In actual practice and in daily usage, we need these abstract things called “qualities” because they are convenient and economical and we need simple/brief names for complex ideas (“beauty” or “mammal”).  But we get confused and forget that these terms are mere heuristic conveniences and labels for inherently complex/composite (and very likely controversial) abstractions – they are ungrounded social conventions with no absolute definition -- and we start assigning these inherently composite elements some kind of ontological primacy – and as a result – our mathematics becomes impossibly muddy (and so does our politics and our social life).  We should stop doing this, and reform this tendency from the ground up, throwing out all secondary derivative definitions and the mythology thereof.

 

I want to mandate an absolute fundamental linearization over all symbolic construction of abstractions, from the ground up – the “ground” being the continuum.  I’d say this idea does rather resonate with the instincts of the Vienna Circle with their insistence that any term with no unambiguous mapping to the empirical plane is “unreal”.  I do believe in high-level abstract intuition – but I also want to insist that it be solidly grounded in unambiguous linear definition chains.

 

This concept is a key to the optimization and generalization of semantic ontology.

 

All comparison – all assertion of the identity of two objects, all appearance of similarity, and all difference – can/should be mapped in this way, as a universal ultimately-simple standard that wipes out the endless introduction of artifactual variables that confuse and “de-universalize” any semantics.

 

Objects are identical if they are defined in the same dimensions and have the same values in those dimensions.  Objects are “similar” or “different” to the degree that there are any discernible variations in either their dimensions of definition or their values in those dimensions.
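
A minimal sketch of those two definitions (Python; objects modeled as dimension-to-value maps, with an illustrative matching-fraction measure of similarity):

    def identical(a, b):
        return a == b                      # same dimensions, same values

    def similarity(a, b):
        dims = set(a) | set(b)             # union of stipulated dimensions
        agree = sum(1 for d in dims
                    if d in a and d in b and a[d] == b[d])
        return agree / len(dims)           # 1.0 = identical, 0.0 = disjoint

    apple_1 = {'color': 'red', 'mass_g': 150}
    apple_2 = {'color': 'red', 'mass_g': 162}
    print(identical(apple_1, apple_2))     # False
    print(similarity(apple_1, apple_2))    # 0.5 -- one of two dimensions agrees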

 

Why is this approach fully expressive of absolutely anything, and infinitely fluent?  Because it is 100% constructed from minimalist elements that can “build any possible information structure”.  All possible YAFOs (John’s “Yet Another Formal Ontology”, below) can be constructed through this method.  Abstract dimensional representation and machine code are isomorphically mapped.

 

*

 

This is a very ambitious work-in-progress, and is something of an interdisciplinary “paint by numbers” that needs a lot of filling in:

 

Collaborative backbone: http://networknation.net/matrix.cfm

Vision: http://networknation.net/global/vision.cfm

Collaborative outline processor example: http://networknation.net/global/matrix.cfm?gc=4cb206bfeb8584293dbcf08a17a66b3a

 

Thanks for your patience, insight and good humor.

 

- Bruce Schuman

Santa Barbara – Go Warriors!

(hasty editing is their fault)

-----Original Message-----
From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of John F Sowa
Sent: Sunday, June 14, 2015 1:22 PM
To: ontolog-forum@xxxxxxxxxxxxxxxx
Subject: Re: [ontolog-forum] The "qua-entities" paradigm

 

Joel Luis, Bruce, and Gian Piero,

 

This thread gets into a huge number of issues.  I'd like to relate them to a 5-day course that I taught on "Patterns of Logic and Ontology":  http://www.jfsowa.com/talks/patolog1.pdf

 

Slide 3 of lecture 4 (copy below) raises the issue of descriptive ontologies (i.e., empirical science) vs. normative ontologies (i.e., standards, conventions, policies, and project designs).

 

Whenever anybody proposes an ontology, you have to ask what problem it was designed to solve.  Is it intended as a scientific theory?  An engineering theory for constructing something?  A standard for a wide range of projects?  A basis for natural language semantics?  Who are the intended users?  What will they do with it?  How?  Why?

 

A scientific theory is always descriptive and always *fallible*.  An engineering theory is an approximation to some scientific theory that is sufficient (within the tolerances of the measuring methods) for some application or range of applications.

 

If it's designed for NLP, what is the subject, genre, application?  How general does it need to be?  Is there some standard or widely used terminology for that field?

 

Joel Luis

> I found the approach proposed by Giancarlo Guizzardi very interesting.
> http://www.inf.ufes.br/~gguizzardi/quaEntities.pdf
> His approach was included in the so-called Unified Foundational
> Ontology (UFO): http://doc.utwente.nl/50826/1/thesis_Guizzardi.pdf

 

Bruce

> Giancarlo Guizzardi is a poet as well as a philosopher and a scientist.
> He has a brilliant instinct for simple articulate clarity.
>
> His entire thesis addresses this charming concern with high precision
> and balance...
>
> “In summary, the position defended here subscribes to Mylopoulos'
> dictum (Mylopoulos, 1992) that ‘[t]he adequacy of a conceptual
> modelling notation rests on its contribution to the construction of
> models of reality that promote a common understanding of that reality
> among their human users’.”

 

I agree that Giancarlo's thesis is interesting and good of its kind.  But I would ask the above questions:  What problem(s) does it address?  Is it intended to be descriptive (for science, engineering, NLP)?  Or is it intended to be normative (a standard)?  If so, who would use it?  For what kinds of projects or purposes?  How?  Why?

 

The quotation by Mylopoulos (who has worked on problems in databases and knowledge bases for many years) suggests that he intended it for knowledge sharing.  But as slide 2 of patolog4.pdf shows, the results have not been promising.  For 100+ URLs of related projects since 1980, see http://www.jfsowa.com/ikl

 

Whenever anybody proposes YAFO (Yet Another Formal Ontology), I always ask:  How does it compare to Cyc?  They have been working on Cyc and OpenCyc for over 30 years with many very good logicians, philosophers, linguists, and computer scientists as consultants.  In what way does your system solve problems that they haven't solved?

 

I don't believe that Cyc is ideal, and I recognize that it has not been as successful as they had hoped.  But anybody who proposes YAFO must explain why they claim that in X years they developed a better solution than Cyc did in 1000 person-years of R & D.

 

I do not believe that a universal YAFO is possible or desirable.

But I believe that some partial YAFOs can be useful for some purposes.

And I think that a multiplicity of YAFOs is better than one.

 

Gian Piero

> the usual criticisms, such as, first of all, the unnecessary
> multiplication of individuals.

 

What do you mean by 'individuals'?  Do you mean entities in the physical universe?  Ways of talking about entities?  Or more general signs (icons, indexes, and symbols of aspects of the universe)?

 

When you talk about large animals, the ways of counting them are fairly clear.  But even for human beings, the boundary between the animal and the environment is very uncertain.  For other living things the boundaries are extremely uncertain.  A hundred aspen trunks, for example, may be counted as 1 individual or 100 individuals -- either way, the boundaries are very uncertain.

 

For nonliving things, all bets are off.  The Ship of Theseus is one of the *simpler* puzzles.  In logic and mathematics, identity conditions are very sharp and clear.  But when you want to apply logic and math to the world, everything is uncertain, arbitrary, and ad hoc.

 

In science and engineering, identity is *never* observable.

Similarity is observable, and identity is *always* an inference.

 

For more about these issues, see http://www.jfsowa.com/pubs/signproc.pdf

 

John

_______________________________________________________________________

 

Slide 3 of http://www.jfsowa.com/talks/patolog4.pdf

 

                   RELATING MULTIPLE ONTOLOGIES

 

The lack of consensus is inevitable.

● Different applications, different fields, different requirements.

● General-purpose systems require multiple paradigms.

● Many logics, many reasoning methods, many ontologies.

 

Descriptive ontology is always fallible:

● Describes the concepts of empirical sciences and everyday life.

● Must accommodate anything anyone observes or does.

● Changes with every new discovery or theory.

 

Normative ontology is only true by convention:

● Specifies detailed conventions for specific fields or applications.

● Changes with every new policy, invention, or innovation.

 

For interoperability among heterogeneous systems,

● An underspecified, descriptive upper level ontology.

● Open-ended variety of descriptive or normative microtheories.

