Dear Ed,
You wrote:
(b) you did not recognize
that you were performing some transmogrification of the real/natural concepts
to get the OO rendering.
My bitter experience is that
many students of the 1990s and later were taught that the transmogrification
produced by (b) is the correct natural ontological model! That is why I cannot
agree with you.
I don't get the "transmogrification" idea you are stating above. But it IS true that the SE OO version will of necessity minimize the duplication of functions and data. If that means I am performing such a transmogrification of the real/natural concepts to get the OO rendering, then perhaps I am.
Please explain a little more clearly so we can get to the bottom
of our difference on this issue.
Sincerely,
Rich Cooper,
Chief Technology Officer,
MetaSemantics Corporation
MetaSemantics AT EnglishLogicKernel DOT com
( 9 4 9 ) 5 2 5-5 7 1 2
http://www.EnglishLogicKernel.com
From:
ontolog-forum-bounces@xxxxxxxxxxxxxxxx
[mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Edward
Barkmeyer
Sent: Tuesday, October 20, 2015 4:56 PM
To: [ontolog-forum]
Subject: Re: [ontolog-forum] A Question About Mathematical Logic
Dear Rich,
I think we have different views of the world.
Object-oriented design and programming is about designing and building
computational implementations, with a dozen mechanisms that simplify some
aspects of that process and mirror some ontological notions. You insist
that there is no distinction between the ontological notions you associate with
the OO features and the features themselves. I insist that there is a
difference, and that difference becomes important when an aspect of the world I
want to model has no convenient analog in the OO feature set. If you have
never encountered that situation, I can offer two explanations you won’t like:
(a) your experience is not as broad as it might be, or
(b) you did not recognize that you were performing some
transmogrification of the real/natural concepts to get the OO rendering.
My bitter experience is that many students of the 1990s and
later were taught that the transmogrification produced by (b) is the correct
natural ontological model! That is why I cannot agree with you.
I can understand that you might offer:
(c) Ed is not so well educated in OO design and programming as
he thinks.
That is, of course, possible. :-)
-Ed
From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx
[mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Rich Cooper
Sent: Saturday, October 17, 2015 11:01 AM
To: '[ontolog-forum] '
Subject: Re: [ontolog-forum] A Question About Mathematical Logic
Dear Ed,
You wrote:
I will take issue with your
statement below:
> Every software object
contains properties, state, methods, context, classes of objects, instances of
objects and events.
Consider a COBOL
program. It contains data elements and data structures and specifications
of methods involving them. The programmer understands the nature of some
of those methods as computing data values according to formulae, and the nature
of other methods to be reorganizing the data elements for some presentation
purposes or some computational purpose such as “lookup”. That
is what the software object contains, and nothing more.
Back in the sixties and seventies, that would be right. But nobody programs in plain vanilla COBOL any more. The most recent COBOLs are object-oriented, so you no longer code in the simple, flat way in which older COBOL programs were written. I suppose it is still possible to write vanilla COBOL even in object-oriented COBOL (OOC) languages, but that is not good engineering practice any more.
For example, even in the year 2000, ValuTech used OOC to write
COBOL programs for F500 companies which still used IBM mainframes. Only a
few of us wrote new stuff in Delphi, but both languages support objects,
properties, events, etc.
Likewise, nobody still programs in Fortran, C, Pascal, .... Instead, those languages have given way, perhaps 95% (a pure guess), to OO languages like C++, Delphi, and Object Lisp, and even the newest Ada has object-oriented features.
What you are stating is correct for the olden days, when we all wore short pants, but now that we are all grownups, only OO compilers are used by serious software engineers with commercial or military missions. Do hobbyists use the non-OO languages? Perhaps some do. But the vast majority of code is now written in OOLs, where there are objects, properties, events, etc.
Then you wrote:
Object-oriented programming
does not change that; it just changes what the data structures are
called. It does create a more powerful programming capability that
mirrors taxonomic classification (and occasionally interferes with other
classification systems).
As a software engineer, I use objects, properties and events the way they are intended - as true class implementations. While it might be true that you can program OO routines in non-OO ways, that is more a matter of a badly taught programmer than a badly built language.
The point is that an OO ‘class’ is a computational representation of an aspect of an ontological ‘class’. The computational representation constrains the expression, and the aspect is not at all the same thing as the class.
RC> I think when you use that word "ontological" you are back into the philosopher's viewpoint rather than the engineer's viewpoint. It is true that knowledge CAN BE encoded in non-OO structures. But it is bad practice now that better tools are available.
EB>The idea that those
data structures represent what we might call classes of objects and that a
certain data element represents some property of that object is not present in
the program per se. It may, to some extent, be present in the comments,
or implied by some of the data element names. The software engineer who
designed the data structures to support a particular business activity must know
what the context is, and have some understanding of the classes and properties
of objects being represented. But COBOL itself provides no direct means
for capturing that understanding or any of the notions: class, object,
property, state, event, or context. Those terms are not present in the
definition of the language. And the same is true of Fortran, C, Pascal,
Ada, etc.
I.e., for those languages that are not object oriented.
Now, in a language like Java
or C#, the terms ‘class’, ‘object’, ‘method’, are used. But ‘class’ means
a pattern for the instantiation of ‘objects’ as data structures. It is
asserted in the OO Design literature that a ‘class’ is a natural representation
of a real world category of things of interest, and that a ‘member’ (attribute)
of the class represents a property or relationship, but those notions do not
exist in the language.
Whoa! A class in OOD is literally a template, but it also has its own methods, including the inherited, local, virtual, and every other kind of method that is available to the programmer in OO languages. But a class is not ONLY instantiated data structures. It includes methods for that class which are not available for other classes, so the class also carries behavior whose interpretation must remain consistent with the use of that object, its ancestors, and its instances.
Here is an example specification for TRelation which I wrote in Delphi in the 1990s:
unit unRelation;
{$WARNINGS OFF}

interface

uses Classes, stdctrls, extctrls;

{common streaming file used for all rsObject types}
var StreamFile : File of Byte;

type TRelationPredicate = function( D : Integer ) : Boolean of Object;
     TRelationAction    = procedure ( Xs : Integer ) of Object;

type TTypeOfRelation =
  ( itSimpleDomain,      //strings, integers, reals, ...
    itPairDomain,        //TElkPair - pair of TElkThing
    itTripleDomain,      //TElkTriple - three TElkThings - cortical edge, labeled w f(parameters)
    itThingDomain,       //TElkName, TElkType, TElkValue
    itSymbolTableDomain, //TElkThing - a column
    itOrderingDomain,    //TElkOrdering - universal joint
    itPredicateDomain,   //TElkPredicate : Boolean
    itActivityDomain,    //TElkActivity
    itUndefinedDomain ); //TElkUndefined

type TypeIndex = Integer; {will be replaced later by ArrivalID into assigned table lookup}

type TRelationRow     = pointer;
     TRelationPagePtr = pointer;

     TPageHeader = record
       HTag            : TTypeOfRelation;
       HRowSize,
       HMaxRowsPerPage : Integer;
       HDuplicates     : Boolean;
       HSizable        : Boolean;
       HNRows,
       HThisRow        : Integer;
     end;

     TComparison = (itLess, itEqual, itGreater, itUnknown);

     TCompareRowsEvent   = function(A, B : TRelationRow) : TComparison of Object;
     TCopyRowEvent       = procedure(A, B : TRelationRow) of Object;
     TChangedCursorEvent = procedure of Object;
     TApplyProcedure     = procedure(A : TRelationRow) of Object;
     TIsSubsetEvent      = function(A : TRelationRow) : Boolean of Object;
     TApplyPredicate     = function(A : TRelationRow) : Boolean of Object;
     TUpdatePredicate    = function(Newer, Older : TRelationRow) : Boolean of Object;
     TUpdateProcedure    = procedure(Newer, Older : TRelationRow) of Object;
These are true OO developments with the full power of OOP. Note the use of objects, properties, events, classes, and inheritance. This is the way software engineers write now.
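To make the event mechanism concrete, here is a minimal sketch, not taken from the original unRelation unit: the TRelation class, the TIntRowHandler class, and the OnCompareRows property below are assumed purely for illustration of how one of the "of Object" event types above is wired to a handler and fired.

  unit unRelationDemo;

  interface

  uses unRelation;

  type
    TRelation = class
    private
      FOnCompareRows : TCompareRowsEvent;   // method pointer: code address plus object instance
    public
      property OnCompareRows : TCompareRowsEvent read FOnCompareRows write FOnCompareRows;
      function Compare(A, B : TRelationRow) : TComparison;
    end;

    TIntRowHandler = class
      function CompareIntRows(A, B : TRelationRow) : TComparison;
    end;

  implementation

  function TRelation.Compare(A, B : TRelationRow) : TComparison;
  begin
    if Assigned(FOnCompareRows) then
      Result := FOnCompareRows(A, B)        // fire the event through the bound object
    else
      Result := itUnknown;
  end;

  function TIntRowHandler.CompareIntRows(A, B : TRelationRow) : TComparison;
  begin
    if PInteger(A)^ < PInteger(B)^ then Result := itLess
    else if PInteger(A)^ > PInteger(B)^ then Result := itGreater
    else Result := itEqual;
  end;

  end.

A caller would write something like Rel.OnCompareRows := Handler.CompareIntRows, after which every call to Rel.Compare dispatches through that bound method and its object instance.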
EB>It has been my
experience that many software engineers reduce a conceptual space to a set of
data structures and understand the conceptual space only in
terms of those data structures. Object-oriented
programming does not change that; it just changes what the data structures are
called.
RC> Again, we disagree. These "data structures" contain only the data part of the object, not the methods, events, or anything else in the richer, more self-contained representations now used in OO languages, and certainly in large application programs. There are also executable parts, ranging from fragments for accessing properties given the object, to the inheritance lookup functions that distinguish between virtual and actual methods and which must be interpreted to determine the meaning of a function call.
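As an illustration of that virtual-versus-actual distinction, here is a minimal sketch; the TAnimal and TDog classes are invented for the example and are not part of any code above. A call through a TAnimal reference is resolved through the class's virtual method table at run time when the method is virtual, and at compile time when it is not.

  program VirtualDispatchDemo;

  type
    TAnimal = class
      function Describe : string; virtual;   // virtual: looked up in the VMT at run time
      function Kingdom  : string;            // static: bound at compile time
    end;

    TDog = class(TAnimal)
      function Describe : string; override;  // replaces the inherited VMT entry
    end;

  function TAnimal.Describe : string;
  begin
    Result := 'some animal';
  end;

  function TAnimal.Kingdom : string;
  begin
    Result := 'Animalia';
  end;

  function TDog.Describe : string;
  begin
    Result := 'a dog';
  end;

  var
    A : TAnimal;
  begin
    A := TDog.Create;
    WriteLn(A.Describe);   // prints 'a dog': the instance's class decides
    WriteLn(A.Kingdom);    // prints 'Animalia': the declared type decides
    A.Free;
  end.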
It does create a more
powerful programming capability that mirrors taxonomic classification (and
occasionally interferes with other classification systems). The point is
that an OO ‘class’ is a computational representation of an aspect of an
ontological ‘class’. The computational representation constrains the
_expression_ and the aspect is not at all the same thing as the class.
Perhaps you are using the word "aspect" in a strange way I am not familiar with. If by aspect you mean the classes, instances, events, etc., then I believe those are real implementations of the ontological object representations, and therefore what the software does with the representation is what you should focus on, rather than just the data-structure parts.
EB> Software engineering,
therefore, is also the utilization of a set of devices, mechanisms, etc., to
produce a machine that exhibits a target functionality. *Some* software
engineers begin by modeling the conceptual space and validating that model with
the domain experts, and then map the concepts to computational structures and
mechanisms. But many don’t!
Yes, there are bad programmers just as there are bad
politicians, bankers, voters, and sliced bread. But the appropriate
practice in software engineering is now OOD, meaning the full Magilla, not just
data structures.
But the greatest difference between the OO systems and the non-OO systems is the event structuring. I remember the bad old days when every program had a control part that was studied as if it were a linear system that had to be kept within time and storage constraints. That is no longer an issue in MOST software applications. The speed and storage of computers have grown to the point where it is truly possible to model the OO features, including nested events, nested objects, and nested property lists.
EB> Put another way, if
there is a rote mapping from your concept system to Java, you don’t have a very
interesting concept system. (Or you have constructed the concept set
under Java constraints rather than domain constraints.) In most cases,
90% of the domain concept system has a rote mapping to Java, but it is that
last 10% that requires you to “use ontology” rather than OO Design in modeling
the space.
Software engineers are concerned with making the system function according to the requirements. They don't normally write the requirements - that is done by the customer, or by a systems engineer who understands the requirements in operational terms but does not understand the impact of requirements on the software needed to make the system run properly, reliably, and predictably.
EB> I work in a community
that is trying to *teach* software engineers to (a) use good engineering
practices, and (b) “use ontology” in some conceptual modeling sense. But
in 25 years we have made only moderate inroads in the general practice of
software engineering.
I would expect that from the fact that most
"programmers" are not true software engineers. They are geology
majors, physicists, philosophy majors, mathematicians and English Lit majors
who want to be able to earn a living.
In the many reviews (office actions, inter partes reviews, etc.) before a patent is issued, there is a virtual person called the PHOSITA, or POSITA, meaning person having ordinary skill in the art. For litigation reviewing the issued patent for validity, both sides must determine what a POSITA is (usually, it is set as having a recent degree in CS with three or more years of experience). Those "programmers" you speak of are more like old guys who never updated their skills because their real interest is in geology, physics, philosophy, math and English lit. That is like calling your garage's mechanic a physicist because he works with engines.
And I can assure you that
there are numerous ignorant software engineers out there who will be only too
happy to agree with your assertion that they “use ontology” all the time (and
don’t need to learn anything about modeling). And that is why I
object.
That is a good reason to object, but not a reason to object to the OO best practices. You have good reason to object that the people you are calling "programmers" wouldn't be acceptable POSITAs in any case.
You personally are doubtless
“enlightened”, but it is a mistake to believe that the median software engineer
is.
-Ed
Thanks for the sweet endorsement!
Sincerely,
Rich Cooper,
Chief Technology Officer,
MetaSemantics Corporation
MetaSemantics AT EnglishLogicKernel DOT com
( 9 4 9 ) 5 2 5-5 7 1 2
http://www.EnglishLogicKernel.com
Hello Ed,
You wrote:
If engineers “use ontology
all the time”, then the same is true of farmers and auto mechanics and
Hollywood starlets.
I should have said that SOFTWARE ENGINEERS use
ontology all the time. Every software object contains properties, state,
methods, context, classes of objects, instances of objects and events. That
is how and why SOFTWARE ENGINEERS use ontology all the time in a way that has
nothing in common with farmers and auto mechanics and Hollywood starlets.
Engineers have a concept of
the target functionality of the device they build, some concept of the
restrictions on the nature of that device, and a mental store of device
mechanisms, means of accomplishing elements of the target functionality, and
means of testing for required and desired properties. That is what
“engineers use” in performing their trade, along with the supporting reasoning
skills. Now, what part of that is what you mean by “ontology”?
I agree with that description for most other kinds of engineers, but not for software engineers. Restricted to software engineers, I stand by the prior statement I made; for other kinds of engineers, I retract it. Thanks for pointing out my unconscious bias in that statement, Ed. This is more precise.
Sincerely,
Rich Cooper,
Chief Technology Officer,
MetaSemantics Corporation
MetaSemantics AT EnglishLogicKernel DOT com
( 9 4 9 ) 5 2 5-5 7 1 2
http://www.EnglishLogicKernel.com
Rich,
I have been in agreement with large parts of your position, to
the extent that I could determine what you meant, but this one is simply
confused.
If engineers “use ontology all the time”, then the same is true
of farmers and auto mechanics and Hollywood starlets. Engineers have a
concept of the target functionality of the device they build, some concept of
the restrictions on the nature of that device, and a mental store of device
mechanisms, means of accomplishing elements of the target functionality, and
means of testing for required and desired properties. That is what
“engineers use” in performing their trade, along with the supporting reasoning
skills. Now, what part of that is what you mean by “ontology”?
To the extent that that store of knowledge is systematized, one
could describe it as a 'concept system', which might be an interpretation of
the term 'ontology'. But a 'concept set' is not an 'ontology', in either
the philosophical sense or the knowledge engineering sense. The
difference between a 'set of concepts' and a 'concept system' lies in the word
other contributors have emphasized – coherence. And the difference
between a concept system and philosophical 'ontology' is the coincidence of the
system with the observed world.
Most importantly, engineering is about creating things that
don’t yet exist, and that does not seem to require a fundamental systematic
basis for what does exist. Engineers often have a largely unvalidated
concept set for the world they care about. And in many cases, if a device
accomplishes the target function, other undesirable and incomprehensible design
features can be ignored, which furthers the ignorance that begot them.
While Thomas worries about what requirements changes can be foreseen, it
has been my experience that many engineers don’t understand the impact of their
component design on other parts of the product that already exist. And
once multiple engineers become involved in a system design, the idea that there
is a single underlying concept system can usually be dismissed outright.
The function of 'ontology' in the knowledge engineering sense is
to document the common concept system that will be used to guide the
development of all the parts of the software product, including the meaning of
requirements and restrictions. It is part of the model – validate – build
sequence that we are trying to teach software engineers, as distinct from the
seat-of-the-pants envisage – build – test – hack approach that most software
engineers actually use.
In so many words, I do not think that most software engineers
actually use 'ontology' in any sense. I do agree however that the
emerging discipline restricts the relevant 'ontology' to that which is needed
for the task at hand. The interesting problem with scope turns out to be
determining where you can safely stop. And that goes directly to the
“what have we forgotten” point that Thomas makes – what aspects of the
operating environment might be anticipated to affect the viability of the
product?
-Ed
P.S. I don’t even want to think about the 'domain
interpreter' idea.
Dear Tom
You wrote:
Ontology engineers who plug
in lexical items for concepts in non-controversial fragments of ontologies,
don't have to do ontology. In that, I agree with you.
I disagree! All software engineers use ontologies in every
program they write. They absolutely DO HAVE TO DO ONTOLOGY. And any
attempt to build a large software system without an ontology is doomed.
That seems to be the part you are not getting.
Engineers use ontology all the time. But they use only as much as they need for each application, and they spend a lot of time eliminating unnecessary representations, because it is good engineering practice to do so.
Perhaps the kind of products you are describing should be
wrapped up in a DLL that can be added to a project so it would inherit all
those philosophical alternatives without a lot of extra programming and
engineering.
For example, you could write a domain interpreter which you can plug into any DBMS to add to the set of basic domains (integer, text, date, ...). That interpreter would know how to work with some basic set of philosophical concepts which you expect to be commonly used, if such commonality exists. Perhaps all the Xdurants could be in one DLL, and all the Currency, or Threats, or Resources, or other concepts you want to export could be packaged in a small set of DLLs. That would leave a lower cost threshold for adding those commonly used concepts. But you would still incur more software engineering expense to expand into the SECOND system, with the DLLs being referenced by more software. A sketch of the idea appears below.
Predurants, postdurants, endurants, etc. would be example domains in that scenario.
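A purely hypothetical sketch of what such a domain-interpreter plug-in might look like in Delphi; the interface name and methods below are invented for illustration and are not part of any existing DBMS or DLL.

  unit unDomainInterpreter;

  interface

  type
    { One implementation of this interface per exported domain (e.g. 'Endurant'),
      packaged in a DLL and registered with the host DBMS alongside the built-in
      domains such as integer, text, and date. }
    IDomainInterpreter = interface
      function DomainName : string;
      function Parse(const Literal : string; out Value : Pointer) : Boolean;  // text to internal value
      function Compare(A, B : Pointer) : Integer;                             // ordering within the domain
      function Format(Value : Pointer) : string;                              // internal value back to text
    end;

  implementation

  end.

The host DBMS would then treat a column of type 'Endurant' the way it treats a date column: parsing literals, comparing values for indexing, and formatting values for display, all by delegating to the DLL's interpreter.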
Sincerely,
Rich Cooper,
Rich Cooper,
Chief Technology Officer,
MetaSemantics Corporation
MetaSemantics AT EnglishLogicKernel DOT com
( 9 4 9 ) 5 2 5-5 7 1 2
http://www.EnglishLogicKernel.com
From: Thomas Johnston [mailto:tmj44p@xxxxxxx]
Sent: Thursday, October 15, 2015 9:29 AM
To: Rich Cooper; '[ontolog-forum] '
Subject: Re: [ontolog-forum] A Question About Mathematical Logic
I don't think I have misunderstood
the engineers in this thread, and I haven't seen any comments yet that persuade
me I have.
Your "Philosophy", if it's
in reference to my comments, is formal ontology, especially upper-level
ontology. I do agree that lower-level ontology fragments -- product hierarchies
is an example I often use -- can stand alone, without any "philosophical
justification". But if later on, someone wants to create a unified
ontology of income-producing instruments for a vertical industry group, then
each company's ontology fragment has to be integrated/bridged to the others.
Suppose company X doesn't distinguish between products (e.g. tile for a
bathroom) and services (the installation of those tiles in a customer's home);
X has an ontology which includes ":installed bathroom flooring", but
company Y, on the other hand, has no such item in its inventory. For Y, floor
tiling is a product, and installation is a service that uses that product.
The "philosophical"
question here is whether the mid-level ontology (for the vertical industry
group) should include "installed product" as an ontological category.
Whether or not it does, X and the other companies who use "installed
product" as a category, or Y and the other companies who do not, will have
to do a lot of database re-engineering in order to conform to the industry
group ontology. "Doing philosophy", in the sense of establishing
ontologies, gets right down to nitty-gritty software engineering work for the
engineers who thought that high-flown upper-level ontology work was irrelevant
to the practical work that engineers do.
Later on, even upper-level ontologies
can become relevant to such nitty-gritty work as re-engineering databases. In
scientific databases, it matters whether space and time are represented as
discrete or continuous. With respect to my own work, I have suggested that
distinguishing three temporal dimensions in databases enables us to record and
retrieve important information that the ISO-standard two temporal dimensions
cannot. And I would emphasize that introducing a third temporal dimension was
not just a matter of adjusting software to manage temporal triples instead of
temporal pairs. If that's all it were, then a software engineer might think
that every date/time piece of metadata for the rows of a database table would
constitute a new temporal "dimension"; and that outlook has lighted
many fools the way to dusty software death.
Instead, a third temporal dimension
is introduced based on the "philosophical" distinction between (i)
inscriptions of declarative sentences (rows in tables), (ii) the statement that
multiple copies of the same row are inscriptions of; (iii) the propositions
that synonymous statements are expressions of; and (iv) the propositional
attitudes that users of databases express when they update databases, and
presuppose when they query those databases. It is also based on an extension of
Aristotle's basic ontology, an extension which I describe in Chapter 5 of my
oft-alluded-to book.
This is doing philosophy, in anyone's
book. It is what led me to recognize the existence of a new ontological
category -- the temporal dimension I called "speech-act time" in my
book. Ontology engineers who plug in lexical items for concepts in
non-controversial fragments of ontologies, don't have to do ontology. In that,
I agree with you. But once those engineers are tasked with extending their
constructs beyond the non-controversial scope of those constructs, they get in
trouble (cf. my "customer" example, also my "installed
product" and third temporal dimension discussed in this comment). They get
in trouble because they are then confronted with the need to do ontology; they
enter an arena in which they will be forced to "do philosophy" --
which is something that they feel in their guts (and have often expressed in
this forum) is irrelevant to the "REAL" work that engineers do.
I don't expect to convince you. But I
do believe it's worth trying to say, as clearly as I can, what my understanding
of the issue of doing formal ontology vs. doing ontology engineering is. My
understanding is that both are important, that, to paraphrase Kant, "ontology
without engineering is empty; engineering without ontology is blind".
I have seen several remarks, by the
engineers among us, about ontology and semantics being irrelevant
to the work they do, being irrelevant, as you put it, to "real engineering
problems". But I have also seen the confusion engineers create when they
work with anything other than uncontroversial ontology fragments, e.g. a
company's product hierarchy.
No! You're missing the point about engineering. Philosophical justifications for ontologies are what I, perhaps among others, disagree with. But the need for an ontology within a complex software architecture, and therefore the need for clear, precise semantics for interpreting that ontology's components in all situations, is a primary engineering concern.
It is only the philosophy part, the attempt to link application ontologies to some overarching totality of existential ontology insisted upon from that philosophical perspective, that perturbs this engineer, and likely others. It adds unnecessary complexity to the architecture of the software, which should be minimized, not expanded.
Every addition of one more component
to an ontology drives its complexity up in an exponential curve. Not a
good thing for developing software especially. So adding even more
components having only philosophical justification and not specifically application
justification is the wrong direction, IMHO.
Chief Technology Officer,
MetaSemantics Corporation
MetaSemantics AT EnglishLogicKernel
DOT com
From: Thomas
Johnston [mailto:tmj44p@xxxxxxx]
Sent: Tuesday, October 13, 2015 6:05 PM
To: Rich Cooper; '[ontolog-forum] '
Subject: Re: [ontolog-forum] A Question About Mathematical Logic
Like an earlier comment, yours emphasizes, I believe, the need to discuss (i) the difference between formal ontology and ontology engineering (which is roughly the difference between theory and practice), and (ii) the problems that arise when ontology engineers find themselves having to do ontology, rather than just plugging uncontroversial mini-ontologies into some well-defined framework (like Protege) or into a framework/template toolkit like OWL/RDF. I intend to do this in a new thread, and soon.
I have seen several remarks, by the
engineers among us, about ontology and semantics being irrelevant to the work
they do, being irrelevant, as you put it, to "real engineering
problems". But I have also seen the confusion engineers create when they
work with anything other than uncontroversial ontology fragments, e.g. a
company's product hierarchy.
As an ontologist, and a person
somewhat familiar with systems of logic, I nonetheless appreciate the
importance of getting ontologies into frameworks. That, in my opinion, is what
puts the semantics in the Semantic Web -- it gives automated systems, doing cross-database
queries, the ability to understand cross-database semantics. (Pat Hayes to
correct me, please, if I'm off course here.)
An example I have come across in
every one of two dozen enterprises I have worked for, is the question:
"What is a customer?", where that question, more fully, means
"What does your enterprise take a customer of yours to be?" I have
never found subject matter experts who have been able to answer that question,
without a good deal of help from me. And the help I provide is help in doing
ontology clarification work, not help in plugging lexical items representing
ontological categories into an ontology tool. Moreover, I have never found two
enterprises whose experts defined "customer" in exactly the same way.
From which it follows that a cross-database query that assumes that two tables named "Customer Table", in two different enterprises' databases, are both about customers, is almost certain to be mistaken. Both tables may be about fruit, but there is certain to be an apples and oranges issue there.
A formal ontology which includes
customers, on the other hand, might be able to distinguish apples from oranges
if it could access an ontology framework about customers. Given that the
concepts have been correctly and extensively-enough clarified, here is where
the ontology engineer proves his worth.
But to define the category Customer
clearly enough, it isn't engineering work that needs to be done. It's the far
more difficult (in my opinion) ontology clarification work that needs to be
done. (I expand on this example in the section "On Using Ontologies",
pp. 73-74 in my book Bitemporal Data: Theory and Practice. I think I also
elaborated on it a few weeks or months ago, here at Ontolog.)
So I think that engineers who suggest
that clarifying ontological categories is irrelevant to their work as ontology
engineers, are mistaken. Such work seems mistaken to them, I think, because
most of the ontologies they put into their well-defined frameworks are
relatively trivial, i.e. are ontologies that subject matter experts have no
trouble agreeing on. The lower-level the ontologies we engineer, the more that
will tend to be the case.
But ascend into mid-level or
upper-level ontologies, and ontology engineers get lost, and don't know how to
find a clear path through the forest whose trees are those categories. And so
instead of admitting "We're lost", they say instead "We strayed
into a swamp that has nothing to do with the real engineering work we do --
which turns out to be the relatively straightforward work of plugging labels
for uncontroversial ontological categories, and taxonomies thereof, into
Protege or its like".
I say, on the contrary, that conceptual clarification work in mid- and upper-level ontologies has everything to do with ontology engineering, and is where the really difficult work of that engineering is done. An analogy: machine-tooling parts is the hard work of manufacturing; assembling those parts is the easy work.
And my apologies to Leo, Pat and others whose comments on my question I have not yet responded to. I will, and soon. And I thank them and all other respondents for helping me think through the question I raised.
Although the approach you are suggesting might entertain some philosophical questions, and therefore be entertaining to philosophers, it has little or no relevance to real engineering problems, which are almost never applied to the actual universe of every possible entity, i.e., infinite supplies.
In engineering applications, Ex(...) would normally apply only to finite-sized, or traversably infinite-sized, problems. Scope is important in engineering, i.e., where you draw the lines around what constitutes a system, which contains all the entities, enumerations of variables, constants, and functions in real problems.
Even unbounded engineering problems have limits to the possible types that can be used, though mechanisms like stacks, or even Turing machines with an infinite supply of tape squares, attempt to approximate boundless sizes.
So I suggest your title should be A
Question About Mathematical Logic, since engineers who consider themselves
logic designers would find the ideas impractical, though linguists might be
more interested.
Chief Technology Officer,
MetaSemantics Corporation
MetaSemantics AT EnglishLogicKernel
DOT com
Suppose someone else asserts,
instead, that "No dogs are renates". Certainly, to do that, that person
must believe that there are such things as dogs and, in addition, believe that
none of them are renates (a false belief, of course).
On Tuesday, October 13, 2015 11:57
AM, Thomas Johnston <tmj44p@xxxxxxx>
wrote:
My intuitions tell me that anyone who
asserts "All dogs are renates" believes that there are dogs (i.e. is
ontologically committed to the existence of dogs) just as much as someone who
asserts "Some dogs are friendly".
Suppose someone else asserts,
instead, that "No dogs are renates". Certainly, to do that, that
person must believe that there are such things as dogs and, in addition,
believe that some of them are not renates (a false belief, of course).
Now for "Some dogs are
friendly", and also "Some dogs are not friendly". In both cases,
we all seem to agree, someone making those assertions believes that there are
dogs.
Now I'm quite happy about all this.
If I make a Gricean-rule serious assertion by using either the "All"
quantification or the "Some" quantification, I'm talking about
whatever is the subject term in those quantifications – dogs in this case. I'm
particularly happy that negation, as it appears in the deMorgan's translations
between "All" statements and "Some" statements, doesn't
claim that a pair of statements are semantically equivalent, in which one of
the pair expresses a belief that dogs exist but the other does not.
But in the standard interpretation of
predicate logic, that is the interpretation. In the standard interpretation,
negating a statement creates or removes the expression of a belief that
something exists. My beliefs in what exist can't be changed by the use of the
negation operator. Apparently, John's beliefs can, and so too for everyone else
who feels comfortable with predicate logic as a formalization of commonsense
reasoning, and with the interpretation of one of its operators as "There
exists ....".
I usually don't like getting into tit
for tats. Those kinds of discussions always are about trees, and take attention
away from the forest. But I'll make exceptions when I think it's worth taking
that risk (as I did in my response to Ed last night).
From John Sowa's Oct 12th response:
TJ
> why, in the formalization of predicate logic, was it decided
> that "Some X" would carry ontological commitment
Nobody made that decision. It's a fact of perception. Every
observation can always be described with just two operators:
existential quantifier and conjunction. No other operators can
be observed. They can only be inferred.
(1) If all ontological commitments
have to be based on direct observation, then we're right back to the Vienna
Circle and A. J. Ayer.
(2) And what is it that we directly
observe? A dog in front of me? Dogs, as Quine once pointed out, are ontological
posits on a par with the Greek gods, or with disease-causing demons. (I am
aware that this point, in particular, will likely serve to reinforce the
belief, on the part of many engineering types in this forum, that philosophy
has nothing to do with ontology engineering. That's something I want to discuss
in a "contextualizing discussion" I want to have before I pester the
members of this forum with questions and hypotheses about cognitive/diachronic
semantics. What does talk like that have to do with building real-world
ontologies in ontology tools, in OWL/RDF – ontologies that actually do
something useful in the world?)
(3) I wouldn't talk about some dogs
unless I believed that some dogs exist. And if some dogs exist, then all dogs
do, too. Either there are dogs, or there aren't. If there are, then I can talk
about some of them, or about all of them. If there aren't, then unless I am
explicitly talking about non-existent things, I can't talk about some of them
nor can I talk about all of them, for the simple reason that none of them
exist. To repeat myself: if any of them exist, then all of them do.
(4) And I am, of course, completely aware that trained logicians since Frege have been using predicate logic, and that, at least since deMorgan, they have been imputing to negation the power to create and remove ontological commitment.
(5) Here's a quote from Paul Vincent
Spade (very important
guy in medieval logic and semantics):
"This doctrine of “existential
import” has taken a lot of silly abuse in the twentieth century. As you may
know, the modern reading of universal affirmatives construes them as quantified
material conditionals. Thus ‘Every S is P’ becomes (x)(Sx ⊃ Px), and is true, not false, if there are
no S’s. Hence (x)(Sx ⊃ Px) does not
imply (∃x)(Sx). And that
is somehow supposed to show the failure of existential import. But it doesn’t
show anything of the sort .... "
So Spade approaches this as the issue
of the existential import of universally quantified statements. He points out
that, from Ux(Dx --> Rx), we cannot infer Ex(Dx & Rx). The rest of the
passage attempts to explain why. I still either don't understand his argument,
or I'm not convinced by it. Why should "All dogs are renates" not be
expressed as Ux(Dx & Rx)?
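To lay the two readings side by side (my restatement of the formulas above, not part of Spade's text):

\[
\forall x\,(Dx \rightarrow Rx) \;\not\models\; \exists x\,(Dx \wedge Rx),
\qquad\text{whereas}\qquad
\forall x\,(Dx \wedge Rx) \;\models\; \exists x\,(Dx \wedge Rx)
\]

(the latter in any non-empty domain). The conditional reading is vacuously true when there are no dogs; the conjunctive reading carries the existential import that is at issue here.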
From John's reply, I think he would
say that it's because we can only observe particular things; we can't observe
all things. But in the preceding points, I've tried to say why I don't find
that convincing.
(6) Simply the fact that decades of
logicians have not raised the concerns I have raised strongly suggests that I
am mistaken, and need to think more clearly about logic and ontological
commitment. But there is something that might make one hesitate to jump right
to that conclusion. It's Kripke's position on analytic a posteriori statements
(which I have difficulty distinguishing from Kant's synthetic a priori
statements, actually -- providing we assume that the metaphors of
"analytic" as finding that one thing is "contained in"
another thing, and of "synthetic" as bringing together two things
first experienced as distinct, are just metaphors, and don't work as solid
explanations).
All analytic statements are
"All" statements, not "Some" statements. Kripke suggests
that the statement "Water is H2O" is analytic but a posteriori. In
general, that "natural kind" statements are all of this sort. Well, a
posteriori statements are ones verified by experience, and so that would take
care of John's Peircean point that only "Some" statements are
grounded in what we experience.
I don't know how solid this line of
thought is. But if there is something to it, that might suggest that if we
accept Kripke's whole referential semantics / rigid designator / natural kinds
ideas (cf. Putnam's twin earth thought experiment also), then perhaps we should
rethink the traditional metalogical interpretation of "All dogs are
renates" as Ux(Dx --> Rx), and consider, instead, Ux(Dx & Rx).
Well, two summing-up points. The
first is that Paul Vincent Spade thinks that my position is "silly",
and John Sowa thinks that it's at least wrong. The second is that such
discussions do indeed take us beyond the concerns of ontology engineers, who
just want to get on with building working ontologies.
As I said above, I will address those
concerns of ontology engineers before I begin discussing cognitive semantics in
this Ontolog (Ontology + Logic) forum.
Tom, Ed, Leo, Paul, Henson,
TJ
> why, in the formalization of predicate logic, was it decided
> that "Some X" would carry ontological commitment
Nobody made that decision. It's a fact of perception. Every
observation can always be described with just two operators:
existential quantifier and conjunction. No other operators can
be observed. They can only be inferred.
EJB
> I was taught formal logic as a mathematical discipline, not
> a philosophical discipline. I do not believe that mathematics
> has any interest in ontological commitment.
That's true. And most of the people who developed formal logic
in the 20th c were mathematicians. They didn't worry about
the source or reliability of the starting axioms.
Leo
> most ontologists of the realist persuasion will argue that there
> are no negated/negative ontological things.
Whatever their persuasion, nobody can observe a negation. It's
always an inference or an assumption.
PT
> on the inadequacy of mathematical logic for reasoning about
> the real world, see Veatch, "Intentional Logic: a logic based on
> philosophical realism".
Many different logics can be and have been formalized for various
purposes. They may have different ontological commitments built in,
but the distinction of what is observed or inferred is critical.
HG
> I keep wondering if this forum has anything useful to offer the
> science and engineering community.
C. S. Peirce was deeply involved in experimental physics and
engineering. He was also employed as an associate editor of the
_Century Dictionary_, for which he wrote, revised, or edited over
16,000 definitions. My comments below are based on CSP's writings:
1. Any sensory perception is evidence that something exists;
   a simultaneous perception of something A and something B
   is evidence for (Ex)(Ey)(A(x) & B(y)).

2. Evidence for other operators must *always* be an inference:

   (a) Failure to observe P(x) does not mean there is no P.
       Example: "There is no hippopotamus in this room" can only be
       inferred iff you have failed to observe a hippo and know that
       it is big enough that you would certainly have noticed one if
       it were present.

   (b) (p or q) cannot be directly observed. But you might infer
       that a particular observation (e.g. "the room is lighted")
       could be the result of two or more sources.

   (c) (p implies q) cannot be observed, as Hume discussed at length.

   (d) A universal quantifier can never be observed. No matter how
       many examples of P(x) you see, you can never know that you've
       seen them all (unless you have other information that
       guarantees you have seen them all).
TJ
> But now notice something: negation creates and removes ontological
> commitment. And this seems really strange. Why should negation do this?
The commitment is derived from the same background knowledge that
enabled you to assert (or prevented you from asserting) the negation.
> I'd also like to know if there are formal logics which do not
> impute this extravagant power of ontological commitment /
> de-commitment to the negation operator in predicate logics.
Most formal logicians don't think about these issues -- for the
simple reason that most of them are mathematicians. They don't
think about observation and evidence.
CSP realized the problematical issues with negation, but he also
knew that he needed to assume at least one additional operator.
And negation was the simplest of the lot. Those are the three
he assumed for his existential graphs. (But he later added
metalanguage, modality, and three values -- T, F, and Unknown.)
John
PS: The example "There is no hippopotamus in this room" came from
a remark by Bertrand Russell that he couldn't convince Wittgenstein
that there was no hippopotamus in the room. Russell didn't go
into any detail, but I suspect that Ludwig W. was trying to
explain the point that a negation cannot be observed.