Yes, that gives me an overview I can work with. Thanks Hans,
Sincerely,
Rich Cooper,
Chief Technology Officer,
MetaSemantics Corporation
MetaSemantics AT EnglishLogicKernel DOT com
( 9 4 9 ) 5 2 5-5 7 1 2
http://www.EnglishLogicKernel.com
From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Hans Polzer
Sent: Wednesday, October 14, 2015 2:49 PM
To: '[ontolog-forum] '
Subject: [ontolog-forum] A question of scope
Rich,
I changed the subject line to start a new thread so that those more
interested in the original question (posed by Tom, I believe) can skip over this
one. Also, the answers to the questions you ask could fill a book, so I will
point you to the SCOPE model document itself, but also give a necessarily brief
response that will give you some idea of what I am talking
about, though it will probably cause you to ask more specific questions.
First, the SCOPE model document (not a quick read!): http://www.ncoic.org/images/technology/SCOPE_MODEL_VER1.0.pdf
Second, a bit of background. The first version of SCOPE came out
of work I was involved with in trying to get a multiplicity of real-world
systems developed by a variety of organizations to work together for meaningful
operational purposes. While there were any number of technical
design/architecture issues that had to be overcome or bypassed in
some way, by far the more significant issues encountered were the
scope boundaries drawn (or, more often, assumed) by the individual system
sponsors and developers. The SCOPE model collected these issues into categories
and abstracted/generalized them into what we called scope dimensions. The early
SCOPE model was contributed to NCOIC which subsequently made the model more general
and extended it in several important areas. SCOPE is a conceptual model written
in English prose (with due apologies to some of the contributors and readers
who complain about the lack of a good tech editor). It is not a computer
representation language, if I interpret what you mean by that correctly. Some
of the SCOPE dimensions could probably be encoded in such a language, but
others might be difficult to represent in such a fashion.
Regarding your mention of the paper on “Multiplicity of
Perspectives……”, this was done with Dr. Dan DeLaurentis and one of his graduate
students, Don Fry, for an IEEE conference on Systems of Systems Engineering. It
does touch on the SCOPE model and another model for characterizing scope that
Dr. DeLaurentis had developed. However, it’s really more about the motivation
for scope models and what causes many (most?) interoperability problems to
arise in the first place. It posits that a major source of such problems is
events (or more gradual changes in the ecosystem) that disrupt or counter the
scope assumptions (explicit or implicit) prevalent at the time the systems were
being conceived and developed. Put differently, these systems didn’t have each
other on their respective radars when the requirements for each were drawn up.
And now something has happened that makes it more operationally effective to
have the systems work with each other (than continuing to pretend that the
other systems don’t exist). The paper gives some examples of real-world events
that have caused such context and scope shifts and the kinds of impacts or
problems these generated in the systems involved. I can send you a copy of the
full paper if you want – I’m not sure why it’s not available on the web unless
the IEEE is sitting on it.
Regarding your comments about software generation, yes, SCOPE
could help with that, but indirectly. I, too, was involved with software reuse
projects at DARPA that used generation techniques, domain-specific software
languages, and a domain analysis method to allow rapid configuration of software
components for a specific system instance. A key issue here is the dynamic
range of operational scope space that a given set of software components is
designed to address, and the way that this is represented in the software interfaces
or configuration parameters. The SCOPE model is more about exploring such scope
spaces in an informed manner before committing your requirements and software
architecture/design and data models to a specific operational scope space and
associated operational application contexts.
Which brings me to your last two questions. The key challenge
for systems of systems is the independence or autonomy of the systems involved,
and the many explicit and implicit system boundary scope decisions that went
into each. And in many pragmatic cases, these systems were developed by
different organizations working under different jurisdictions, business models,
purposes/objectives, and organizational cultures, and they are at different stages in
their overall life-cycle (which leads to different degrees of willingness or
economic motivation for system modification or “evolution” to accommodate the
new interaction). The SCOPE model has dimensions for each of these types of
factors, as well as others. While there are some systems of systems that have a
central “owner” or authority responsible for making them all work together, the
degree of centralization or decentralization of control and resource
provisioning can be quite variable, even within the lifecycle of a single
system of systems, and certainly across some set of systems of systems. So
there is a set of SCOPE dimensions to characterize this centralization or lack
thereof.
Part of the application of SCOPE involves a pre-assessment of a
potential target SoS (or capability, operation, enterprise, product line, etc.)
to determine what the overall nature and scope of the target might be, and to
determine whether additional domain/target-specific scope dimensions might need
to be developed. SCOPE is not a closed model and invites extensions into
additional dimensions and domains. NCOIC has a guidebook and training material
for how to apply SCOPE in specific contexts. NCOIC generally recommends a
workshop approach with key stakeholders represented because of the significant
social and organizational dynamics that discussing scope boundaries entails.
The workshop is facilitated by a trained SCOPE practitioner and structured
using a tailored version of the SCOPE model in questionnaire/spreadsheet form.
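As a rough illustration only (the dimension names and values below are hypothetical placeholders, not the actual SCOPE dimensions, and this is not the NCOIC spreadsheet format), the workshop answers could be captured as simple structured records, something like:

    # Illustrative sketch: capturing scope-assessment answers as data.
    # Dimension names and values are made up, not the SCOPE model's own.
    from dataclasses import dataclass

    @dataclass
    class ScopeAssessment:
        dimension: str            # the scope dimension being assessed
        assumed_value: str        # what the system currently assumes (often implicitly)
        required_value: str       # what the target SoS context requires
        documented: bool = False  # was the assumption ever made explicit anywhere?

    def gaps(assessments):
        """Dimensions where the assumed scope differs from what the target requires."""
        return [a for a in assessments if a.assumed_value != a.required_value]

    rows = [
        ScopeAssessment("governance/control", "single owning organization",
                        "federated, no central authority"),
        ScopeAssessment("jurisdiction", "one national regulatory regime",
                        "multiple jurisdictions", documented=True),
    ]
    for a in gaps(rows):
        print(f"Scope gap in {a.dimension!r}: assumed {a.assumed_value!r}, "
              f"target requires {a.required_value!r}")

The point of such a structure is only that scope assumptions become explicit, comparable entries rather than something buried in prose requirements.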
Hope that’s enough to answer your immediate questions and give
you a feel for what SCOPE is and isn’t – and maybe what it could become or
could be used for in your particular subset of scope space.
Hans
From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Rich Cooper
Sent: Wednesday, October 14, 2015 3:06 PM
To: '[ontolog-forum] ' <ontolog-forum@xxxxxxxxxxxxxxxx>
Subject: Re: [ontolog-forum] A Question About Mathematical Logic
Hans,
I found a reference to your paper
"Multiplicity of Perspectives, Context
Scope, and Context Shifting Events"
Apparently your SCOPE work is related to that, as in systems of
systems. I only saw the (free) abstract, but the specification of systems
in a computer representation language (Is that what the paper is about?) sounds
like the approach I was taking on my reusable software component library.
But the systems were software building blocks, not, as in your context,
hardware, communications, or other aspects of full system
specification.
The specification of a system's requirements, abstracted by its
components' I/O structures, should offer a lot of capability toward automatic
generation of software from reused software components, as well as generation
of pick lists and connections for the hardware components.
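As a rough sketch of what I mean (the names and types below are made up for illustration, not taken from any actual component library), matching components by their I/O structures could be as simple as:

    # Minimal sketch: pair every output port with every input port of a
    # matching datatype to propose candidate component connections.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Port:
        name: str
        datatype: str   # e.g. "TrackReport"

    @dataclass
    class Component:
        name: str
        inputs: tuple
        outputs: tuple

    def candidate_connections(components):
        matches = []
        for src in components:
            for out in src.outputs:
                for dst in components:
                    if dst is src:
                        continue
                    for inp in dst.inputs:
                        if inp.datatype == out.datatype:
                            matches.append((src.name, out.name, dst.name, inp.name))
        return matches

    parts = [
        Component("Sensor", inputs=(), outputs=(Port("tracks", "TrackReport"),)),
        Component("Fuser", inputs=(Port("raw", "TrackReport"),),
                  outputs=(Port("picture", "FusedPicture"),)),
    ]
    print(candidate_connections(parts))   # [('Sensor', 'tracks', 'Fuser', 'raw')]

Real generation would of course need far richer specifications than datatype names, but the pick-list idea is essentially this kind of matching.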
So I like the basic idea (which I admit is just my
interpretation of your work). Could you enlighten us a little more on how
you viewed the complexity of planning for systems of systems?
Also, please describe how the various development activities
leading to the result are coordinated in your planning, given the wide
variability of those activities.
Sincerely,
Rich Cooper,
Chief Technology Officer,
MetaSemantics Corporation
MetaSemantics AT EnglishLogicKernel DOT com
( 9 4 9 ) 5 2 5-5 7 1 2
http://www.EnglishLogicKernel.com
Rich,
I don’t think our experiences have been all that different, but
how we look at them and what we emphasize and take away from them might be.
Sure, requirements can change in mostly unpredictable ways. But some changes
have been fairly easy to predict – and yet the product was designed in a way
that didn’t anticipate these changes. I can give lots of real world examples of
this from my experience, but I’ll generalize here. The cost of onboard digital storage
has dropped and will continue to drop for most applications and devices. The
“connectedness” of devices and the ability to connect with other applications,
regardless of their execution platform, will continue to increase by
significant amounts. The same goes for their owning/sponsoring organizations.
Organizations in general will become more global in their sourcing and their
markets. More of the operational environment will become accessible and
controllable over a network connection, as well as potentially vulnerable to
malicious attack/use.
Are there likely to be exceptions to these trends? Yes, in niche
domains. Are there other general trends that might impact a system or
organization’s requirements? Sure. But that is what going through explicit
analysis of scope and explicit decision making about scope in a dynamic
environment is all about.
Hans
Dear Hans,
You wrote:
I’ll quibble a little with
your first point, in part because your own words later in the second sentence
undercut it as well. The problem with requirements is that they do
change and adding complexity can help anticipate and ease that
change and associated product evolution. Requirements often change because the environment
changes (business, technology, social, etc.). Building only to the
stated requirements almost guarantees future obsolescence and
interoperability problems (although this may be ok depending on the time-scale
and cost of re-engineering), not to mention difficulty in reusing components in
other systems and contexts (which may also be ok – or not, as the case might
be). Sometimes it pays to anticipate future customer requirements, depending on
your product set and business model – and the dynamic nature of the
environment.
Your experiences must be very different from mine. I've
written lots of little programs myself and sold them, but if one person can
write a program, it isn't a very complex program anyway.
My experience working in huge organizations (DoD,
Hughes, TRW) is that yes, change of requirements is a given because yes, the
external constraints change all the time. However, I disagree that
"adding complexity
can help anticipate and ease that change and associated
product evolution"
In my experience, system designs change, with little
ability to predict which changes will be needed before they become
reality. Perhaps in slowly evolving products, such as MS Office, or
Borland Delphi through the current replacement by some other company (I forget
which, but you get the point) an unusually technically knowledgeable management
might be able to predict some things about the future requirements, but my
experience includes several years working on reusable software back in the
80s. What I learned after several years of R&D is that concepts like
reuse are far trickier in practice than in promise. The amount of
predictable component reuse back then was pretty small. Now, with object
oriented architectures better understood, reuse is only a little bit
better.
So, in summary, I think the ability to predict changes
of requirements is very limited, and that predicting future
requirements is a very risky business strategy.
But I am open to deeper arguments you might want to try about
how the structure of software could be further objectized, and how more generally
useful components for event handling (e.g., a top-level event with derived
lower-level event types having more detail) might be built.
Sincerely,
Rich Cooper,
Chief Technology Officer,
MetaSemantics Corporation
MetaSemantics AT EnglishLogicKernel DOT com
( 9 4 9 ) 5 2 5-5 7 1 2
http://www.EnglishLogicKernel.com
Rich,
I’ll quibble a little with your first point, in part because
your own words later in the second sentence undercut it as well. The problem
with requirements is that they do change and adding complexity can help
anticipate and ease that change and associated product evolution. Requirements
often change because the environment changes (business, technology, social,
etc.). Building only to the stated requirements almost guarantees future
obsolescence and interoperability problems (although this may be ok depending
on the time-scale and cost of re-engineering), not to mention difficulty in
reusing components in other systems and contexts (which may also be ok – or not,
as the case might be). Sometimes it pays to anticipate future customer
requirements, depending on your product set and business model – and the
dynamic nature of the environment.
Which brings me back to the scope issue. Narrow scope is
certainly the correct approach in many contexts, just as broader scope may be
appropriate in others. The issue with scope is that too often it is left
vague and implicit rather than made precise and explicit. I don’t think we’ll
ever get to a universal set of scales/dimensions for specifying scope, but we
should at least work towards such a construct, however messy/pragmatic it might
be. The alternative, it seems to me, is this ongoing drive for universality
while ignoring the scope issue – if my solution/ontology is universally
applicable, then I don’t need to worry about specifying scope. Sure it’s great
to have universal truths and principles that one can apply in every possible
situation, but that approach doesn’t scale and has limited applicability. It’s
a bit like trying to derive fluid dynamics from the quantum mechanics of
specific atoms under conditions in which they exhibit fluid behaviors – itself
a scope boundary setting challenge.
Hans
Dear Leo,
You wrote:
One can be coherent
philosophically about your ontological engineering product, and that product
could in fact be very simple, not complex. Or it could be complex. Complexity
is not the issue, per se, but the degree of complexity needed for your
application(s).
Yes, only that degree of complexity which is absolutely needed
to meet the requirements of the software should be tolerated in the design if the
customer/user is to be satisfied. Extra complexity makes it harder to
use, and harder to maintain as things change and the product evolves.
Coherency is good, so long as it fits within the user's 7+/-2
chunks. If adding philosophical compliance also adds complexity, then
the tradeoff loses. But coherence is really a property of the user's view
of the product. Being a mental object, coherence is nearly impossible to
define, other than that we don't want inconsistencies in the product.
But given that background from our two emails, can you generally
define an outcome based on your "coherence" ideas which
actually improves the user experience? If there is such a model of
coherence that is independent of any user, that would be a very significant
outcome. If you can share some vivid experience about a user's view of
coherence, that would help us find a solution to this issue.
Sincerely,
Rich Cooper,
Chief Technology Officer,
MetaSemantics Corporation
MetaSemantics AT EnglishLogicKernel DOT com
( 9 4 9 ) 5 2 5-5 7 1 2
http://www.EnglishLogicKernel.com
One can be coherent philosophically about your ontological
engineering product, and that product could in fact be very simple, not
complex. Or it could be complex. Complexity is not the issue, per se, but the
degree of complexity needed for your application(s). Coherence, however, is
always a virtue. One of the main problems is a priori rejection of
knowledge that might make your product better, of more value. Practicality and
efficiency are of course of value to engineering, and to ontological
engineering. But a bad ontology is like a bad program: however succinct or
“fast” it is, that is beside the point if it is bad.
Thanks,
Leo
I think you make a good point here, Rich. I’ll be interested to
see the other responses to your email. However, this brings back the issue of
defining scope in a way that provides some assurance that the ontology in
question is being applied within that scope (and assuming the ontology is truly
valid throughout the defined scope – which I believe you intended when you said
“in all situations”).
Hans Polzer
Tom,
You wrote:
I have seen several remarks,
by the engineers among us, about ontology and semantics
being irrelevant to the work they do, being irrelevant, as you put it, to
"real engineering problems". But I have also seen the confusion
engineers create when they work with anything other than uncontroversial
ontology fragments, e.g. a company's product hierarchy.
No! You're missing the point about engineering. Philosophical
justifications for ontologies are what I, perhaps among others, disagree with.
But the need for an ontology within a complex software architecture, and
therefore the need for clear, precise semantics for
interpreting that ontology's components, in all situations, is a primary
engineering concern.
It is only the philosophy part, the attempt to link application
ontologies to some overarching totality of existential ontology insisted upon
from that philosophical perspective, that perturbs this engineer, and likely
others. It adds unnecessary complexity to
the architecture of the software, which should be minimized, not
expanded.
Every addition of one more component to an ontology drives its
complexity up an exponential curve. Not a good thing, especially for developing
software. So adding even more components that have only
philosophical justification, and no specific application
justification, is the wrong direction, IMHO.
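As a back-of-the-envelope illustration: with n components there are already n(n-1)/2 possible pairwise interactions to keep consistent, and 2^n possible combinations of components that some application might exercise, so each added component multiplies the space that has to be checked.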
Sincerely,
Rich Cooper,
Chief Technology Officer,
MetaSemantics Corporation
MetaSemantics AT EnglishLogicKernel DOT com
( 9 4 9 ) 5 2 5-5 7 1 2
http://www.EnglishLogicKernel.com
From: Thomas Johnston [mailto:tmj44p@xxxxxxx]
Sent: Tuesday, October 13, 2015 6:05 PM
To: Rich Cooper; '[ontolog-forum] '
Subject: Re: [ontolog-forum] A Question About Mathematical Logic
Like an earlier comment, yours
emphasizes, I believe, the need to discuss (i) the difference between formal
ontology and ontology engineering (which is roughly the difference between
theory and practice), and (ii) the problems that arise when ontology engineers
find themselves having to do ontology, rather than having to just plug
uncontroversial mini-ontologies into some well-defined framework (like Protege)
or into a framework/template toolkit like OWL/RDF. I intend to do this in a new
thread, and soon.
I have seen several remarks, by the
engineers among us, about ontology and semantics being irrelevant to the work
they do, being irrelevant, as you put it, to "real engineering
problems". But I have also seen the confusion engineers create when they
work with anything other than uncontroversial ontology fragments, e.g. a
company's product hierarchy.
As an ontologist, and a person
somewhat familiar with systems of logic, I nonetheless appreciate the
importance of getting ontologies into frameworks. That, in my opinion, is what
puts the semantics in the Semantic Web -- it gives automated systems, doing
cross-database queries, the ability to understand cross-database semantics.
(Pat Hayes to correct me, please, if I'm off course here.)
An example I have come across in
every one of two dozen enterprises I have worked for is the question:
"What is a customer?", where that question, more fully, means
"What does your enterprise take a customer of yours to be?" I have
never found subject matter experts who have been able to answer that question,
without a good deal of help from me. And the help I provide is help in doing
ontology clarification work, not help in plugging lexical items representing
ontological categories into an ontology tool. Moreover, I have never found two
enterprises whose experts defined "customer" in exactly the same way.
From which it follows that a
cross-database query that assumes that two tables named "Customer
Table", in two different enterprise's databases, are both about customers,
is almost certain to be mistaken. Both tables may be about fruit, but there is
certain to be an apples and oranges issue there.
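A toy illustration of the point (the data is made up): suppose enterprise A counts anyone who ever placed an order as a customer, while enterprise B counts only parties holding an active contract.

    # Two "Customer" tables built from incompatible definitions (made-up data).
    enterprise_a = {"acme", "globex", "initech"}   # anyone who ever ordered
    enterprise_b = {"globex", "umbrella"}          # active contract holders only

    print(enterprise_a | enterprise_b)   # a naive merge mixes two different concepts
    print(enterprise_a & enterprise_b)   # {'globex'}: the only party both definitions cover

A cross-database query that unions or joins the two tables as if they answered the same question silently mixes those two concepts.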
A formal ontology which includes
customers, on the other hand, might be able to distinguish apples from oranges
if it could access an ontology framework about customers. Given that the
concepts have been correctly and extensively enough clarified, here is where
the ontology engineer proves his worth.
But to define the category Customer
clearly enough, it isn't engineering work that needs to be done. It's the far
more difficult (in my opinion) ontology clarification work that needs to be
done. (I expand on this example in the section "On Using Ontologies",
pp. 73-74 in my book Bitemporal Data: Theory and Practice. I think I also elaborated
on it a few weeks or months ago, here at Ontolog.)
So I think that engineers who suggest
that clarifying ontological categories is irrelevant to their work as ontology
engineers are mistaken. Such work seems mistaken to them, I think, because most
of the ontologies they put into their well-defined frameworks are relatively
trivial, i.e. are ontologies that subject matter experts have no trouble
agreeing on. The lower-level the ontologies we engineer, the more that will
tend to be the case.
But ascend into mid-level or
upper-level ontologies, and ontology engineers get lost, and don't know how to
find a clear path through the forest whose trees are those categories. And so
instead of admitting "We're lost", they say instead "We strayed
into a swamp that has nothing to do with the real engineering work we do --
which turns out to be the relatively straightforward work of plugging labels
for uncontroversial ontological categories, and taxonomies thereof, into
Protege or its like".
I say, on the contrary, that
conceptual clarification work in mid- and upper-level ontologies has
everything to do with ontology engineering, and is where the really difficult
work of that engineering is done. An analogy: machine-tooling parts is the hard
work of manufacturing; assembling those parts is the easy work.
And my apologies to Leo, Pat and
others whose comments on my question I have not yet responded to. I will, and
soon. And I thank them and all other respondents for helping me think through
the question I raised.
Although the approach you are
suggesting might entertain some philosophical questions, and therefore be
entertaining to philosophers, it has little or no relevance to real engineering
problems, which are almost never applied to the actual universe of every
possible entity, i.e., infinite supplies.
In engineering applications, Ex(...)
would normally apply only to finite-sized, or traversably infinite-sized,
problems. Hence the importance of scope in engineering, i.e., where you draw
the lines around what is a system, which contains all the entities, enumerators
of variables, constants, and functions in real problems.
Even unbounded engineering problems
have limits to the possible types that can be used, though mechanisms like
stacks, or even Turing machines with infinite supplies of tape squares, attempt to
approximate boundless sizes.
So I suggest your title should be A
Question About Mathematical Logic, since engineers who consider themselves
logic designers would find the ideas impractical, though linguists might be
more interested.
Chief Technology Officer,
MetaSemantics Corporation
MetaSemantics AT EnglishLogicKernel DOT com
On Tuesday, October 13, 2015 11:57
AM, Thomas Johnston <tmj44p@xxxxxxx>
wrote:
My intuitions tell me that anyone who
asserts "All dogs are renates" believes that there are dogs (i.e. is
ontologically committed to the existence of dogs) just as much as someone who
asserts "Some dogs are friendly".
Suppose someone else asserts,
instead, that "No dogs are renates". Certainly, to do that, that
person must believe that there are such things as dogs and, in addition,
believe that some of them are not renates (a false belief, of course).
Now for "Some dogs are
friendly", and also "Some dogs are not friendly". In both cases,
we all seem to agree, someone making those assertions believes that there are
dogs.
Now I'm quite happy about all this.
If I make a Gricean-rule serious assertion by using either the "All"
quantification or the "Some" quantification, I'm talking about
whatever is the subject term in those quantifications – dogs in this case. I'm
particularly happy that negation, as it appears in the deMorgan translations
between "All" statements and "Some" statements, doesn't
claim that a pair of statements are semantically equivalent, in which one of
the pair expresses a belief that dogs exist but the other does not.
But in the standard interpretation of
predicate logic, that is the interpretation. In the standard interpretation,
negating a statement creates or removes the expression of a belief that
something exists. My beliefs in what exists can't be changed by the use of the
negation operator. Apparently, John's beliefs can, and so too for everyone else
who feels comfortable with predicate logic as a formalization of commonsense
reasoning, and with the interpretation of one of its operators as "There
exists ....".
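To spell out the standard readings at issue, in the notation I use below: Ux(Dx --> Rx) is equivalent to ~Ex(Dx & ~Rx), and ~Ux(Dx --> Rx) is equivalent to Ex(Dx & ~Rx). In a domain with no dogs the universal is vacuously true, while its negation asserts that a dog exists; that is the precise sense in which applying or removing negation appears to create or remove existential commitment.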
I usually don't like getting into tit
for tats. Those kinds of discussions always are about trees, and take attention
away from the forest. But I'll make exceptions when I think it's worth taking
that risk (as I did in my response to Ed last night).
From John Sowa's Oct 12th response:
TJ
> why, in the formalization of predicate logic, was it decided
> that "Some X" would carry ontological commitment

Nobody made that decision. It's a fact of perception. Every
observation can always be described with just two operators:
existential quantifier and conjunction. No other operators can
be observed. They can only be inferred.
(1) If all ontological commitments
have to be based on direct observation, then we're right back to the Vienna
Circle and A. J. Ayer.
(2) And what is it that we directly
observe? A dog in front of me? Dogs, as Quine once pointed out, are ontological
posits on a par with the Greek gods, or with disease-causing demons. (I am
aware that this point, in particular, will likely serve to reinforce the
belief, on the part of many engineering types in this forum, that philosophy
has nothing to do with ontology engineering. That's something I want to discuss
in a "contextualizing discussion" I want to have before I pester the
members of this forum with questions and hypotheses about cognitive/diachronic
semantics. What does talk like that have to do with building real-world
ontologies in ontology tools, in OWL/RDF – ontologies that actually do
something useful in the world?)
(3) I wouldn't talk about some dogs
unless I believed that some dogs exist. And if some dogs exist, then all dogs
do, too. Either there are dogs, or there aren't. If there are, then I can talk
about some of them, or about all of them. If there aren't, then unless I am
explicitly talking about non-existent things, I can't talk about some of them
nor can I talk about all of them, for the simple reason that none of them
exist. To repeat myself: if any of them exist, then all of them do.
(4) And I am, of course, completely
aware that trained logicians since Frege have been using predicate logic, and
that, at least since deMorgan, have been imputing to negation the power to
create and remove ontological commitment.
(5) Here's a quote from Paul Vincent
Spade (very important
guy in medieval logic and semantics):
"This doctrine of “existential
import” has taken a lot of silly abuse in the twentieth century. As you may
know, the modern reading of universal affirmatives construes them as quantified
material conditionals. Thus ‘Every S is P’ becomes (x)(Sx ⊃ Px), and is true, not false, if there are
no S’s. Hence (x)(Sx ⊃ Px) does not
imply (∃x)(Sx). And that
is somehow supposed to show the failure of existential import. But it doesn’t
show anything of the sort .... "
So Spade approaches this as the issue
of the existential import of universally quantified statements. He points out
that, from Ux(Dx --> Rx), we cannot infer Ex(Dx & Rx). The rest of the
passage attempts to explain why. I still either don't understand his argument,
or I'm not convinced by it. Why should "All dogs are renates" not be
expressed as Ux(Dx & Rx)?
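(The usual textbook answer, for what it is worth, is that Ux(Dx & Rx) entails Ux(Dx), i.e., that everything whatsoever is a dog, which is why the conditional reading is preferred.)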
From John's reply, I think he would
say that it's because we can only observe particular things; we can't observe
all things. But in the preceding points, I've tried to say why I don't find
that convincing.
(6) Simply the fact that decades of
logicians have not raised the concerns I have raised strongly suggests that I
am mistaken, and need to think more clearly about logic and ontological
commitment. But there is something that might make one hesitate to jump right
to that conclusion. It's Kripke's position on analytic a posteriori statements
(which I have difficulty distinguishing from Kant's synthetic a priori
statements, actually -- providing we assume that the metaphors of
"analytic" as finding that one thing is "contained in"
another thing, and of "synthetic" as bringing together two things
first experienced as distinct, are just metaphors, and don't work as solid
explanations).
All analytic statements are
"All" statements, not "Some" statements. Kripke suggests
that the statement "Water is H2O" is analytic but a posteriori. In
general, that "natural kind" statements are all of this sort. Well, a
posteriori statements are ones verified by experience, and so that would take
care of John's Peircean point that only "Some" statements are
grounded in what we experience.
I don't know how solid this line of
thought is. But if there is something to it, that might suggest that if we
accept Kripke's whole referential semantics / rigid designator / natural kinds
ideas (cf. Putnam's twin earth thought experiment also), then perhaps we should
rethink the traditional metalogical interpretation of "All dogs are renates"
as Ux(Dx --> Rx), and consider, instead, Ux(Dx & Rx).
Well, two summing-up points. The
first is that Paul Vincent Spade thinks that my position is "silly",
and John Sowa thinks that it's at least wrong. The second is that such
discussions do indeed take us beyond the concerns of ontology engineers, who
just want to get on with building working ontologies.
As I said above, I will address those
concerns of ontology engineers before I begin discussing cognitive semantics in
this Ontolog (Ontology + Logic) forum.
Tom, Ed, Leo, Paul, Henson,
TJ
> why, in the formalization of predicate logic, was it decided
> that "Some X" would carry ontological commitment
Nobody made that decision. It's a fact of perception. Every
observation can always be described with just two operators:
existential quantifier and conjunction. No other operators can
be observed. They can only be inferred.
EJB
> I was taught formal logic as a mathematical discipline, not
> a philosophical discipline. I do not believe that mathematics
> has any interest in ontological commitment.
That's true. And most of the people who developed formal logic
in the 20th c were mathematicians. They didn't worry about
the source or reliability of the starting axioms.
Leo
> most ontologists of the realist persuasion will argue that there
> are no negated/negative ontological things.
Whatever their persuasion, nobody can observe a negation. It's
always an inference or an assumption.
PT
> on the inadequacy of mathematical logic for reasoning about
> the real world, see Veatch, "Intentional Logic: a logic based on
> philosophical realism".
Many different logics can be and have been formalized for various
purposes. They may have different ontological commitments built in,
but the distinction of what is observed or inferred is critical.
HG
> I keep wondering if this forum has anything useful to offer the
> science and engineering community.
C. S. Peirce was deeply involved in experimental physics and
engineering. He was also employed as an associate editor of the
_Century Dictionary_, for which he wrote, revised, or edited over
16,000 definitions. My comments below are based on CSP's writings:
1. Any sensory perception is evidence that something exists;
   a simultaneous perception of something A and something B
   is evidence for (Ex)(Ey)(A(x) & B(y)).

2. Evidence for other operators must *always* be an inference:

   (a) Failure to observe P(x) does not mean there is no P.
       Example: "There is no hippopotamus in this room" can only
       be inferred iff you have failed to observe a hippo and know
       that it is big enough that you would certainly have noticed
       one if it were present.

   (b) (p or q) cannot be directly observed. But you might infer
       that a particular observation (e.g. "the room is lighted")
       could be the result of two or more sources.

   (c) (p implies q) cannot be observed, as Hume discussed at length.

   (d) A universal quantifier can never be observed. No matter how
       many examples of P(x) you see, you can never know that you've
       seen them all (unless you have other information that
       guarantees you have seen them all).
TJ
> But now notice something: negation creates and removes ontological
> commitment. And this seems really strange. Why should negation do this?
The commitment is derived from the same background knowledge that
enabled you to assert (or prevented you from asserting) the negation.
> I'd also like to know if there are formal logics which do not
> impute this extravagant power of ontological commitment /
> de-commitment to the negation operator in predicate logics.
Most formal logicians don't think about these issues -- for the
simple reason that most of them are mathematicians. They don't
think about observation and evidence.
CSP realized the problematical issues with negation, but he also
knew that he needed to assume at least one additional operator.
And negation was the simplest of the lot. Those are the three
he assumed for his existential graphs. (But he later added
metalanguage, modality, and three values -- T, F, and Unknown.)
John
PS: The example "There is no hippopotamus in this room" came
from
a remark by Bertrand Russell that he couldn't convince Wittgenstein
that there was no hippopotamus in the room. Russell didn't go
into any detail, but I suspect that Ludwig W. was trying to
explain the point that a negation cannot be observed.