Yes, it would seem that using ontology to achieve interoperability
isn't a promising approach, given what we have discussed at such length
in the past.
Instead, let's back up a notch and rethink it. What we want
is interoperability. How can we get it, other than by the usual
make-the-systems-interoperable approach we've been trying with so little success?
Sincerely,
Rich Cooper,
Chief Technology Officer,
MetaSemantics Corporation
MetaSemantics AT EnglishLogicKernel DOT com
( 9 4 9 ) 5 2 5-5 7 1 2
http://www.EnglishLogicKernel.com
From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx
[mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Obrst, Leo
J.
Sent: Thursday, October 15, 2015 1:44 PM
To: [ontolog-forum]
Subject: Re: [ontolog-forum] A Question About Mathematical Logic
Unfortunately, as you and others have pointed out, it is not
just government, but business/commerce as usual. For advanced capitalism, there
are still many feudal aspects carried forward. Rants about socialism aside,
there are still systemic errors in the way both commercial business and
government business gets done.
We try to address technical impediments to interoperability,
both data and system in nature, by advocating higher-level semantic
interoperability via use of ontologies, which can indeed help solve those issues.
But underneath are the other aspects of interoperability, syntactic,
structural, programmatic, and of course, the hardest problem, sociological
issues, i.e., people and their organizations and those organizations’ behaviors
and procedures. The cortex still has the reptilian brain underneath: both a
blessing and a curse. And we monkey people are still dancing on the savanna,
glad to have come down from the trees and picked up weapons.
Thanks,
Leo
From:
ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx]
On Behalf Of Rich Cooper
Sent: Thursday, October 15, 2015 3:40 PM
To: '[ontolog-forum] ' <ontolog-forum@xxxxxxxxxxxxxxxx>
Subject: Re: [ontolog-forum] A Question About Mathematical Logic
Dear Hans,
You wrote:
Given all that, the approach
I generally recommend is to increase scope during early concept development and
architectural framework design, but shrink scope during actual
development/implementation to the immediate needs of the program, but
in such a way that the larger scope is kept in mind so as not to make design
and implementation decisions that preclude or needlessly complicate the
implementation of the larger scope later on in program evolution.
You wrote further:
The Internet mantra version of
this is “Think big, start small, scale fast”, although the scale fast part is a
bit problematic from a government business model perspective – i.e., there is
no source of venture capital or growing product revenue stream that can be fed
back into a program – except in the case of rotating capital fund entities (I was
involved in one such effort in DoD and it was very successful precisely because
it had this ability to fund itself rather than relying strictly on budgeted
funds).
That is indeed at least one of the crucial missing ingredients
needed to run gov't contractors in the internet mode of efficiency and
aggressive investment. But also, the contractors get only a tiny percentage of the
system cost as their final profit, and the gov't is terrifically motivated to CYA
by blaming any missing features on the contractors, which forces contractors
to keep exhaustive documentation to defend themselves in court.
John Sowa once suggested that the gov't be given some profit-making
functions. If that could be done, it might provide an incentive
for some gov't contractors to take risks, but they would also have to get
commensurate rewards to make those risks worth taking.
I don't have much confidence in a profit-making gov't per se,
though, given how badly the supposedly self-funding Patent and Trademark Office
has been robbed of its fees and undermined by Congress, to the point of causing
long, commercially unacceptable delays in patent processing. Given just
that one example, it hasn't worked. But there are semi-commercial gov't
entities, like the TVA, which at one time at least were profit-making.
Sincerely,
Rich Cooper,
Chief Technology Officer,
MetaSemantics Corporation
MetaSemantics AT EnglishLogicKernel DOT com
( 9 4 9 ) 5 2 5-5 7 1 2
http://www.EnglishLogicKernel.com
Leo, Rich:
There are valid reasons to take both positions, depending on the
business model context. There are reasons I am often called the SCOPE creep –
adding scope and anticipating change does increase complexity, schedule, and,
of course, cost. The issue is that in certain business contexts
(government system development contracts being a prime example), there is essentially
no way for the developers of systems "B", "C", etc., who might benefit from the
initial investment made on system "A", to compensate those who made that
investment, an investment which is not strictly needed to accomplish
system "A"'s own mission. There have been several industry association study
reports to DoD on this point, but the acquisition laws and congressional
politics/sociology make it very difficult to pragmatically overcome this business
model problem. Essentially the problem is that the government is not structured
to do business with itself on a bottom up basis (rotating capital accounts are
the exception – and generally frowned upon by congress). It all has to be done
top down through the programming and budgeting process, i.e., central planning,
and there is an inherent bias against making programs too broad in scope (for
good reason).
Given all that, the approach I generally recommend is to
increase scope during early concept development and architectural framework
design, but shrink scope during actual development/implementation to the
immediate needs of the program, but in such a way that the larger scope is kept
in mind so as not to make design and implementation decisions that preclude or
needlessly complicate the implementation of the larger scope later on in
program evolution. The Internet mantra version of this is “Think big, start
small, scale fast”, although the scale fast part is a bit problematic from a
government business model perspective – i.e., there is no source of venture
capital or growing product revenue stream that can be fed back into a program –
except in the case of rotating capital fund entities (I was involved in one
such effort in DoD and it was very successful precisely because it had this
ability to fund itself rather than relying strictly on budgeted funds).
Hans
Sorry, Rich, this is a method to perpetuate silos, not to
address, at least partially, interoperability issues with every project. In the
DoD (and I worked at Boeing at one time), the typical contractor problem (or
one of those) is that you don’t generalize or consider interoperability issues
on any given budget/funding. You blindly follow today's requirements and provide a
system that approximates them. Then you do it again, and again, and again.
It's a way to keep getting funded, because everyone at some point needs your
stuff. So you create silo after silo after silo. Then at some point another
program comes up that focuses on interoperability, and it pays to untangle the
vast interoperability problems created by a hundred silos. And guess what: some of
the same companies that created the silos now get to correct their own lack of
interoperability, which is by then much more complex, much more
costly, and typically fails.
Engineering with blinders and halters is not good engineering.
Thanks,
Leo
Dear Tom,
Your emails are showing in a very, very small text font, so I
copied and repasted your text in a larger font to make it more readable. You wrote:
But if later on, someone
wants to create a unified ontology of income-producing instruments for a
vertical industry group, then each company's ontology fragment has to be
integrated/bridged to the others.
The financial issue is who is going to pay for that later-on
work. That is, by definition, a change in the requirements. It is
that change which I was speaking about in the earlier emails. I
understand and agree that situations like the one you described are
common. But the original development should not be burdened with that
requirement, which inflates costs and schedules and sacrifices the functionality
and performance of the software system. I don't believe in the practice of "paying
ahead" by adding the extra cost, schedule, complexity, computing
load, storage, and representations needed to support extra functions that
just MIGHT be needed in the future but aren't needed for the first
development.
I have no argument with using philosophical concepts (even the
Xdurants, which nobody but philosophers cares about), so long as they bring an
immediate benefit, not merely a possible, forecast, yet-to-be-considered
future requirement.
Could you come up with a scenario showing why such a concept is useful in
the FIRST system, before the requirement actually does change, and justify it
by that benefit alone, not by a possible future benefit which may never
arrive?
Can you show better functionality, or better completeness,
or some such measure of effectiveness (MOE) on the FIRST system for adding that
stuff?
Sincerely,
Rich Cooper,
Chief Technology Officer,
MetaSemantics Corporation
MetaSemantics AT EnglishLogicKernel DOT com
( 9 4 9 ) 5 2 5-5 7 1 2
http://www.EnglishLogicKernel.com
From: Thomas Johnston [mailto:tmj44p@xxxxxxx]
Sent: Thursday, October 15, 2015 9:29 AM
To: Rich Cooper; '[ontolog-forum] '
Subject: Re: [ontolog-forum] A Question About Mathematical Logic
I don't think I have misunderstood the
engineers in this thread, and I haven't seen any comments yet that persuade me
I have.
Your "Philosophy", if it's in
reference to my comments, is formal ontology, especially upper-level ontology.
I do agree that lower-level ontology fragments -- product hierarchies is an
example I often use -- can stand alone, without any "philosophical justification".
But if later on, someone wants to create a unified ontology of income-producing
instruments for a vertical industry group, then each company's ontology
fragment has to be integrated/bridged to the others. Suppose company X doesn't
distinguish between products (e.g. tile for a bathroom) and services (the installation
of those tiles in a customer's home); X has an ontology which includes ":installed
bathroom flooring", but company Y, on the other hand, has no such item in
its inventory. For Y, floor tiling is a product, and installation is a service
that uses that product.
The "philosophical" question
here is whether the mid-level ontology (for the vertical industry group) should
include "installed product" as an ontological category. Whether or
not it does, X and the other companies who use "installed product" as
a category, or Y and the other companies who do not, will have to do a lot of
database re-engineering in order to conform to the industry group ontology. "Doing
philosophy", in the sense of establishing ontologies, gets right down to
nitty-gritty software engineering work for the engineers who thought that
high-flown upper-level ontology work was irrelevant to the practical work that
engineers do.
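To make that re-engineering burden concrete, here is a minimal sketch in Python with rdflib, assuming wholly hypothetical namespaces and class names for companies X and Y and for the industry group's mid-level ontology; none of these identifiers come from any actual ontology.

    from rdflib import Graph, Namespace, RDF, RDFS, OWL

    X = Namespace("http://example.com/companyX#")        # hypothetical
    Y = Namespace("http://example.com/companyY#")        # hypothetical
    MID = Namespace("http://example.com/industry-mid#")  # hypothetical mid-level ontology

    g = Graph()

    # Suppose the mid-level ontology does include "installed product" as a category.
    g.add((MID.Product, RDF.type, OWL.Class))
    g.add((MID.Service, RDF.type, OWL.Class))
    g.add((MID.InstalledProduct, RDF.type, OWL.Class))

    # Company X sells ":installed bathroom flooring" as a single offering,
    # so its class maps directly onto the new category.
    g.add((X.InstalledBathroomFlooring, RDFS.subClassOf, MID.InstalledProduct))

    # Company Y has no such item: tile is a product, installation is a service.
    # Bridging Y means relating a combination of its classes to the category,
    # which is exactly the database re-engineering described above.
    g.add((Y.FloorTile, RDFS.subClassOf, MID.Product))
    g.add((Y.TileInstallation, RDFS.subClassOf, MID.Service))

    print(g.serialize(format="turtle"))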
Later on, even upper-level ontologies
can become relevant to such nitty-gritty work as re-engineering databases. In scientific
databases, it matters whether space and time are represented as discrete or
continuous. With respect to my own work, I have suggested that distinguishing
three temporal dimensions in databases enables us to record and retrieve
important information that the ISO-standard two temporal dimensions cannot. And
I would emphasize that introducing a third temporal dimension was not just a
matter of adjusting software to manage temporal triples instead of temporal
pairs. If that's all it were, then a software engineer might think that every
date/time piece of metadata for the rows of a database table would constitute a
new temporal "dimension"; and that outlook has lighted many fools the
way to dusty software death.
Instead, a third temporal dimension is
introduced based on the "philosophical" distinction between (i) inscriptions
of declarative sentences (rows in tables); (ii) the statement that multiple
copies of the same row are inscriptions of; (iii) the propositions that
synonymous statements are expressions of; and (iv) the propositional attitudes
that users of databases express when they update databases, and presuppose when
they query those databases. It is also based on an extension of Aristotle's
basic ontology, an extension which I describe in Chapter 5 of my oft-alluded-to
book.
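As a rough sketch of what a third temporal dimension means for a table design (the field names here are illustrative, not the schema from the book, and the third dimension is treated as a simple timestamp for brevity):

    from dataclasses import dataclass
    from datetime import date, datetime

    @dataclass
    class Period:
        """A closed-open time period [begin, end)."""
        begin: date
        end: date

    @dataclass
    class TritemporalRow:
        # The inscription: one physical row carrying a statement.
        customer_id: int
        status: str
        # Dimension 1: when the statement is true of the modeled world.
        valid_time: Period
        # Dimension 2: when the database asserts the statement.
        assertion_time: Period
        # Dimension 3: when the speech act of asserting it took place
        # (the additional dimension argued for in the text).
        speech_act_time: datetime

    row = TritemporalRow(
        customer_id=42,
        status="active",
        valid_time=Period(date(2015, 1, 1), date(9999, 12, 31)),
        assertion_time=Period(date(2015, 10, 15), date(9999, 12, 31)),
        speech_act_time=datetime(2015, 10, 15, 9, 29),
    )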
This is doing philosophy, in anyone's
book. It is what led me to recognize the existence of a new ontological category
-- the temporal dimension I called "speech-act time" in my book.
Ontology engineers who plug in lexical items for concepts in non-controversial
fragments of ontologies, don't have to do ontology. In that, I agree with you.
But once those engineers are tasked with extending their constructs beyond the
non-controversial scope of those constructs, they get in trouble (cf my "customer"
example, also my "installed product" and third temporal dimension
discussed in this comment). They get in trouble because they are then
confronted with the need to do ontology; they enter an arena in which they will
be forced to "do philosophy" -- which is something that they feel in
their guts (and have often expressed in this forum) is irrelevant to the "REAL"
work that engineers do.
I don't expect to convince you. But I
do believe it's worth trying to say, as clearly as I can, what my understanding
of the issue of doing formal ontology vs. doing ontology engineering is. My understanding
is that both are important, that, to paraphrase Kant, "ontology without
engineering is empty; engineering without ontology is blind".
I have seen several remarks, by the engineers
among us, about ontology and semantics being irrelevant to
the work they do, being irrelevant, as you put it, to "real engineering problems".
But I have also seen the confusion engineers create when they work with
anything other than uncontroversial ontology fragments, e.g. a company's
product hierarchy.
No! You're missing the point about
engineering. Philosophical justifications for ontologies are
what I, perhaps among others, disagree with. But the need for an
ontology within a complex software architecture, and therefore the need for
clear precise semantics for interpreting that ontology's components, in
all situations, is a primary engineering concern.
It is only the philosophy part, the attempt
to link application ontologies to some overarching totality of existential
ontology insisted upon from that philosophical perspective, that perturbs this
engineer, and likely others. It adds unnecessary complexity to the
architecture of the software, which should be minimized, not expanded.
Every addition of one more component to
an ontology drives its complexity up in an exponential curve. Not a good
thing for developing software especially. So adding even more components
having only philosophical justification and not specifically application
justification is the wrong direction, IMHO.
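A back-of-the-envelope illustration of that growth, under the assumption that the cost driver is the number of inter-component relationships or combinations that have to be considered (the exact growth law for any given ontology is, of course, an assumption here):

    from math import comb

    # Pairwise interactions among n ontology components grow quadratically;
    # the number of possible subsets of components grows exponentially.
    for n in (10, 20, 40):
        pairwise = comb(n, 2)   # n*(n-1)/2 potential pairwise constraints
        subsets = 2 ** n        # possible combinations of components
        print(f"n={n:2d}  pairwise={pairwise:4d}  subsets=2^{n}={subsets}")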
Chief Technology Officer,
MetaSemantics Corporation
MetaSemantics AT EnglishLogicKernel DOT com
From: Thomas Johnston
[mailto:tmj44p@xxxxxxx]
Sent: Tuesday, October 13, 2015 6:05 PM
To: Rich Cooper; '[ontolog-forum] '
Subject: Re: [ontolog-forum] A Question About Mathematical Logic
Like an earlier comment, yours emphasizes,
I believe, the need to discuss (i) the difference between formal ontology and
ontology engineering (which is roughly the difference between theory and
practice), and (ii) the problems that arise when ontology engineers find
themselves having to do ontology, rather than just plugging uncontroversial
mini-ontologies into some well-defined framework (like Protege) or into a
framework/template toolkit like OWL/RDF. I intend to do this in a new thread,
and soon.
I have seen several remarks, by the engineers
among us, about ontology and semantics being irrelevant to the work they do,
being irrelevant, as you put it, to "real engineering problems". But
I have also seen the confusion engineers create when they work with anything
other than uncontroversial ontology fragments, e.g. a company's product
hierarchy.
As an ontologist, and a person somewhat
familiar with systems of logic, I nonetheless appreciate the importance of
getting ontologies into frameworks. That, in my opinion, is what puts the
semantics in the Semantic Web -- it gives automated systems, doing cross-database
queries, the ability to understand cross-database semantics. (Pat Hayes to
correct me, please, if I'm off course here.)
An example I have come across in every
one of the two dozen enterprises I have worked for is the question: "What is
a customer?", where that question, more fully, means "What does your
enterprise take a customer of yours to be?" I have never found subject
matter experts who have been able to answer that question, without a good deal
of help from me. And the help I provide is help in doing ontology clarification
work, not help in plugging lexical items representing ontological categories
into an ontology tool. Moreover, I have never found two enterprises whose
experts defined "customer" in exactly the same way.
From which it follows that a cross-database
query that assumes that two tables named "Customer Table", in two
different enterprises' databases, are both about customers, is almost certain
to be mistaken. Both tables may be about fruit, but there is certain to be an
apples and oranges issue there.
A formal ontology which includes customers,
on the other hand, might be able to distinguish apples from oranges if it could
access an ontology framework about customers. Given that the concepts have been
correctly and extensively enough clarified, this is where the ontology engineer
proves his worth.
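A minimal sketch of what recording that clarification might look like, again in Python with rdflib and wholly hypothetical namespaces and class names; the point is that the engineer records how each enterprise's "Customer" relates to the clarified framework categories, and deliberately does not assert that the two classes are equivalent.

    from rdflib import Graph, Namespace, RDF, RDFS, OWL

    A = Namespace("http://example.com/enterpriseA#")   # hypothetical
    B = Namespace("http://example.com/enterpriseB#")   # hypothetical
    FW = Namespace("http://example.com/framework#")    # hypothetical shared framework

    g = Graph()

    # The clarification work: the framework distinguishes, say, parties who
    # have merely placed an order from parties with an active service contract.
    g.add((FW.OrderingParty, RDF.type, OWL.Class))
    g.add((FW.ContractedParty, RDF.type, OWL.Class))

    # Enterprise A counts anyone who ever placed an order as a "customer".
    g.add((A.Customer, RDFS.subClassOf, FW.OrderingParty))

    # Enterprise B counts only parties with an active contract.
    g.add((B.Customer, RDFS.subClassOf, FW.ContractedParty))

    # Deliberately NOT asserted: owl:equivalentClass between A.Customer and
    # B.Customer. A cross-database query now targets the framework category
    # it actually means, instead of assuming the two "Customer Table"s agree.
    print(g.serialize(format="turtle"))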
But to define the category Customer clearly
enough, it isn't engineering work that needs to be done. It's the far more
difficult (in my opinion) ontology clarification work that needs to be done. (I
expand on this example in the section "On Using Ontologies", pp.
73-74 in my book Bitemporal Data: Theory and Practice. I think I also elaborated
on it a few weeks or months ago, here at Ontolog.)
So I think that engineers who suggest
that clarifying ontological categories is irrelevant to their work as ontology engineers,
are mistaken. Such work seems mistaken to them, I think, because most of the
ontologies they put into their well-defined frameworks are relatively trivial,
i.e. are ontologies that subject matter experts have no trouble agreeing on.
The lower-level the ontologies we engineer, the more that will tend to be the
case.
But ascend into mid-level or upper-level
ontologies, and ontology engineers get lost, and don't know how to find a clear
path through the forest whose trees are those categories. And so instead of
admitting "We're lost", they say instead "We strayed into a
swamp that has nothing to do with the real engineering work we do -- which
turns out to be the relatively straightforward work of plugging labels for
uncontroversial ontological categories, and taxonomies thereof, into Protege or
its like".
I say, on the contrary, that conceptual
clarification work in mid- and upper-level ontologies has everything to do
with ontology engineering, and is where the really difficult work of that
engineering is done. An analogy: machine-tooling parts is the hard work of
manufacturing; assembling those parts is the easy work.
And my apologies to Leo, Pat and others
whose comments on my question I have not yet responded to. I will, and soon.
And I thank them and all other respondents for helping me think through the
question I raised.
Although the approach you are suggesting
might entertain some philosophical questions, and therefore be entertaining to
philosophers, it has little or no relevance to real engineering problems, which
almost never involve the actual universe of every possible entity, i.e.,
infinite supplies of entities.
In engineering applications, Ex(...) would
normally apply only to finite-sized, or at most traversably infinite, problems.
That is the importance of scope in engineering, i.e., where you draw the lines
around what counts as the system, which contains all the entities, enumerators
of variables, constants, and functions in real problems.
Even unbounded engineering problems have
limits to the possible types that can be used, though mechanisms like stacks,
or even Turing machines with an infinite supply of tape squares, attempt to
approximate boundless sizes.
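For what it's worth, here is a minimal Python sketch of quantification over a finite, enumerated scope, with an illustrative toy domain; the system boundary is simply the collection the code iterates over.

    # Quantifiers over a finite scope: the "system" is the collection we enumerate.
    dogs = [
        {"name": "Rex",  "friendly": True,  "renate": True},
        {"name": "Fido", "friendly": False, "renate": True},
    ]

    some_dogs_friendly = any(d["friendly"] for d in dogs)  # Ex over this scope
    all_dogs_renates = all(d["renate"] for d in dogs)      # Ux over this scope
    print(some_dogs_friendly, all_dogs_renates)            # True True

    # Over an empty scope, any(...) is False and all(...) is vacuously True,
    # which mirrors the existential-import issue discussed later in the thread.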
So I suggest your title should be "A Question
About Mathematical Logic", since engineers who consider themselves logic designers
would find the ideas impractical, though linguists might be more interested.
Chief Technology Officer,
MetaSemantics Corporation
MetaSemantics AT EnglishLogicKernel DOT com
Suppose someone else asserts, instead,
that "No dogs are renates". Certainly, to do that, that person must
believe that there are such things as dogs and, in addition, believe that none
of them are renates (a false belief, of course).
On Tuesday, October 13, 2015 11:57 AM,
Thomas Johnston <tmj44p@xxxxxxx> wrote:
My intuitions tell me that anyone who
asserts "All dogs are renates" believes that there are dogs (i.e. is
ontologically committed to the existence of dogs) just as much as someone who
asserts "Some dogs are friendly".
Suppose someone else asserts, instead,
that "No dogs are renates". Certainly, to do that, that person must
believe that there are such things as dogs and, in addition, believe that some
of them are not renates (a false belief, of course).
Now for "Some dogs are friendly",
and also "Some dogs are not friendly". In both cases, we all seem to
agree, someone making those assertions believes that there are dogs.
Now I'm quite happy about all this. If
I make a Gricean-rule serious assertion by using either the "All" quantification
or the "Some" quantification, I'm talking about whatever is the
subject term in those quantifications – dogs in this case. I'm particularly
happy that negation, as it appears in the deMorgan's translations between "All"
statements and "Some" statements, doesn't claim that a pair of
statements are semantically equivalent, in which one of the pair expresses a
belief that dogs exist but the other does not.
But in the standard interpretation of
predicate logic, that is the interpretation. In the standard interpretation, negating
a statement creates or removes the expression of a belief that something
exists. My beliefs in what exist can't be changed by the use of the negation
operator. Apparently, John's beliefs can, and so too for everyone else who
feels comfortable with predicate logic as a formalization of commonsense reasoning,
and with the interpretation of one of its operators as "There exists
....".
I usually don't like getting into tit-for-tat
exchanges. Those kinds of discussions are always about trees, and take attention
away from the forest. But I'll make exceptions when I think it's worth taking that
risk (as I did in my response to Ed last night).
From John Sowa's Oct 12th response:
TJ
> why, in the formalization of predicate logic, was it decided
> that "Some X" would carry ontological commitment
Nobody made that decision. It's a fact of perception. Every
observation can always be described with just two operators:
existential quantifier and conjunction. No other operators can
be observed. They can only be inferred.
(1) If all ontological commitments have
to be based on direct observation, then we're right back to the Vienna Circle
and A. J. Ayer.
(2) And what is it that we directly observe?
A dog in front of me? Dogs, as Quine once pointed out, are ontological posits
on a par with the Greek gods, or with disease-causing demons. (I am aware that
this point, in particular, will likely serve to reinforce the belief, on the
part of many engineering types in this forum, that philosophy has nothing to do
with ontology engineering. That's something I want to discuss in a "contextualizing
discussion" I want to have before I pester the members of this forum with
questions and hypotheses about cognitive/diachronic semantics. What does talk
like that have to do with building real-world ontologies in ontology tools, in
OWL/RDF – ontologies that actually do something useful in the world?)
(3) I wouldn't talk about some dogs unless
I believed that some dogs exist. And if some dogs exist, then all dogs do, too.
Either there are dogs, or there aren't. If there are, then I can talk about
some of them, or about all of them. If there aren't, then unless I am explicitly
talking about non-existent things, I can't talk about some of them nor can I
talk about all of them, for the simple reason that none of them exist. To
repeat myself: if any of them exist, then all of them do.
(4) And I am, of course, completely aware
that trained logicians since Frege have been using predicate logic, and that,
at least since deMorgan, have been imputing to negation the power to create
and remove ontological commitment.
(5) Here's a quote from Paul Vincent Spade
(very important guy in
medieval logic and semantics):
"This doctrine of “existential import”
has taken a lot of silly abuse in the twentieth century. As you may know, the
modern reading of universal affirmatives construes them as quantified material
conditionals. Thus ‘Every S is P’ becomes (x)(Sx ⊃ Px), and is true, not false, if there are
no S’s. Hence (x)(Sx ⊃ Px) does not
imply (∃x)(Sx). And that
is somehow supposed to show the failure of existential import. But it doesn’t show anything
of the sort .... "
So Spade approaches this as the issue
of the existential import of universally quantified statements. He points out that,
from Ux(Dx --> Rx), we cannot infer Ex(Dx & Rx). The rest of the passage
attempts to explain why. I still either don't understand his argument, or I'm
not convinced by it. Why should "All dogs are renates" not be expressed
as Ux(Dx & Rx)?
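For reference, the textbook comparison of the two renderings runs roughly as follows (this is the standard treatment, not a reconstruction of Spade's full argument):

    % Standard rendering: vacuously true when there are no dogs.
    \forall x\,(Dx \rightarrow Rx) \text{ is true in every model where } \{x : Dx\} = \varnothing,
    \text{ so it does not entail } \exists x\,(Dx \wedge Rx).

    % The alternative rendering says far more than "All dogs are renates":
    \forall x\,(Dx \wedge Rx) \text{ asserts that every individual in the domain is both a dog and a renate.}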
From John's reply, I think he would say
that it's because we can only observe particular things; we can't observe all
things. But in the preceding points, I've tried to say why I don't find that
convincing.
(6) Simply the fact that decades of logicians
have not raised the concerns I have raised strongly suggests that I am
mistaken, and need to think more clearly about logic and ontological commitment.
But there is something that might make one hesitate to jump right to that
conclusion. It's Kripke's position on analytic a posteriori statements (which I
have difficulty distinguishing from Kant's synthetic a priori statements,
actually -- providing we assume that the metaphors of "analytic" as
finding that one thing is "contained in" another thing, and of "synthetic"
as bringing together two things first experienced as distinct, are just
metaphors, and don't work as solid explanations).
All analytic statements are "All"
statements, not "Some" statements. Kripke suggests that the statement
"Water is H2O" is analytic but a posteriori. In general, that "natural
kind" statements are all of this sort. Well, a posteriori statements are
ones verified by experience, and so that would take care of John's Peircean
point that only "Some" statements are grounded in what we experience.
I don't know how solid this line of thought
is. But if there is something to it, that might suggest that if we accept
Kripke's whole referential semantics / rigid designator / natural kinds ideas
(cf. Putnam's twin earth thought experiment also), then perhaps we should rethink
the traditional metalogical interpretation of "All dogs are renates" as
Ux(Dx --> Rx), and consider, instead, Ux(Dx & Rx).
Well, two summing-up points. The first
is that Paul Vincent Spade thinks that my position is "silly", and
John Sowa thinks that it's at least wrong. The second is that such discussions
do indeed take us beyond the concerns of ontology engineers, who just want to
get on with building working ontologies.
As I said above, I will address those
concerns of ontology engineers before I begin discussing cognitive semantics in
this Ontolog (Ontology + Logic) forum.
Tom, Ed, Leo, Paul, Henson,
TJ
> why, in the formalization of predicate logic, was it decided
> that "Some X" would carry ontological commitment
Nobody made that decision. It's a fact of perception. Every
observation can always be described with just two operators:
existential quantifier and conjunction. No other operators can
be observed. They can only be inferred.
EJB
> I was taught formal logic as a mathematical discipline, not
> a philosophical discipline. I do not believe that mathematics
> has any interest in ontological commitment.
That's true. And most of the people who developed formal logic
in the 20th c were mathematicians. They didn't worry about
the source or reliability of the starting axioms.
Leo
> most ontologists of the realist persuasion will argue that there
> are no negated/negative ontological things.
Whatever their persuasion, nobody can observe a negation. It's
always an inference or an assumption.
PT
> on the inadequacy of mathematical logic for reasoning about
> the real world, see Veatch, "Intentional Logic: a logic based on
> philosophical realism".
Many different logics can be and have been formalized for various
purposes. They may have different ontological commitments built in,
but the distinction of what is observed or inferred is critical.
HG
> I keep wondering if this forum has anything useful to offer the
> science and engineering community.
C. S. Peirce was deeply involved in experimental physics and
engineering. He was also employed as an associate editor of the
_Century Dictionary_, for which he wrote, revised, or edited over
16,000 definitions. My comments below are based on CSP's writings:
1. Any sensory perception is evidence that something exists;
a simultaneous perception of something A and something B
is evidence for (Ex)(Ey)(A(x) & B(y)).
2. Evidence for other operators must *always* be an inference:
(a) Failure to observe P(x) does not mean there is no P.
Example: "There is no hippopotamus in this
room"
can only be inferred iff you have failed to observe
a hippo and know that it is big enough that you would
certainly have noticed one if it were present.
(b) (p or q) cannot be directly observed. But you might infer
that a particular observation (e.g. "the room is lighted")
could be the result of two or more sources.
(c) (p implies q) cannot be observed, as Hume discussed at length.
(d) a universal quantifier can never be observed. No matter
how many examples of P(x) you see, you can never know that
you've seen them all (unless you have other information
that guarantees you have seen them all).
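Put as a worked inference (a sketch only; the predicate names are illustrative), the hippopotamus example in 2(a) combines the failed observation with a background premise:

    % Background premise: a hippo in the room would have been observed.
    \forall x\,\bigl(Hippo(x) \wedge InRoom(x) \rightarrow Observed(x)\bigr)
    % Failed observation:
    \neg\exists x\,\bigl(Hippo(x) \wedge InRoom(x) \wedge Observed(x)\bigr)
    % Inferred (not observed) conclusion:
    \therefore\ \neg\exists x\,\bigl(Hippo(x) \wedge InRoom(x)\bigr)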
TJ
> But now notice something: negation creates and removes ontological
> commitment. And this seems really strange. Why should negation do this?
The commitment is derived from the same background knowledge that
enabled you to assert (or prevented you from asserting) the negation.
> I'd also like to know if there are formal logics which do not
> impute this extravagant power of ontological commitment /
> de-commitment to the negation operator in predicate logics.
Most formal logicians don't think about these issues -- for the
simple reason that most of them are mathematicians. They don't
think about observation and evidence.
CSP realized the problematical issues with negation, but he also
knew that he needed to assume at least one additional operator.
And negation was the simplest of the lot. Those are the three
he assumed for his existential graphs. (But he later added
metalanguage, modality, and three values -- T, F, and Unknown.)
John
PS: The example "There is no hippopotamus in this room" came from
a remark by Bertrand Russell that he couldn't convince Wittgenstein
that there was no hippopotamus in the room. Russell didn't go
into any detail, but I suspect that Ludwig W. was trying to
explain the point that a negation cannot be observed.