
Re: [ontolog-forum] Model or Reality

To: paola.dimaio@xxxxxxxxx
Cc: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Ed Barkmeyer <edbark@xxxxxxxx>
Date: Fri, 17 Aug 2007 15:23:34 -0400
Message-id: <46C5F5B6.6040504@xxxxxxxx>
Paola,

you wrote:

>  I think
> what I am trying to say here is that we have to model uncertainty.

I can agree with that, but I am not sure what it means.
I would have said we must try to make clear what assumptions we 
are making.  But there is always the problem that Pat Hayes calls 
the Horatio Principle:  There are more things in heaven and earth 
than are dreamt of in your philosophy.  For example, the person 
who makes the model may equate "camel" with "dromedary camel" in 
his head, because he is unaware that the world contains camels 
with two humps.  So the idea of a camel with two humps and the 
idea of a camel with five legs are equally meaningless to him.

And this is the real problem -- you can characterize the 
limitations of your model when you know you are deliberately 
limiting it, or when you know it is inaccurate.  But you cannot 
in any way capture in a model what you don't know that you 
don't know.

>> But the surprise failures are those that involve a factor that was not
>> considered at all, and not commonly considered in the trade.
>> How do you build an "X factor" defense for that?
> Simply like this: certainties+X = do not assume that because the bridge is 
> built according to a sound model it will stay up, because there are factors 
> out there that we don't yet know.  Then stick that somewhere prominent in 
> the engineering book.

My experience of engineers is that there is a prevalent hubris: 
many engineers seem unable to accept that there may be something 
relevant to their work that they don't know, and cannot learn in 
an afternoon.  But saying in the engineering handbook that there 
may be something relevant to a design that you don't know serves 
only to encourage intellectual humility.  It doesn't provide much 
in the way of guidance.

> What I am trying to stress here is that some science has the presumption 
> to be 'exact' (haha) simply because it ignores the 'unknown factor' by not 
> including uncertainty in the model,

I think you have a very broad definition of "science".  "Good 
science" is very careful to distinguish the limits of theory, to 
the extent that those limits are understood, and to characterize 
the domains in which a theory has been "validated" and the 
considerations that might invalidate it in somewhat larger 
domains.  Overstating the applicability of a theory is not 
"science".  *Hypothesizing* applicability of a theory beyond the 
known domain is what Jon Awbrey correctly described as a first 
step toward gaining new knowledge.

(In the mid-1960s one of my former supervisors published a paper 
on some phenomenon in electron imaging that his group had 
measured.  And at the end of the paper, he mentioned that they 
had also observed a phenomenon that they had neither measured nor 
characterized, subject of future work.  That was "good science". 
In 1968, a group at IBM published a characterization of that 
phenomenon, for which they later got the Nobel Prize in Physics.)

> I think we all agree that 'what is to be known' is infinite, while our 
> 'ability to know' is finite.

Sure.

> To work towards perfect knowledge -- okay, improved knowledge would be 
> enough -- we have to stop relying solely on what we can know, and we have 
> to learn how to study what we can't know for sure, because it seems to 
> cause so much trouble.  I accept it if you find this cognitive method a 
> little disconcerting at first.  It is also called 'possibility theory' 
> (http://en.wikipedia.org/wiki/Possibility_theory).

Please note that Zadeh's work assumes that one can characterize 
sets of possible things.  It does not deal with unknown concepts. 
It deals with unknown 'things' that are instances of known or at 
least postulated concepts.

It may be that a sneeze in China affects the successful return of 
the spacecraft Discovery, but for possibility theory, I have to 
postulate the possibility of a relationship, and then estimate 
it.  I cannot use possibility theory to describe the effects of 
unidentified cosmic events, or unidentified Chinese events, 
unless I create a class called "unidentified cosmic events" and 
estimate the possibilities of its unidentified members.
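That limitation is easy to see in a few lines of code.  The sketch 
below is my own illustration, not anything from Zadeh: the 
possibility distribution `pi` and the event names are invented.  The 
point is that the theory can only assign degrees of possibility to 
outcomes we have already thought to name.

```python
# A minimal sketch of possibility theory over a finite universe of
# discourse.  The distribution `pi` and the event names are hypothetical.

def possibility(pi, event):
    """Pi(A) = max of pi(w) over outcomes w in A (0 for the empty event)."""
    return max((pi[w] for w in event), default=0.0)

def necessity(pi, event):
    """N(A) = 1 - Pi(complement of A): the degree to which A must occur."""
    complement = set(pi) - set(event)
    return 1.0 - possibility(pi, complement)

# Before the theory can say anything, we must first *name* a class of
# events and assign each member a degree of possibility.  Events we never
# thought to name simply do not appear in the universe of discourse.
pi = {"solar_flare": 1.0, "debris_strike": 0.3, "no_event": 1.0}

print(possibility(pi, {"solar_flare", "debris_strike"}))  # 1.0
print(necessity(pi, {"no_event"}))                        # 0.0
```

Note that the necessity of "no_event" is zero precisely because a 
solar flare remains fully possible; nothing here models events 
outside the three we enumerated.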

> There is some interesting work under 'uncertainty and fuzzy modeling n civil 
> engineering' approach somewhere. I am sure some will find fuzzyness 
> objectionable in hard science,    (016)

In some respects, scientists may behave as if there were no 
fuzziness, but that is a consequence of complete personal 
acceptance of a theory.  If asked to make an on-the-record 
scientific judgement about that theory, those same scientists 
will admit that it is not a certainty.  It only explains 
everything we have been able to observe.

Fuzziness is an unavoidable element of many aspects of hard 
science.  We know about the inaccuracies of measurements and 
observations, and we understand the phenomenon of uncontrolled 
variables.  So we require replication of experiments in different 
environments, and we perform statistical analyses of results, in 
order to achieve some measurable degree of certainty.
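That replicate-and-analyze discipline can be sketched in a few 
lines.  This is my own illustration: the measurements are invented, 
and using the normal approximation (rather than Student's t) for the 
95% interval is an assumed simplification.

```python
# Sketch: pool replicated measurements and report an uncertainty bound.
# The numbers are hypothetical; the normal approximation is an assumption.
from statistics import NormalDist, mean, stdev

measurements = [9.81, 9.79, 9.83, 9.80, 9.82, 9.78]  # invented replications

m = mean(measurements)
s = stdev(measurements)          # sample standard deviation
n = len(measurements)
z = NormalDist().inv_cdf(0.975)  # ~1.96 for a two-sided 95% interval
half_width = z * s / n ** 0.5    # standard error of the mean, scaled to 95%

print(f"{m:.3f} +/- {half_width:.3f}")
```

The interval quantifies only the scatter we observed; an 
uncontrolled variable that never varied across the replications 
contributes nothing to it.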

> Our models are based on simplified assumptions, such as:
> "For simplicity, assume that the universe of discourse Ω is a finite set, 
> and assume that all subsets are measurable."
> Too bad that the universe of discourse is infinite, and only a few of the 
> subsets are measurable - that's the real world.

Indeed.  And that is why the mathematician begins by stating an 
assumption that was made, and that may be necessary to all of the 
results that follow.  The objective was to state what is known, 
not to make the unsupported assertion that the results apply to 
the useful cases that don't match the assumptions.

Sometime around 1960, a Russian mathematician noted that certain 
methods for producing numerical integrations of many important 
engineering equations seemed to work well for many cases in which 
the boundaries of the geometric objects had corners, even though 
they had only been proved to work for objects whose boundaries 
were smooth curves.  In 1962, he published a paper defining a 
case-by-case mathematics in which one could construct proofs for 
the correctness of the results for ugly boundaries.  In the next 
ten years, 100+ mathematicians got their Ph.D.s applying the 
Sobolev method to different engineering problems.

The engineers got valid results doing things that the 
mathematicians would not in 1960 have agreed were at all 
"credible".  But the known-but-unresolved factor was analyzed, 
and new knowledge was developed.  That is how science works.

> You are assuming that you can place certainty upon certain factors, while 
> there is some uncertainty lurking beneath your assumption of certainty 
> that I think we should put in our equations.

Mathematics is a special case.  It is a pure system in which 
there can exist a kind of certainty that has only occasional 
relevance to the real world.  Physical Science is never 
"certain".  Scientists DO identify the uncertainties in their 
observations and measurements.  But when they assume the validity 
of a theory as a foundation for taking the next step, they 
sometimes forget that there were unresolved issues at the edges 
of the foundational theory.  And then someone with better 
measurement technology makes the critical observation and asks 
the right question.  Quantum theory was born of the observation 
that, although matter and energy, as a continuum, are conserved 
in an atomic system, "spin" is not.

>>> The stability factor in a model can only be temporary, and must be
>>> balanced with the 'uncertainty' factors at the application stage.
>> We cannot accommodate factors we know nothing about, or do not understand.
> I think we can.  But then again, I am not a civil engineer.

Neither am I.

We can agree to provide a list of things that we are aware may be 
related but haven't analyzed.  We can agree to state the limits 
on what we were able to observe.  But I don't agree that we can 
say anything meaningful about things we haven't thought of and 
can't characterize at all.  And sometimes one of those will be 
why the bridge falls down.

The Pilgrims in New England in 1620 built a colony at the same 
latitude as Bilbao, Spain.  And the first winter nearly killed 
them all, because in Bilbao the temperature does not reach -20 
and stay below freezing for weeks, but in Plymouth it did.  Their 
temperature-to-latitude model was based entirely on observations 
on the European side of the gyre.  How could they have known 
about the X-factor?

-Ed

Edward J. Barkmeyer                        Email: edbark@xxxxxxxx
National Institute of Standards & Technology
Manufacturing Systems Integration Division
100 Bureau Drive, Stop 8263                Tel: +1 301-975-3528
Gaithersburg, MD 20899-8263                FAX: +1 301-975-4694

"The opinions expressed above do not reflect consensus of NIST,
  and have not been reviewed by any Government authority."

Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx
