ontolog-forum

Re: [ontolog-forum] Ontology, Information Models and the 'Real World': C

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Waclaw Kusnierczyk <Waclaw.Marcin.Kusnierczyk@xxxxxxxxxxx>
Date: Mon, 28 May 2007 15:16:45 +0200
Message-id: <465AD63D.6010800@xxxxxxxxxxx>
KCliffer@xxxxxxx wrote:
> An observation and some thoughts emerging from it:
>  
> Note how much discussion was generated by a simple unintentional error     (01)

not that much of a discussion.  i had actually sent the question to Pat 
behind the scenes first, but i was afraid that it had been filtered away 
into the spambox, so i repeated it here.    (02)

but you're right;  in this case it could be said that the intention was 
obvious (though Pat may equally well have erred in the formal axiom 
rather than in the free text), but in general a mistake like that might 
lead to serious problems.  intelligence, roughly speaking, is, among 
other things, about being able to recover from this sort of problem 
despite apparent contradiction.    (03)

> in coding/terminology - an instance of differences in what was meant and 
> perceived by a proposition - in this case a type of instance not 
> included in our recent previous discussion (simple unintentional error 
> in expression). Had the proposer fully reviewed and revised the 
> proposition, it would have sailed smoothly through the discussion 
> (compared to the actual result).    (04)

we should not exaggerate too much...    (05)

>  
> Note - I mean to cast no aspersions - I make plenty of such mistakes and 
> am the first to hope for them to be excused or treated with no negative 
> judgment, and furthermore for them to be corrected by myself or others 
> before any negative effects occur due to them. My purpose here is to 
> point out another kind of example that systems must take into account 
> when dealing with categorizing or handling propositions - their meaning 
> may vary or be uncertain for many reasons, including simple error in 
> composition, as well as differences in perspectives, perceptions, 
> experience, etc.    (06)

perhaps it would be worth specifying what 'the meaning of a proposition 
may vary' means.  do you mean that a proposition may have different 
meanings, or that *the* meaning of a proposition -- whatever it is -- 
may undergo changes?  clearly, a proposition may be true or false on 
different occasions, but the truth or falsity of a proposition is not 
its meaning (it seems).    (07)

> In fact, one of the characteristics of a well-functioning system is 
> that it can accommodate and correct such errors through its functional 
> processes, without causing "collateral damage" to the fallible human 
> person involved and that person's ability to contribute constructively 
> to the functioning of the system, and without negatively affecting 
> other aspects of the functioning of the system -- as, I might point 
> out, appears to have eventually happened here, as far as I can tell, 
> to this discussion's credit.    (08)

so far, formal systems seem to be more sensitive to such errors than 
humans, with perhaps the exception of logicians.  for the most part, the 
goal of ai has been to make machines think like humans, not the 
opposite.  (ai is itself a topic for a separate and long discussion.)    (09)

> The stakes in such functionality depend on the functional purpose of the 
> system - for example if it's a medical system in which lives or health 
> are at stake, the importance of such robustness with respect to errors 
> is obvious. In other kinds of systems, the nature and importance of how 
> they deal with error may not be so obvious. In complex systems, small 
> variations can have surprisingly great and hard-to-predict effects 
> (sometimes represented by the "butterfly effect," in which a butterfly's 
> wing-flapping theoretically could result in a hurricane elsewhere in the 
> world). Stories abound about how small, understandable human errors have 
> had disastrous results in systems that were not robust enough to 
> accommodate and correct them or correct for their effects (including in 
> high-stakes systems).    (010)

i hope there will be no 'asymmetric-reflexive' effect here.    (011)

vQ    (012)

