Bob,
>Do you see value in considering Belief Networks, especially Ken
>Baclawski's discussions last Jan. 25th, in this context?
Ah, you sure do know how to put me on my soapbox! :-)
In most practical problems, any set of axioms that is faithful to the
domain leaves the truth-values of many important propositions
unspecified. But it is usually the case that we do have some
information germane to their truth-values.
As a simple example, consider a medical condition C, a symptom S
that tends to be indicative of C, and a diagnostic test T for C.
Suppose a patient walks into a doctor's office complaining of S.
What can we say? What if the doctor administers T and it comes up
positive? Now what can we say?
Virtually nothing, if all we can do is assert axioms of classical
logic. Our medical ontology will say that C is a medical condition,
and S is a symptom of C, and T is a test for C. But it won't say
anything else. We can't prove that she does or does not have the
condition.
But we usually know a great deal more! Doctors bet their careers and
patients bet their lives on our being able to say more. What we know
is statistical.
Going back to our example, suppose we know:
- The base rate: 1 in 1000 people have C
- The sensitivity of S: 80% of people with C complain of S
- The specificity of S: 2% of people without C complain of S
- The sensitivity of T: 98% of people with C test positive on T
- The specificity of T: 1% of people without C test positive on T
If we have no information about a patient except that she comes from
a population with this base rate, we would assign a 1/1000
probability that she has C. That is, Pr(C) = 1/1000. When we learn
she is complaining of S, we can calculate a new probability, the
probability that she has C given that she complains of S. We write
this new probability as Pr(C|S) (the probability of C given S). We
calculate it as follows (this is the well-known Bayes Rule):
Pr(C|S) =             Pr(S|C) Pr(C)
          -------------------------------------
          Pr(S|C) Pr(C) + Pr(S|not-C) Pr(not-C)

        = 0.8 x 0.001 / (0.8 x 0.001 + 0.02 x 0.999)
If you do the calculation, you will find that the probability has
increased to about 4%. This is still a small number, but it is more
than an order of magnitude higher than it was previously. It is
still unlikely that the patient has the condition, but a prudent
physician will be concerned enough to order a diagnostic test.
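For anyone who wants to check the arithmetic, here is the same update
in a few lines of plain Python (the variable names are mine):

```python
# Bayes-rule update Pr(C|S) using the numbers from the example above.
p_c = 0.001             # base rate: Pr(C)
p_s_given_c = 0.80      # Pr(S|C): people with C who complain of S
p_s_given_not_c = 0.02  # Pr(S|not-C): people without C who complain of S

numerator = p_s_given_c * p_c
denominator = numerator + p_s_given_not_c * (1 - p_c)
p_c_given_s = numerator / denominator
print(round(p_c_given_s, 4))  # 0.0385, i.e. about 4%
```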
If we give the patient T and she tests positive, we can do a similar
calculation to find the conditional probability that the patient has
C given that she complains of S and tests positive on T:
Pr(C|S,T) =              Pr(T|C,S) Pr(C|S)
            ---------------------------------------------
            Pr(T|C,S) Pr(C|S) + Pr(T|not-C,S) Pr(not-C|S)
The information I gave you won't answer this question. I gave you
Pr(T|C) and Pr(T|not-C), but we need Pr(T|C,S) and Pr(T|not-C,S). To
allow me to answer the question, I will make an additional reasonable
assumption -- that the probabilities for the result of T do not
depend on whether or not the patient complains of S. That is, S and
T are conditionally independent given C.
This is where belief networks come in. They provide a simple
graphical language for representing this kind of conditional
dependence and independence information, for computing the resulting
probability values, and for updating the probabilities given
evidence.
The belief network for this problem is (in cumbersome ascii):

      C
     / \
    /   \
   v     v
   S     T
That is, the probabilities of S and T depend on C, but given C, they
don't depend on each other (because there's no arrow from S to T).
With these independence assumptions, I can do a similar calculation
to the above, and obtain:
Pr(C|S,T) = 0.98 x 0.0385 / (0.98 x 0.0385 + 0.01 x 0.9615)
In other words, given that the patient complains of S and has a
positive result on T, there is now about an 80% chance that the
patient has C.
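The same shortcut the belief network licenses -- multiplying the two
likelihoods because S and T are conditionally independent given C --
can be checked in plain Python (again, the variable names are mine):

```python
# Second update Pr(C|S,T), with S and T conditionally independent
# given C, so the two likelihoods can simply be multiplied.
p_c = 0.001                   # Pr(C)
p_s_c, p_s_nc = 0.80, 0.02    # Pr(S|C), Pr(S|not-C)
p_t_c, p_t_nc = 0.98, 0.01    # Pr(T|C), Pr(T|not-C)

joint_c = p_c * p_s_c * p_t_c            # Pr(C, S, T)
joint_nc = (1 - p_c) * p_s_nc * p_t_nc   # Pr(not-C, S, T)
p_c_given_st = joint_c / (joint_c + joint_nc)
print(round(p_c_given_st, 3))  # 0.797, i.e. about 80%
```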
Now that's a useful result! And we could not get it from a logical reasoner.
This kind of reasoning is performed all the time in any number of
application domains in which we have useful knowledge that falls
short of logical proof. In fact, logical proof is more the exception
than the rule except in domains where we have explicitly legislated
the rules -- and even then, there are often exceptions!
I think ontologies need to be able to represent probabilistic
relationships like these. For example, in this problem, I have used
several concepts that belong in a medical ontology:
- Base rate of a condition
- Evidence for a condition
- Symptom of a condition (a subtype of evidence)
- Test for a condition (a subtype of evidence)
- Sensitivity of a type of evidence
- Specificity of a type of evidence
- Conditional independence of symptoms/tests given a condition
Probabilistic ontologies (see http://www.pr-owl.org) can represent
these kinds of things.
Going back to our discussion about model theory, a probabilistic
ontology assigns probabilities to sets of Tarski interpretations in a
logically consistent (the Bayesians call it "coherent") way. That
is, when our logical axioms (ontology plus problem-specific knowledge
plus evidence obtained to date) leave truth-values unspecified, we can
provide likelihood information that enables us to assign
probabilities for hypotheses of interest, and we can update those
probabilities as more evidence (new axioms) becomes available.
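To make the "probabilities over interpretations" picture concrete,
here is a small sketch in plain Python (my illustration, not any
ontology tool): each truth assignment to C, S, T is a possible world,
the belief network's factorization Pr(C) Pr(S|C) Pr(T|C) assigns each
world a probability, and conditioning on evidence is just restricting
to the worlds where the evidence holds.

```python
from itertools import product

# Distribution over the 8 possible worlds (truth assignments to C, S, T),
# built from the factorization Pr(C) Pr(S|C) Pr(T|C).
p_c = 0.001
p_s = {True: 0.80, False: 0.02}   # Pr(S=true | C)
p_t = {True: 0.98, False: 0.01}   # Pr(T=true | C)

worlds = {}
for c, s, t in product([True, False], repeat=3):
    pr = p_c if c else 1 - p_c
    pr *= p_s[c] if s else 1 - p_s[c]
    pr *= p_t[c] if t else 1 - p_t[c]
    worlds[(c, s, t)] = pr

# Coherence: the probabilities of the worlds sum to 1.
assert abs(sum(worlds.values()) - 1.0) < 1e-12

# Conditioning on evidence = restricting to worlds where it holds.
evidence = {w: p for w, p in worlds.items() if w[1] and w[2]}  # S, T true
p_c_given_st = (sum(p for (c, _, _), p in evidence.items() if c)
                / sum(evidence.values()))
print(round(p_c_given_st, 3))  # 0.797, matching the update above
```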
Kathy
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx