Re: "Value of information without utilities"

Peter Szolovits (psz@mit.edu)
Tue, 25 May 1999 16:19:54 -0400

At 11:34 AM 5/25/99 +0800, Dr. Lian Wen Zhang wrote:
>...
>But this method seems to be flawed. Suppose the diagnosis is
>between D1 and D2 and, at the beginning, the belief is inclined to D1.
>Now consider two possible questions Q1 and Q2, whose answers are
>likely to be in favor of D1 and D2 respectively. Then the
>entropy method would probably choose Q1 over Q2.

Q2 will be chosen just in case its expected reduction in entropy of the
probability distribution over D1 and D2 is greater than that of Q1. To do so,
Q2 will need to lead to an expectation of a more skewed distribution (toward
D2) than Q1 does toward D1. Because you postulate that D1 is initially more
likely, this means that the likelihood ratio for Q2 will have to be larger than
for Q1. But this is just right, if what you are trying to do is to converge
most rapidly to a definitive answer.
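The criterion can be sketched numerically. The sketch below computes each question's expected reduction in the entropy of the distribution over D1 and D2; the priors and likelihoods are illustrative assumptions (not taken from any actual program), chosen so that D1 is initially favored but Q2 has the larger likelihood ratio toward D2:

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a distribution given as a list of probabilities."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def expected_posterior_entropy(prior, likelihoods):
    """Expected entropy over the diagnoses after observing a question's answer.

    prior: P(D_i) for each diagnosis.
    likelihoods[a][i]: P(answer a | D_i) for each possible answer a.
    """
    exp_h = 0.0
    for lik in likelihoods:
        # P(answer) = sum_i P(answer | D_i) P(D_i)
        p_ans = sum(l * p for l, p in zip(lik, prior))
        if p_ans == 0:
            continue
        # Posterior P(D_i | answer) by Bayes' rule
        posterior = [l * p / p_ans for l, p in zip(lik, prior)]
        exp_h += p_ans * entropy(posterior)
    return exp_h

# Illustrative numbers (assumptions, not from the original message):
prior = [0.7, 0.3]                # belief initially inclined toward D1
q1 = [[0.9, 0.3], [0.1, 0.7]]     # Q1's answers tend to favor D1
q2 = [[0.2, 0.95], [0.8, 0.05]]   # Q2 has a stronger likelihood ratio toward D2

h0 = entropy(prior)
for name, q in [("Q1", q1), ("Q2", q2)]:
    print(name, "expected entropy reduction:",
          round(h0 - expected_posterior_entropy(prior, q), 3))
```

With these particular numbers Q2 yields the larger expected reduction despite D1's head start, illustrating the point: a sufficiently large likelihood ratio toward the less likely diagnosis can win under the entropy criterion.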

Historically, practical programs employ mixed strategies. For example, Gorry's
sequential Bayesian diagnostic programs (e.g., Gorry, G. A. and G. O. Barnett
(1968). "Sequential Diagnosis by Computer." Journal of the American Medical
Association 205(12): 849-854) include special cases to consider diseases that
are unlikely but important not to overlook. In your example, suppose that D2
is unlikely, but easily treatable if detected and fatal otherwise. Then, there
might be a rule that if P(D2) exceeds some (low) threshold, then Q2 must be
asked even if the entropy method would only ask Q1.
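Such a mixed strategy amounts to a simple override on top of the entropy ranking. This is a minimal sketch of the idea only; the threshold value and function names are hypothetical, not Gorry's actual rule:

```python
def choose_question(p_d2, entropy_scores, threshold=0.02):
    """Mixed strategy: entropy-based choice, overridden by a safety threshold.

    p_d2: current probability of the unlikely-but-dangerous diagnosis D2.
    entropy_scores: dict mapping question name to expected entropy reduction.
    threshold: low probability above which D2 must be pursued (assumed value).
    """
    if p_d2 > threshold:
        # D2 is treatable if caught and fatal otherwise, so it must be
        # ruled in or out even when Q2 scores worse on entropy reduction.
        return "Q2"
    # Otherwise pick the question with the largest expected entropy reduction.
    return max(entropy_scores, key=entropy_scores.get)

print(choose_question(0.05, {"Q1": 0.30, "Q2": 0.10}))  # threshold fires -> Q2
print(choose_question(0.01, {"Q1": 0.30, "Q2": 0.10}))  # entropy rule -> Q1
```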

--Pete Szolovits