Q2 will be chosen just in case its expected reduction in entropy of the
probability distribution over D1 and D2 is greater than that of Q1. For that
to happen, Q2 must be expected to skew the distribution more strongly (toward
D2) than Q1 skews it toward D1. Because you postulate that D1 is initially more
likely, this means that the likelihood ratio for Q2 will have to be larger than
for Q1. But this is just right, if what you are trying to do is to converge
most rapidly to a definitive answer.
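The expected-entropy-reduction criterion can be sketched as follows. All of the
numbers here (the prior and the answer likelihoods) are made up purely for
illustration; note how Q2's stronger likelihood ratio lets it overcome the
prior favoring D1:

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def expected_posterior_entropy(prior, likelihoods):
    """Expected entropy over diseases after asking one yes/no question.
    prior       -- [P(D1), P(D2)]
    likelihoods -- [P(yes|D1), P(yes|D2)]
    Averages the posterior entropy over the two possible answers,
    weighted by each answer's marginal probability."""
    total = 0.0
    for answer_lik in (likelihoods, [1 - l for l in likelihoods]):
        joint = [p * l for p, l in zip(prior, answer_lik)]
        p_answer = sum(joint)
        if p_answer > 0:
            posterior = [j / p_answer for j in joint]
            total += p_answer * entropy(posterior)
    return total

prior = [0.7, 0.3]   # D1 initially more likely (hypothetical numbers)
q1 = [0.8, 0.3]      # [P(yes|D1), P(yes|D2)] for Q1: points toward D1
q2 = [0.2, 0.95]     # Q2 carries a much larger likelihood ratio toward D2

h0 = entropy(prior)
gain_q1 = h0 - expected_posterior_entropy(prior, q1)
gain_q2 = h0 - expected_posterior_entropy(prior, q2)
# Ask whichever question has the larger expected entropy reduction.
best = "Q2" if gain_q2 > gain_q1 else "Q1"
```

With these particular likelihoods Q2 wins despite D1's higher prior, which is
the convergence behavior described above.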
Historically, practical programs have employed mixed strategies. For example,
Gorry's sequential Bayesian diagnostic programs (e.g., Gorry, G. A. and G. O.
Barnett (1968). "Sequential Diagnosis by Computer." Journal of the American
Medical Association 205(12): 849-854) include special cases to consider diseases that
are unlikely but important not to overlook. In your example, suppose that D2
is unlikely, but easily treatable if detected and fatal otherwise. Then, there
might be a rule that if P(D2) exceeds some (low) threshold, then Q2 must be
asked even if the entropy method would only ask Q1.
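Such an override rule might be sketched like so; the function name, the
threshold value, and the overall structure are hypothetical, not Gorry's
actual code:

```python
def choose_question(p_d2, best_by_entropy, threshold=0.02):
    """Safety override for a low-probability but critical disease.
    p_d2            -- current probability of the dangerous disease D2
    best_by_entropy -- "Q1" or "Q2", whichever the entropy method prefers
    threshold       -- low cutoff (hypothetical value) above which Q2
                       must be asked regardless of entropy considerations
    """
    if p_d2 > threshold:
        return "Q2"          # forced: D2 too important to overlook
    return best_by_entropy   # otherwise defer to the entropy criterion
```

For instance, even if the entropy method picks Q1, `choose_question(0.05,
"Q1")` returns `"Q2"` because P(D2) = 0.05 exceeds the 0.02 cutoff.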
--Pete Szolovits