Re: Just one message on random variables

Kevin S. Van Horn (ksvhsoft@xmission.com)
Tue, 23 Jun 1998 23:58:24 -0600

Paul Krause wrote:

> My claim is (provisionally, I don't wish to be dogmatic) that a
> probability is best viewed as a property of the world, and not of a
> person.

Probabilities may be *derived* from properties of the world, given a
particular state of knowledge. But there is still a dependence on that
state of knowledge. To repeat what I said before:

I challenge you to show me a probability that is an actual physical
property, and not just a statement of one's state of knowledge.

> For example, I play a game with a friend involving bets on balls in a
> bag. I'm no good at judgement, and also a rotten cheat. So I dial up my
> friend on my mobile, and whilst he is out of the room dealing with the
> phone I quickly empty the bag of balls and count ten reds in a bag of
> fifty red and black.
> Is that a property of me or the ball bag? (I won the game)

Let's look at this in detail. I assume you are trying to make a bet on
whether a red or black ball will be drawn from the bag. Before peeking
in the bag, you would have to figure the probability of a red ball like
this (I is your initial knowledge, i.e., what you know before peeking
in the bag):

P(red drawn | I) =
    (SUM r,b :: P(r red, b black | I) *
                P(red drawn | r red, b black, I))

After peeking in the bag, you now have more information (a different state
of knowledge), and so you can base your bet on

P(red drawn | r red, b black, I)

which is a probability conditioned on more information than the original.
If you know nothing about how the ball is to be chosen, then symmetry
dictates the probability assignment

P(red drawn | r red, b black, I) = r/(r+b).
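
As a concrete check, here is the same calculation as a minimal Python
sketch. The uniform prior over compositions below is my illustrative
assumption (the initial knowledge I says nothing about the mix); it is
not part of the original problem:

    from fractions import Fraction

    N = 50   # total balls, as in the example above

    # Before peeking: marginalize over the unknown composition,
    # using (by assumption) a uniform prior on r = 0..N red balls.
    p_before = sum(Fraction(1, N + 1) * Fraction(r, N)
                   for r in range(N + 1))
    print(p_before)            # 1/2 -- symmetry of the state of knowledge

    # After peeking: the composition is known (10 red, 40 black).
    r, b = 10, 40
    print(Fraction(r, r + b))  # 1/5

The same bag gives 1/2 or 1/5, depending only on what the bettor knows.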

However, suppose that in peeking at the bag you note the positions of the
balls, and you are the one who gets to draw the ball from the bag. If
your memory is good enough and you are careful not to disturb the balls
(or you have precise measurements of all disturbances to the bag, and a
powerful computer to calculate just where the balls will end up), then

P(red drawn | r red, b black, ball positions, you choose, I)

can be very close to 1 or 0, depending on what you choose to do.
So we have arrived at a variety of different probabilities *for the same
bag*, whose physical properties have not changed in the slightest. Only
your state of knowledge has changed, and this accounts for the difference.

> I was thinking about this while tending my collection of orchids.
> What is the probability that I will start to repot my orchids in rockwool
> next year? [...] if we all gave [probabilities],
> these would cluster around the low end of the 0-1 scale.
> This suggests that there is a consistent empirical relation system in
> action [...]

I would say that you are getting similar numbers because you are dealing
with similar states of knowledge. Someone who knew very little about
orchids or rockwool (like me) might give very different numbers. Someone
with very detailed information also might give different numbers. To use
a morbid example, suppose your doctor discovers that you have a fatal
disease that almost always kills within a few months. Then the doctor
will rightfully assign a probability very, very close to zero of your
repotting orchids in rockwool next year.

> So I still claim (provisionally) that (for me) it is helpful to think of
> probability as a measure. It is an "indirect" measure, and as such is
> never (?) possible to measure accurately.

What you measure are actual physical quantities. Probabilities are
*assigned* based on the results of those measurements, observations,
experience, etc. The question of how to turn various kinds of information
into a probability distribution is an ongoing research question, but some
progress has been made. Considerations of symmetry (in your state of
knowledge) can be used to argue for a particular probability assignment.
In addition, the principle of maximum entropy can be used to turn the
results of certain kinds of measurements into probability distributions.
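
To make the maximum-entropy idea concrete, here is a minimal Python
sketch (my illustration, not from the original discussion) of Jaynes's
dice example: the only measurement is that the long-run average of a
die's rolls is 4.5. The maximum-entropy distribution consistent with
that mean has the form p_i proportional to exp(lam * i), and the
multiplier lam can be found by bisection:

    import math

    def maxent_die(target_mean, faces=range(1, 7)):
        # The constrained mean is increasing in lam, so bisection
        # finds the multiplier that matches the measurement.
        def mean(lam):
            w = [math.exp(lam * f) for f in faces]
            return sum(f * x for f, x in zip(faces, w)) / sum(w)
        lo, hi = -10.0, 10.0
        for _ in range(200):
            mid = (lo + hi) / 2.0
            if mean(mid) < target_mean:
                lo = mid
            else:
                hi = mid
        lam = (lo + hi) / 2.0
        w = [math.exp(lam * f) for f in faces]
        z = sum(w)
        return [x / z for x in w]

    # A reported average of 4.5 shifts probability toward high faces:
    print(maxent_die(4.5))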

> You obtain a probability, then, by:
> [...]
> Repeating the situation you are interested in

This is a rather impoverished view of probability theory, one that
unnecessarily limits its application to situations where such repetition
is possible.

> if several people can have significantly different probabilities for the
> same situation, then I just do not know who to believe.

The probabilities you get from experts 1 and 2 are

P(A | expert 1's knowledge and experience)
and
P(A | expert 2's knowledge and experience)

These are distinct entities, based on different conditioning information;
you can only expect them to be similar if the conditioning information is
similar (they have similar knowledge and experience). What *you* have
to deal with is

P(A | your own knowledge and experience),

and the conditioning information here is different from either of the
above two. However, your knowledge includes the probability assignments
the experts claim represent their best judgment. (Of course, they could
be lying...) How to combine this information to produce your own
probability assignment is a question I don't think anyone has a
satisfactory answer to yet. I imagine, though, that important
considerations would be the reliability of the experts,
the similarity in their backgrounds, etc.
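
For what it is worth, one heuristic that is often proposed (my example,
not something the above endorses, and not the satisfactory answer asked
for) is a weighted linear pool, sketched here in Python:

    def linear_pool(probs, weights):
        # Weighted average of the experts' stated probabilities for A.
        # The weights encode *your* judgment of each expert's
        # reliability; choosing them is the hard, unsolved part.
        total = sum(weights)
        return sum(p * w for p, w in zip(probs, weights)) / total

    # Two experts report 0.9 and 0.6; you trust the first twice as much:
    print(linear_pool([0.9, 0.6], [2.0, 1.0]))   # 0.8 (up to float rounding)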

Kevin S. Van Horn
ksvhsoft@xmission.com