Probability/Conditional Probability

In some situations we need a new kind of probability: the probability of an event given that some other event is known to have occurred.

Examples

We throw two dice: the probability of getting double six is 1/36. Suppose we have thrown the dice, but they are still under the cup: the probability of double six is still considered to be 1/36. Now someone else looks under the cup at the result and tells us that at least one of the dice shows a six. What is now the probability of double six? Although we ask for the "probability" of double six, it is not the original probability we are talking about. It is a new kind of probability of the event "double six" under the condition that we know another event has occurred, namely the event that at least one of the dice shows a six. We call it "conditional probability". It indicates how likely the event "double six" is amongst all outcomes in which at least one of the dice shows a six. In 11 of the 36 equally likely outcomes at least one of the dice shows a six. Thus the probability that at least one die shows a six is 11/36, while the probability of double six is 1/36. Hence, the conditional probability of double six, given that at least one die shows a six, is (1/36)/(11/36) = 1/11. We write this conditional probability as:

P( "double six" | "at least one six" ),

using the same letter P as for the original probability, and separating the event whose conditional probability we consider from the event that states the condition by a vertical line ("|").
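
To make the counting explicit, here is a minimal Python sketch (an illustration added for this example, not part of the derivation) that lists all 36 equally likely outcomes and computes the conditional probability by counting:

from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of throwing two dice.
outcomes = list(product(range(1, 7), repeat=2))

at_least_one_six = [o for o in outcomes if 6 in o]         # 11 outcomes
double_six = [o for o in at_least_one_six if o == (6, 6)]  # 1 outcome

# Favourable outcomes within the condition, divided by
# all outcomes satisfying the condition.
print(Fraction(len(double_six), len(at_least_one_six)))    # 1/11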

As another example, we pick an inhabitant of the UK at random. The probability that the chosen person is a woman (W) is 1/2; it is nothing other than the fraction of women amongst the British population. What if we are informed that the chosen person works in a hospital (H)? The probability then comes down to the fraction of women amongst hospital staff, let us say 0.7. So we have:

P(W) = 0.5

and

P(W|H) = 0.7.
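
The same value can be read off from counts, as the next sketch shows; the population figures below are purely hypothetical and chosen only so that the fraction of women amongst hospital staff is 0.7:

from fractions import Fraction

population = 67_000_000          # hypothetical size of the UK population
hospital_staff = 1_500_000       # hypothetical number of people working in a hospital (H)
women_in_hospitals = 1_050_000   # hypothetical number of women amongst them (W and H)

p_H = Fraction(hospital_staff, population)
p_W_and_H = Fraction(women_in_hospitals, population)

# P(W|H) = P(W and H) / P(H) = fraction of women amongst hospital staff.
print(p_W_and_H / p_H)           # 7/10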

From these examples we arrive at the following definition.

Definition

The conditional probability of an event A, given (the occurrence of) the event B, is defined as:

P(A|B) = \frac{P(A \cap B)}{P(B)},

provided P(B) > 0. In the case where P(B) = 0, the conditional probability of A given B is meaningless, but for completeness we define P(A|B) = 0 in that case.


It follows from this formula that P(A|B)P(B) = P(A \cap B).
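
For a finite sample space of equally likely outcomes, the definition translates directly into code. The helper below (the names are our own, illustrative choices) applies the formula to the dice events from the first example and checks the multiplication rule:

from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # two dice, 36 equally likely outcomes

def prob(event):
    # P(event) when all outcomes are equally likely.
    return Fraction(len(event), len(outcomes))

def conditional_probability(A, B):
    # P(A|B) = P(A and B) / P(B), assuming P(B) > 0.
    return prob(A & B) / prob(B)

A = {o for o in outcomes if o == (6, 6)}    # double six
B = {o for o in outcomes if 6 in o}         # at least one six

print(conditional_probability(A, B))                           # 1/11
print(conditional_probability(A, B) * prob(B) == prob(A & B))  # True: multiplication rule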

Independent events

Intuitively, we would like two events, A and B, to be independent if P(A|B) = P(A), that is, if A is as likely to occur whether or not B has occurred. In this case, the occurrence of A is independent of the occurrence of B.


However, assuming P(B) > 0, P(A|B) = P(A) \iff \frac{P(A \cap B)}{P(B)} = P(A) \iff P(A \cap B) = P(A)P(B).


Therefore, A is independent of B \iff B is independent of A \iff P(A \cap B) = P(A)P(B).
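
As a quick numerical check of this criterion, the sketch below (the choice of events is ours, for illustration) tests the product formula for events on the two-dice sample space:

from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))

def prob(event):
    return Fraction(len(event), len(outcomes))

A = {o for o in outcomes if o[0] % 2 == 0}   # first die is even
B = {o for o in outcomes if o[1] == 6}       # second die shows six
C = {(6, 6)}                                 # double six

print(prob(A & B) == prob(A) * prob(B))      # True:  A and B are independent
print(prob(A & B) / prob(B) == prob(A))      # True:  P(A|B) = P(A)
print(prob(C & B) == prob(C) * prob(B))      # False: "double six" is not independent of B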