Conditional Probability

Tags: Statistics, Bayesian
Date: Sep 26, 2019
Description: Introducing conditional probability and independence of events. Bayes' rule comes into play as well.
Slug: mathematical-statistics-conditional-probability

The probability of an event will sometimes depend upon whether we know that other events have occurred. This is easier to explain with an example.

Conditional probability

Suppose we roll two fair six-sided dice. What is the probability that the sum of the two dice is 8?
Using the procedure described before, we can easily get $P(\text{sum is } 8) = \frac{5}{36}$, since 5 of the 36 equally likely outcomes, namely $(2,6), (3,5), (4,4), (5,3), (6,2)$, have a sum of 8.
What if we knew the first roll was 3? That is another event: $\{\text{the first roll is } 3\}$.
Given the first die is 3, requiring the sum to be 8 is equivalent to requiring the second roll to be 5. So the probability is $\frac{1}{6}$.
Formally speaking, let $E$ be the event that the sum is 8, and $F$ be the event that the first roll is 3. The conditional probability of $E$ given $F$ is denoted $P(E \mid F)$, and $P(E)$ is the unconditional probability of $E$. If $P(F) > 0$, then

$$P(E \mid F) = \frac{P(EF)}{P(F)}.$$

To understand this, keep in mind that any event $E$ can be decomposed into $E = EF \cup EF^c$; once $F$ is known to have occurred, only the $EF$ part of $E$ remains possible, and its probability is rescaled by $P(F)$. From the equation we may derive

$$P(EF) = P(F)\,P(E \mid F).$$
Now we can revisit the example above:

$$P(E \mid F) = \frac{P(EF)}{P(F)} = \frac{1/36}{1/6} = \frac{1}{6}.$$
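As a quick sanity check, here is a small Python sketch (not from the original post) that enumerates all 36 equally likely outcomes and computes both probabilities by counting:

```python
from fractions import Fraction

# All 36 equally likely outcomes of rolling two fair dice
outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]

E = [o for o in outcomes if sum(o) == 8]   # sum is 8
F = [o for o in outcomes if o[0] == 3]     # first roll is 3
EF = [o for o in E if o in F]

P_E = Fraction(len(E), len(outcomes))                  # unconditional: 5/36
P_F = Fraction(len(F), len(outcomes))                  # 1/6
P_E_given_F = Fraction(len(EF), len(outcomes)) / P_F   # conditional: 1/6

print(P_E, P_E_given_F)   # 5/36 1/6
```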
Conditional probability can also be generalized to more than two events using the multiplication rule:

$$P(E_1 E_2 \cdots E_n) = P(E_1)\,P(E_2 \mid E_1)\,P(E_3 \mid E_1 E_2) \cdots P(E_n \mid E_1 E_2 \cdots E_{n-1}).$$

Cards example

Suppose we have a deck of 52 cards, and we randomly divide them into 4 piles of 13 cards each. Compute the probability that each pile has exactly 1 ace. Note that there are four suits in a deck of cards: Hearts, Diamonds, Clubs and Spades.
We can define events as follows:

$E_1$ = {the ace of Hearts is in any one of the piles},
$E_2$ = {the aces of Hearts and Diamonds are in different piles},
$E_3$ = {the aces of Hearts, Diamonds and Clubs are in different piles},
$E_4$ = {all four aces are in different piles}.

The desired probability is $P(E_1 E_2 E_3 E_4)$. We first apply the multiplication rule:

$$P(E_1 E_2 E_3 E_4) = P(E_1)\,P(E_2 \mid E_1)\,P(E_3 \mid E_1 E_2)\,P(E_4 \mid E_1 E_2 E_3).$$
$P(E_1) = 1$ because the ace of Hearts is always going to be in some pile. For $P(E_2 \mid E_1)$, we can instead calculate the probability that the aces of Hearts and Diamonds end up in the same pile. As the 12 remaining slots in the pile holding the ace of Hearts are equally likely to be filled by any of the other 51 cards,

$$P(E_2 \mid E_1) = 1 - \frac{12}{51} = \frac{39}{51}.$$

Given $E_1 E_2$, the ace of Clubs must avoid the 24 remaining slots in the two piles that already hold an ace, out of the 50 positions left:

$$P(E_3 \mid E_1 E_2) = 1 - \frac{24}{50} = \frac{26}{50},$$

and finally we have

$$P(E_4 \mid E_1 E_2 E_3) = 1 - \frac{36}{49} = \frac{13}{49}.$$

So $P(E_1 E_2 E_3 E_4) = \frac{39}{51} \cdot \frac{26}{50} \cdot \frac{13}{49} \approx 0.105$.
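To verify this value empirically, here is a rough Monte Carlo sketch in Python (the card encoding and trial count are arbitrary choices, not part of the original solution):

```python
import random

def one_ace_per_pile(trials=100_000, seed=0):
    """Estimate P(each of the four 13-card piles contains exactly one ace)."""
    rng = random.Random(seed)
    deck = list(range(52))   # encode cards as 0..51 and let 0-3 be the four aces
    hits = 0
    for _ in range(trials):
        rng.shuffle(deck)
        piles = [deck[13 * i:13 * (i + 1)] for i in range(4)]
        if all(sum(card < 4 for card in pile) == 1 for pile in piles):
            hits += 1
    return hits / trials

print(one_ace_per_pile())   # should be close to 0.105
```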

Independence of events

In general, the conditional probability $P(E \mid F) \neq P(E)$. We say $E$ and $F$ are independent when $P(E \mid F) = P(E)$. When $E$ is independent of $F$, we also have

$$P(EF) = P(E \mid F)\,P(F) = P(E)\,P(F).$$
Definition
Two events $E$ and $F$ are independent if any of the following holds:

  1. $P(EF) = P(E)\,P(F)$;
  2. $P(E \mid F) = P(E)$, provided $P(F) > 0$;
  3. $P(F \mid E) = P(F)$, provided $P(E) > 0$.

Otherwise we say $E$ and $F$ are dependent. Independence is denoted by $E \perp F$.
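As a concrete illustration (an example added here, not from the original post), take two fair dice and let $E$ = {the first roll is 3} and $F$ = {the sum is 7}; the snippet below checks condition 1 by counting outcomes:

```python
from fractions import Fraction

outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def prob(event):
    """Probability of an event under the uniform distribution on the 36 outcomes."""
    return Fraction(len(event), len(outcomes))

E = {o for o in outcomes if o[0] == 3}     # first roll is 3
F = {o for o in outcomes if sum(o) == 7}   # sum is 7

print(prob(E & F) == prob(E) * prob(F))    # True: E and F are independent
print(prob(E & F), prob(E) * prob(F))      # 1/36 1/36
```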
 
Proposition
If $E$ and $F$ are independent, then so are $E$ and $F^c$.
To prove this, we need to find $P(EF^c)$. Note that $E = EF \cup EF^c$, and $EF$ and $EF^c$ are mutually exclusive, so

$$P(E) = P(EF) + P(EF^c).$$

From this we have

$$P(EF^c) = P(E) - P(EF) = P(E) - P(E)\,P(F) = P(E)\bigl(1 - P(F)\bigr) = P(E)\,P(F^c).$$
We'll finish up this part with another example. Suppose we have a circuit with $n$ switches connected in parallel, and the switches function independently of each other. The probability that the $i$-th switch works is $p_i$ for $i = 1, \ldots, n$. What is the probability that the system functions?
Denote $A_i$ = {the $i$-th component functions}, so $P(A_i) = p_i$, and

$$P(\text{system functions}) = P\left(\bigcup_{i=1}^{n} A_i\right) = 1 - P\left(\bigcap_{i=1}^{n} A_i^c\right) = 1 - \prod_{i=1}^{n} (1 - p_i),$$

where the last equality uses the independence of the $A_i$.
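A minimal Python sketch of this formula (the three component probabilities below are made-up numbers for illustration):

```python
import math

def p_system_works(p):
    """P(at least one of the independent parallel components works) = 1 - prod(1 - p_i)."""
    return 1 - math.prod(1 - pi for pi in p)

# Three hypothetical switches with success probabilities 0.9, 0.8 and 0.7
print(p_system_works([0.9, 0.8, 0.7]))   # 1 - 0.1 * 0.2 * 0.3 = 0.994
```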

Law of total probability and Bayes’ rule

In the proof of the Proposition, we showed a trick to represent the probability of an event:

$$P(E) = P(EF) + P(EF^c),$$

because $E = EF \cup EF^c$ and $EF \cap EF^c = \varnothing$. Now let's consider a generalization of this. For some positive integer $k$, let the sets $B_1, B_2, \ldots, B_k$ be such that $S = \bigcup_{i=1}^{k} B_i$ and $B_i \cap B_j = \varnothing$ for $i \neq j$. Then the collection of sets $\{B_1, \ldots, B_k\}$ is called a partition of $S$, and $\{F, F^c\}$ is a partition of $S$ with $k = 2$.

Law of total probability

Theorem
Given $\{B_1, \ldots, B_k\}$ as a partition of $S$ such that $P(B_i) > 0$ for $i = 1, \ldots, k$, for any event $E$ we have

$$P(E) = \sum_{i=1}^{k} P(E \mid B_i)\,P(B_i).$$

The proof is given as follows:

$$P(E) = P(ES) = P\!\left(E \cap \bigcup_{i=1}^{k} B_i\right) = P\!\left(\bigcup_{i=1}^{k} E B_i\right) = \sum_{i=1}^{k} P(E B_i) = \sum_{i=1}^{k} P(E \mid B_i)\,P(B_i),$$

where the second-to-last equality holds because the events $EB_1, \ldots, EB_k$ are mutually exclusive.
In a driving behavior survey, 60% of drivers are sedan drivers, 30% are SUV drivers, and 10% drive other types of cars. Of the sedan, SUV and other drivers, 40%, 65% and 55% respectively have received a citation within the past 3 years. Supposing each driver owns exactly one type of car, what is the probability that a randomly selected driver has received a citation within the past 3 years?
Let $B_1$, $B_2$ and $B_3$ be the events that a random driver drives a sedan, SUV or other car, respectively, and let $C$ be the event that the driver received a citation within 3 years. We have

$$P(C) = P(C \mid B_1)\,P(B_1) + P(C \mid B_2)\,P(B_2) + P(C \mid B_3)\,P(B_3)$$

because $B_1$, $B_2$ and $B_3$ form a partition of $S$.
Therefore $P(C) = 0.4 \times 0.6 + 0.65 \times 0.3 + 0.55 \times 0.1 = 0.49$.
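The same computation in a few lines of Python, using the numbers from the survey:

```python
# Law of total probability: P(C) = sum over i of P(C | B_i) * P(B_i)
priors   = {"sedan": 0.60, "suv": 0.30, "other": 0.10}   # P(B_i)
citation = {"sedan": 0.40, "suv": 0.65, "other": 0.55}   # P(C | B_i)

p_c = sum(citation[k] * priors[k] for k in priors)
print(p_c)   # ≈ 0.49
```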

Bayes’ rule

Using the law of total probability, we can derive a simple but very useful result known as Bayes' rule. Assume that $\{B_1, \ldots, B_k\}$ is a partition of $S$ such that $P(B_i) > 0$ for $i = 1, \ldots, k$. Then for any event $E$ with $P(E) > 0$,

$$P(B_j \mid E) = \frac{P(E \mid B_j)\,P(B_j)}{\sum_{i=1}^{k} P(E \mid B_i)\,P(B_i)}, \quad j = 1, \ldots, k.$$

By the definition of conditional probability, $P(B_j \mid E) = \dfrac{P(B_j E)}{P(E)} = \dfrac{P(E \mid B_j)\,P(B_j)}{P(E)}$. By the law of total probability, $P(E) = \sum_{i=1}^{k} P(E \mid B_i)\,P(B_i)$. Substituting this into the denominator gives the result.
If we only have two events, another form of Bayes' rule is

$$P(F \mid E) = \frac{P(E \mid F)\,P(F)}{P(E \mid F)\,P(F) + P(E \mid F^c)\,P(F^c)}$$

if $P(E) > 0$ and $0 < P(F) < 1$.
With Bayes’ rule, we can solve some very unintuitive conditional probability problems such as the Monty Hall problem. Here we illustrate its power with another example.
A biomarker was developed to detect a certain kind of gene defect. When the test is applied to a person with this gene defect, it gives a positive result with probability 0.9. When it is applied to a person without the defect, there's a probability of 0.05 that it gives a false positive result. We know 1% of the total population has this defect. When we apply this test to a random person, what are the probabilities that
  1. the test result is negative,
  2. the person has the defect given the test result is positive, and
  3. the person doesn't have the defect given the test result is negative?
Let $E$ and $E^c$ be the events of positive and negative test results, and let $D$ and $D^c$ be the events of having and not having the gene defect. We want to find $P(E^c)$, $P(D \mid E)$ and $P(D^c \mid E^c)$, knowing that $P(E \mid D) = 0.9$, $P(E \mid D^c) = 0.05$ and $P(D) = 0.01$. By the law of total probability,

$$P(E^c) = 1 - P(E) = 1 - \bigl[P(E \mid D)\,P(D) + P(E \mid D^c)\,P(D^c)\bigr] = 1 - (0.9 \times 0.01 + 0.05 \times 0.99) = 0.9415.$$

By Bayes' rule,

$$P(D \mid E) = \frac{P(E \mid D)\,P(D)}{P(E)} = \frac{0.9 \times 0.01}{0.0585} \approx 0.154,$$

$$P(D^c \mid E^c) = \frac{P(E^c \mid D^c)\,P(D^c)}{P(E^c)} = \frac{0.95 \times 0.99}{0.9415} \approx 0.999.$$
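The three answers can be reproduced with a short Python sketch of the calculation above:

```python
# Bayes' rule for the biomarker example
p_defect = 0.01              # P(D): prevalence of the gene defect
p_pos_given_defect = 0.90    # P(E | D): true positive rate
p_pos_given_healthy = 0.05   # P(E | D^c): false positive rate

# Law of total probability for a positive result
p_pos = p_pos_given_defect * p_defect + p_pos_given_healthy * (1 - p_defect)

p_neg = 1 - p_pos                                           # (1) P(E^c)
p_defect_given_pos = p_pos_given_defect * p_defect / p_pos  # (2) P(D | E)
p_healthy_given_neg = (1 - p_pos_given_healthy) * (1 - p_defect) / p_neg  # (3) P(D^c | E^c)

print(p_neg, p_defect_given_pos, p_healthy_given_neg)   # ≈ 0.9415 0.1538 0.9989
```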

Next, we move on to discuss discrete random variables.