Bayes rule exercises: part 1

1. Green and Blue Cabs

A cab was involved in a hit-and-run accident at night. Two cab companies, the Green and the Blue, operate in the city. You are given the following data:

  1. 85% of the cabs in the city are Green and 15% are Blue.
  2. A witness identified the cab as Blue. The court tested the reliability of the witness under the circumstances that existed on the night of the accident and concluded that the witness correctly identified each one of the two colors 80% of the time and failed 20% of the time.

What is the probability that the cab involved in the accident was Blue rather than Green?

Most of us will answer 80%, but that is incorrect. Let us solve the problem step by step.

85% of the Cabs are Green and 15% are Blue

If there are 100 cabs then 85 will be Green and 15 will be Blue.


The witness is correct 80% of the time

The witness identified the cab as Blue. Since the witness is correct 80% of the time, a genuinely Blue cab will be reported as Blue 80% of the time, and a genuinely Green cab will be misreported as Blue 20% of the time.


It is easier for the brain to deal with absolute numbers, so to solve this problem I assume there are 100 cabs: 85 Green and 15 Blue. The witness reports a Blue cab as Blue 80% of the time.

Total number of Blue cabs correctly identified as Blue = 15 * 0.8 = 12

The witness is incorrect 20% of the time, reporting a Green cab as Blue.

Total number of Green cabs incorrectly identified as Blue = 85 * 0.2 = 17

The total number of cabs the witness would identify as Blue is 12 + 17 = 29.

Hence the probability that the cab was actually Blue, given that the witness identified it as Blue, is 12/29 ≈ 41.4%.

Source: https://janav.wordpress.com/2013/06/06/bayes-theorem/

2. Cancer Test

In a given population, 1% of the people have cancer. Tests can be taken to identify cancer. Here are the details about the test:

  1. The test will be positive 90% of the time if someone has cancer.
  2. The test will be negative 90% of the time if someone does not have cancer.

Suppose you take the cancer test and it comes out positive. What is the probability that you have cancer? Once again, the most common answer is 90%, but it is incorrect. Let us solve the problem step by step.

Finding all the Inputs

To simplify the problem assume there are 1,000 people.

  • P(Cancer) = 1%
  • P(No Cancer) = 99% (100% – 1%)
  • P(Positive Test | Cancer) = 90%
  • P(Negative Test | Cancer) = 10% (100% – 90%)
  • P(Negative Test | No Cancer) = 90%
  • P(Positive Test | No Cancer) = 10% (100% – 90%)

Positive Test and Cancer

1% of the people have cancer = 1000 * 0.01 = 10
90% of them will test positive = 10 * 0.9 = 9    - A

Positive Test and No Cancer

99% of the people do not have cancer = 1000 * 0.99 = 990
10% of them will test positive = 990 * 0.1 = 99  - B

Total Positive Tests

Adding A and B from above = 99 + 9 = 108 - C

Probability of Cancer with positive test

From C Total Positive Tests = 108
From A positive test and cancer = 9
Probability of Cancer with positive test = 9/108 = 8.33%
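The same computation in Python (a minimal sketch using the population of 1,000 from the text; the variable names are my own):

```python
# Cancer test: P(cancer | positive test) with a population of 1,000.
population = 1000
p_cancer = 0.01
sensitivity = 0.9   # P(positive | cancer)
specificity = 0.9   # P(negative | no cancer)

true_pos = population * p_cancer * sensitivity               # A: 10 * 0.9 = 9
false_pos = population * (1 - p_cancer) * (1 - specificity)  # B: 990 * 0.1 = 99

p_cancer_given_pos = true_pos / (true_pos + false_pos)       # 9 / 108
print(round(p_cancer_given_pos, 4))  # 0.0833
```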

Why is the probability of cancer given a positive test only 8.33%?

Once again the base rate is at work. In our example, out of the 108 people who test positive, 99 do not have cancer. Because so few people have cancer to begin with, the test flags many more healthy people (99, from B above) than people with cancer (9, from A). Thus even though the test came out positive, the probability of having cancer is only 8.33%.

Source: https://janav.wordpress.com/2013/06/06/bayes-theorem/

3. The cookie problem

Suppose there are two bowls of cookies. Bowl 1 contains 30 vanilla cookies and 10 chocolate cookies. Bowl 2 contains 20 of each.

Now suppose you choose one of the bowls at random and, without looking, select a cookie at random. The cookie is vanilla. What is the probability that it came from Bowl 1?

This is a conditional probability; we want p(Bowl 1 | vanilla), but it is not obvious how to compute it. If I asked a different question—the probability of a vanilla cookie given Bowl 1—it would be easy:

p(vanilla | Bowl 1) = 3/4
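Although the text stops here, the full Bayes computation is short. As a sketch (not from the source): p(vanilla | Bowl 2) = 20/40 = 1/2, and with equal priors on the bowls,

```python
# Cookie problem: P(Bowl 1 | vanilla) via Bayes's theorem.
p_bowl1, p_bowl2 = 0.5, 0.5     # the bowl is chosen at random
p_vanilla_given_b1 = 30 / 40    # 3/4
p_vanilla_given_b2 = 20 / 40    # 1/2

numerator = p_bowl1 * p_vanilla_given_b1
p_b1_given_vanilla = numerator / (numerator + p_bowl2 * p_vanilla_given_b2)
print(p_b1_given_vanilla)  # 0.6
```

so a vanilla cookie makes Bowl 1 somewhat more likely than Bowl 2.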

Source: http://www.greenteapress.com/thinkbayes/html/thinkbayes002.html

4. The M&M problem

M&M’s are small candy-coated chocolates that come in a variety of colors. Mars, Inc., which makes M&M’s, changes the mixture of colors from time to time.

In 1995, they introduced blue M&M’s. Before then, the color mix in a bag of plain M&M’s was 30% Brown, 20% Yellow, 20% Red, 10% Green, 10% Orange, 10% Tan. Afterward it was 24% Blue, 20% Green, 16% Orange, 14% Yellow, 13% Red, 13% Brown.

Suppose a friend of mine has two bags of M&M’s, and he tells me that one is from 1994 and one from 1996. He won’t tell me which is which, but he gives me one M&M from each bag. One is yellow and one is green. What is the probability that the yellow one came from the 1994 bag?

This problem is similar to the cookie problem, with the twist that I draw one sample from each bowl/bag. This problem also gives me a chance to demonstrate the table method, which is useful for solving problems like this on paper. In the next chapter we will solve them computationally.

The first step is to enumerate the hypotheses. The bag the yellow M&M came from I’ll call Bag 1; I’ll call the other Bag 2. So the hypotheses are:

  • A: Bag 1 is from 1994, which implies that Bag 2 is from 1996.
  • B: Bag 1 is from 1996 and Bag 2 from 1994.

Now we construct a table with a row for each hypothesis and a column for each term in Bayes’s theorem:

        Prior   Likelihood   p(H) p(D|H)   Posterior
        p(H)    p(D|H)                     p(H|D)
    A   1/2     (20)(20)     200           20/27
    B   1/2     (14)(10)     70            7/27

The first column has the priors. Based on the statement of the problem, it is reasonable to choose p(A) = p(B) = 1/2.

The second column has the likelihoods, which follow from the information in the problem. For example, if A is true, the yellow M&M came from the 1994 bag with probability 20%, and the green came from the 1996 bag with probability 20%. If B is true, the yellow M&M came from the 1996 bag with probability 14%, and the green came from the 1994 bag with probability 10%. Because the selections are independent, we get the conjoint probability by multiplying.

The third column is just the product of the previous two. The sum of this column, 270, is the normalizing constant. To get the last column, which contains the posteriors, we divide the third column by the normalizing constant.

That’s it. Simple, right?

Well, you might be bothered by one detail. I write p(D|H) in terms of percentages, not probabilities, which means it is off by a factor of 10,000. But that cancels out when we divide through by the normalizing constant, so it doesn’t affect the result.

When the set of hypotheses is mutually exclusive and collectively exhaustive, you can multiply the likelihoods by any factor, if it is convenient, as long as you apply the same factor to the entire column.
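The table method can be reproduced in a few lines of Python. Using exact fractions makes it easy to see that the percentage scaling cancels out, just as described above (a minimal sketch; the structure is my own):

```python
from fractions import Fraction

# Table method for the M&M problem. Likelihoods are products of
# percentages; the factor of 10,000 cancels in the normalization.
priors = {"A": Fraction(1, 2), "B": Fraction(1, 2)}
likelihoods = {"A": 20 * 20,   # Bag 1 is 1994: yellow 20%, green (1996 bag) 20%
               "B": 14 * 10}   # Bag 1 is 1996: yellow 14%, green (1994 bag) 10%

unnorm = {h: priors[h] * likelihoods[h] for h in priors}  # third column
total = sum(unnorm.values())                              # normalizing constant, 270
posterior = {h: u / total for h, u in unnorm.items()}
print(posterior)  # {'A': Fraction(20, 27), 'B': Fraction(7, 27)}
```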

Source: http://www.greenteapress.com/thinkbayes/html/thinkbayes002.html

5. The Monty Hall problem

The Monty Hall problem might be the most contentious question in the history of probability. The scenario is simple, but the correct answer is so counterintuitive that many people just can’t accept it, and many smart people have embarrassed themselves not just by getting it wrong but by arguing the wrong side, aggressively, in public.

Monty Hall was the original host of the game show Let’s Make a Deal. The Monty Hall problem is based on one of the regular games on the show. If you are on the show, here’s what happens:

  • Monty shows you three closed doors and tells you that there is a prize behind each door: one prize is a car, the other two are less valuable prizes like peanut butter and fake finger nails. The prizes are arranged at random.
  • The object of the game is to guess which door has the car. If you guess right, you get to keep the car.
  • You pick a door, which we will call Door A. We’ll call the other doors B and C.
  • Before opening the door you chose, Monty increases the suspense by opening either Door B or C, whichever does not have the car. (If the car is actually behind Door A, Monty can safely open B or C, so he chooses one at random.)
  • Then Monty offers you the option to stick with your original choice or switch to the one remaining unopened door.

The question is, should you “stick” or “switch” or does it make no difference?

Most people have the strong intuition that it makes no difference. There are two doors left, they reason, so the chance that the car is behind Door A is 50%.

But that is wrong. In fact, the chance of winning if you stick with Door A is only 1/3; if you switch, your chances are 2/3.

By applying Bayes’s theorem, we can break this problem into simple pieces, and maybe convince ourselves that the correct answer is, in fact, correct.

To start, we should make a careful statement of the data. In this case D consists of two parts: Monty chooses Door B and there is no car there.

Next we define three hypotheses: A, B, and C represent the hypothesis that the car is behind Door A, Door B, or Door C. Again, let’s apply the table method:

        Prior   Likelihood   p(H) p(D|H)   Posterior
        p(H)    p(D|H)                     p(H|D)
    A   1/3     1/2          1/6           1/3
    B   1/3     0            0             0
    C   1/3     1            1/3           2/3

Filling in the priors is easy because we are told that the prizes are arranged at random, which suggests that the car is equally likely to be behind any door.

Figuring out the likelihoods takes some thought, but with reasonable care we can be confident that we have it right:

  • If the car is actually behind A, Monty could safely open Doors B or C. So the probability that he chooses B is 1/2. And since the car is actually behind A, the probability that the car is not behind B is 1.
  • If the car is actually behind B, Monty has to open door C, so the probability that he opens door B is 0.
  • Finally, if the car is behind Door C, Monty opens B with probability 1 and finds no car there with probability 1.

Now the hard part is over; the rest is just arithmetic. The sum of the third column is 1/2. Dividing through yields p(A|D) = 1/3 and p(C|D) = 2/3. So you are better off switching.
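The table arithmetic can be checked with exact fractions (a minimal sketch, not from the source):

```python
from fractions import Fraction

# Monty Hall via the table method. D = "Monty opens Door B; no car there".
priors = {h: Fraction(1, 3) for h in "ABC"}
likelihoods = {"A": Fraction(1, 2),  # Monty picks B or C at random
               "B": Fraction(0),     # Monty never opens the car's door
               "C": Fraction(1)}     # Monty must open B

unnorm = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnorm.values())  # 1/2
posterior = {h: u / total for h, u in unnorm.items()}
print(posterior)  # {'A': Fraction(1, 3), 'B': Fraction(0, 1), 'C': Fraction(2, 3)}
```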

There are many variations of the Monty Hall problem. One of the strengths of the Bayesian approach is that it generalizes to handle these variations.

For example, suppose that Monty always chooses B if he can, and only chooses C if he has to (because the car is behind B). In that case the revised table is:

        Prior   Likelihood   p(H) p(D|H)   Posterior
        p(H)    p(D|H)                     p(H|D)
    A   1/3     1            1/3           1/2
    B   1/3     0            0             0
    C   1/3     1            1/3           1/2

The only change is p(D|A). If the car is behind A, Monty can choose to open B or C. But in this variation he always chooses B, so p(D|A) = 1.

As a result, the likelihoods are the same for A and C, and the posteriors are the same: p(A|D) = p(C|D) = 1/2. In this case, the fact that Monty chose B reveals no information about the location of the car, so it doesn’t matter whether the contestant sticks or switches.
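Changing a single likelihood in the sketch confirms this (again a minimal sketch, not from the source):

```python
from fractions import Fraction

# Variant: Monty always opens B when he can, so P(D | A) = 1.
likelihoods = {"A": Fraction(1), "B": Fraction(0), "C": Fraction(1)}
unnorm = {h: Fraction(1, 3) * likelihoods[h] for h in likelihoods}
total = sum(unnorm.values())
posterior = {h: u / total for h, u in unnorm.items()}
print(posterior)  # {'A': Fraction(1, 2), 'B': Fraction(0, 1), 'C': Fraction(1, 2)}
```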

On the other hand, if he had opened C, we would know p(B|D) = 1.

I included the Monty Hall problem in this chapter because I think it is fun, and because Bayes’s theorem makes the complexity of the problem a little more manageable. But it is not a typical use of Bayes’s theorem, so if you found it confusing, don’t worry!

Source: http://www.greenteapress.com/thinkbayes/html/thinkbayes002.html
