How did humans invent probability distributions?

Contents

- Preliminary remarks
- Bernoulli experiments
- The derivation of the binomial distribution
- The formula
- Examples
- Expectation and variance
- Visualization of the binomial distribution

Preliminary remarks

The Bernoulli experiment is a fundamental building block for a whole class of random experiments. If an experiment is a Bernoulli experiment, the binomial distribution lets us replace otherwise complicated, lengthy calculations with a single short formula.

Bernoulli experiments

A Bernoulli experiment has only two possible outcomes. One of them is usually called "success" or "hit"; it occurs with probability \(p\). The complementary event is called "failure" or "miss" and has probability \(1-p\), which many texts abbreviate as \(q = 1-p\). If the experiment is repeated, the success probability \(p\) remains constant; the individual repetitions of the Bernoulli experiment are therefore independent.

A die roll has six different outcomes. However, if we restrict ourselves to, say, the events \(A :=\) "a 6 is rolled" and \(A^c :=\) "no 6 is rolled", we obtain a Bernoulli experiment, because \(P(A) = \frac{1}{6} = p\) remains constant: the die "has no memory".
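To see this in action, here is a small simulation sketch in Python (our addition, not part of the original text): it repeats the Bernoulli experiment "a 6 is rolled" many times and prints the relative frequency of hits, which should settle near \(p = \frac{1}{6}\).

```python
import random

# Repeat the Bernoulli experiment "a 6 is rolled" (p = 1/6) many times.
# Each repetition is an independent success/failure trial with the same p.
p = 1 / 6
trials = [random.random() < p for _ in range(10_000)]
print(sum(trials) / len(trials))  # relative frequency, close to 1/6
```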

The derivation of the binomial distribution

Let us consider rolling a die four times and denote by \(X\) the number of sixes rolled. Then of course \(X \in \{0, \dots, 4\}\), and we could draw a (very large) tree diagram.

If we want to calculate \(P(X = 0)\), there is exactly one branch, and the multiplication rule gives
\begin{align*}
P(X = 0) = \left(\frac{5}{6}\right)^4.
\end{align*}

Next we want to calculate \(P(X = 1)\), and a look at the tree shows us four branches

with associated probabilities
\begin{align*}
P(X = 1) &= \frac{1}{6} \cdot \left(\frac{5}{6}\right)^3 + \frac{5}{6} \cdot \frac{1}{6} \cdot \left(\frac{5}{6}\right)^2 \\
&\quad + \left(\frac{5}{6}\right)^2 \cdot \frac{1}{6} \cdot \frac{5}{6} + \left(\frac{5}{6}\right)^3 \cdot \frac{1}{6}.
\end{align*}
What do we notice? All four summands consist of the same four factors: once \(\frac{1}{6} = p\) for the one 6 and three times \(\frac{5}{6} = 1-p\) for the three non-sixes. We can therefore write
\begin{align*}
P(X = 1) = 4 \cdot \frac{1}{6} \cdot \left(\frac{5}{6}\right)^3.
\end{align*}

The same consideration for \(P(X = 2)\) leads us to the realization that every summand consists of two factors \(\frac{1}{6}\) and two factors \(\frac{5}{6}\). The question is now: are there again four branches, or more? If we count briefly, we arrive at six branches. Can we find a formula for the number of branches? The answer is of course yes, and we use combinatorics for this. How many anagrams are there of the "letters" \(6, 6, 6^c, 6^c\) (twice 6 and twice not 6)? We already solved this problem in the chapter on combinatorics: there are \(\binom{4}{2}\) possibilities. From this we conclude that
\begin{align*}
P(X = 2) = \binom{4}{2} \left(\frac{1}{6}\right)^2 \cdot \left(\frac{5}{6}\right)^2
\end{align*}
holds.

For \(P(X = 3)\) we infer analogously that there are \(\binom{4}{3} = 4\) branches, each consisting of three factors \(\frac{1}{6}\) and one factor \(\frac{5}{6}\). It follows that
\begin{align*}
P(X = 3) = \binom{4}{3} \left(\frac{1}{6}\right)^3 \cdot \frac{5}{6}.
\end{align*}
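The branch counting can also be checked by machine. Here is a short Python sketch (our addition, not part of the original text) that walks all \(2^4\) branches of the tree, groups them by the number of sixes \(k\), and compares the count with \(\binom{4}{k}\):

```python
from fractions import Fraction
from itertools import product
from math import comb

p = Fraction(1, 6)  # probability of a six on one roll
q = 1 - p           # probability of no six

# Each of the 2^4 branches is a hit/miss sequence over four rolls.
for k in range(5):
    branches = [b for b in product((True, False), repeat=4) if sum(b) == k]
    total = sum(p**sum(b) * q**(4 - sum(b)) for b in branches)
    print(k, len(branches), total, len(branches) == comb(4, k))
```

For \(k = 2\) this prints six branches with total probability \(\binom{4}{2} (\frac{1}{6})^2 (\frac{5}{6})^2\), exactly as derived above.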

The binomial distribution can therefore be applied whenever a tree diagram has two outcomes per level and a constant \(p\); many standard examples (coin toss, die roll) can thus be handled with the binomial distribution instead of a tree diagram. The distribution is so important and so common that it has earned a place of its own in probability theory. It is a discrete probability distribution.

As an urn model, the binomial distribution corresponds to repeated drawing from an urn without regard to order and with replacement of the balls (so that \(p\) remains constant). It is also important that there are only two outcomes, "hits" and "failures". Such an experiment is called dichotomous.

The formula

If one knows the number of trials \(n\) and the hit probability \(p\) of a binomially distributed experiment, the probability of getting exactly \(k\) hits is given by the formula
\begin{align*}
P(X = k) = \binom{n}{k} \cdot p^k \cdot (1-p)^{n-k}.
\end{align*}
The formula is often abbreviated using \(q = 1-p\), but then the structure behind it is less apparent.
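The formula translates directly into a few lines of Python. The following sketch is our addition (the function name binom_pmf is our own) and uses only the standard library:

```python
from math import comb

def binom_pmf(n: int, k: int, p: float) -> float:
    """P(X = k) for X ~ Bin(n, p): exactly k hits in n Bernoulli trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Four die rolls, X = number of sixes (the derivation above):
print(binom_pmf(4, 2, 1/6))  # = binom(4,2) * (1/6)^2 * (5/6)^2 ≈ 0.1157
```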

Examples

Multiple choice: A student has not studied for an exam. There are ten questions, each with four possible answers, exactly one of which is correct. He ticks answers completely at random. What is the probability that he answers at least 80 percent of the questions correctly?

Solution

The student must therefore answer at least \(10 \cdot 0.8 = 8\) of the \(n = 10\) questions correctly. He has four equally likely answer options per question, so by Laplace the probability of ticking correctly is \(\frac{1}{4} = p\). It follows that
\begin{align*}
P(X \geq 8) &= P(X = 8) + P(X = 9) + P(X = 10) \\
&= \binom{10}{8} \cdot \left(\frac{1}{4}\right)^8 \cdot \left(\frac{3}{4}\right)^2 + \binom{10}{9} \cdot \left(\frac{1}{4}\right)^9 \cdot \left(\frac{3}{4}\right)^1 + \binom{10}{10} \cdot \left(\frac{1}{4}\right)^{10} \cdot \left(\frac{3}{4}\right)^0 \\
&\approx 0.0004.
\end{align*}


So the student had better study after all.
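The numerical value can be checked quickly in Python (our sketch, reusing the formula from above):

```python
from math import comb

n, p = 10, 1/4
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(8, n + 1))
print(round(prob, 6))  # ≈ 0.000416, i.e. about 0.0004
```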

The counter-event: the setting is the same as before. The question now is: what is the probability of answering at least two questions correctly?

Solution

Analogously, \(n = 10\) and \(p = \frac{1}{4}\). Unfortunately,
\begin{align*}
P(X \geq 2) &= P(X = 2) + P(X = 3) + \cdots + P(X = 10)
\end{align*}
consists of a very large number of cases (nine). Even though we could compute each of these cases with our formula, it would be laborious. We are faster with the counter-event:
\begin{align*}
P(X \geq 2) &= 1 - P(X < 2) = 1 - P(X \leq 1) \\
&= 1 - \bigl(P(X = 0) + P(X = 1)\bigr) \\
&= 1 - \binom{10}{0} \left(\frac{1}{4}\right)^0 \cdot \left(\frac{3}{4}\right)^{10} - \binom{10}{1} \left(\frac{1}{4}\right)^1 \cdot \left(\frac{3}{4}\right)^9 \\
&\approx 0.756.
\end{align*}
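The complement trick is just as easy to verify in code (again our sketch):

```python
from math import comb

n, p = 10, 1/4
prob = 1 - sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(2))
print(round(prob, 3))  # 0.756
```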

The three-at-least example: This type of exercise has earned its nickname because the phrase "at least" typically occurs three times in it. Ultimately, however, it is an algebra problem in "binomial guise": how many questions must be asked at least so that our student answers at least one question correctly with a probability of at least 95 percent?

Solution

Translated into the language of mathematics, the question asks for which \(n\)
\begin{align*}
P(X \geq 1) \geq 0.95
\end{align*}
holds. We rearrange a little and obtain
\begin{align*}
P(X \geq 1) &\geq 0.95 \\
1 - P(X = 0) &\geq 0.95 \\
0.05 &\geq P(X = 0) \\
0.05 &\geq \binom{n}{0} \left(\frac{1}{4}\right)^0 \cdot \left(\frac{3}{4}\right)^n.
\end{align*}


But now \(\left(\frac{1}{4}\right)^0 = \binom{n}{0} = 1\), and the inequality
\begin{align*}
0.05 \geq \left(\frac{3}{4}\right)^n
\end{align*}
remains.
We take the logarithm and solve the inequality for \(n\). Since \(\log\left(\frac{3}{4}\right) < 0\), dividing by it flips the inequality sign:
\begin{align*}
\log(0.05) &\geq n \cdot \log\left(\frac{3}{4}\right) \\
\frac{\log(0.05)}{\log\left(\frac{3}{4}\right)} &\leq n \\
10.4133 &\leq n.
\end{align*}


So the exam must contain at least eleven questions. Of course, the problem can also be phrased more subtly, detached from the three occurrences of "at least".
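The same bound can be computed directly (our sketch; math.ceil rounds the non-integer bound up to the next whole number of questions):

```python
from math import ceil, log

p = 1/4
# Smallest n with P(X >= 1) = 1 - (3/4)^n >= 0.95:
n = ceil(log(0.05) / log(1 - p))
print(n)  # 11
```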

Expectation and variance

The expected value \(E(X) = \mu\) and the variance \(\mathrm{Var}(X) = \sigma^2\) of a binomial distribution can be computed easily via
\begin{align*}
\mu = n \cdot p, \qquad \sigma^2 = n \cdot p \cdot (1-p).
\end{align*}
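For the multiple-choice exam above, these formulas give (a quick check in Python, our addition):

```python
n, p = 10, 1/4
mu = n * p                  # expected value: 2.5 correct answers
sigma_sq = n * p * (1 - p)  # variance: 1.875
print(mu, sigma_sq)
```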

Visualization of the binomial distribution

The binomial distribution is the perfect example for visualizing a discrete random variable with a histogram. The highest bar sits at (or right next to) the expected value, and we can also see that all the diagrams have a similar shape.

Open GeoGebra file
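If the GeoGebra file is not available, a similar picture can be produced with a few lines of Python (our sketch, using matplotlib):

```python
from math import comb

import matplotlib.pyplot as plt

# Bar chart of Bin(10, 1/4): the highest bar sits near mu = n*p = 2.5.
n, p = 10, 1/4
ks = list(range(n + 1))
probs = [comb(n, k) * p**k * (1 - p)**(n - k) for k in ks]

plt.bar(ks, probs)
plt.xlabel("number of hits k")
plt.ylabel("P(X = k)")
plt.title("Binomial distribution, n = 10, p = 1/4")
plt.show()
```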