Please spare me a coin, or two.
Can you count the times you have read about, or carried out, an experiment demonstrating random events: multiple flips of a coin, tosses of a die, or the probability of drawing a certain card from a deck of 52?
This essay adds a few notes on flipping first one coin, then two. That is not a major leap in complexity, but I hope it bolsters my case that "1/2", or 0.50, is an underlying number that I will soon try to show is very important in our natural world. There is a certain beauty to this simple fraction. More on that soon.
When tossing, or flipping, a single "fair" coin many times, we anticipate that 0.50 (50%) of the outcomes will be Heads (H) and 50% Tails (T), barring rare events such as the coin landing on its edge. Each toss is considered a Bernoulli trial, after the 17th-century Swiss mathematician Jacob Bernoulli, for whom the distribution of two binary, independent outcomes is named. Whenever there are only two possible results, whether written as H or T, 0 or 1, success or failure, and so forth, each toss is a Bernoulli trial.
Here's an example of the two outcomes using the Canadian one-dollar coin, lovingly referred to as the "Loonie" after the common loon, the image you see on the right (the tails side); its call is one of the most wonderful sounds to carry across a Canadian lake. Heads, at the left, is of course Her Majesty Queen Elizabeth II.
Figure 1. The two sides of the Canadian one-dollar coin, AKA the "Loonie". Note the loon to the right of QE II.
What might the results look like if we ran three sets of experiments, as shown in Figure 2? I have simulated and graphed the random outcomes of three experiments, each using only one coin per Bernoulli trial: the first with N=10 tosses, the next with N=100 tosses, and the last with N=1000 tosses. Both the accuracy and the precision of our experiments should improve with increasing N: the average outcome converges toward 0.5 and deviations from expectation shrink. That appears to be the case: 40:60 when N=10, but 51.2:48.8 when N=100 or 1000. However, recall that these are only three realizations (N=10, 100, 1000) of what we might observe when p=0.50, i.e. Heads=0.50 and Tails=1−0.50=0.50. (A simulation sketch follows Figure 2.)
Figure 2. Results (Heads or Tails) after tossing a single coin 10, 100, or 1000 times.
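For readers who would like to reproduce something like Figure 2, here is a minimal simulation sketch in Python (assuming `numpy` is available; because the tosses are random, the exact counts will differ from run to run and from the ones I plotted):

```python
import numpy as np

rng = np.random.default_rng()

# Simulate N independent Bernoulli trials (a fair coin, p = 0.5) for three sample sizes.
for N in (10, 100, 1000):
    tosses = rng.binomial(n=1, p=0.5, size=N)  # 1 = Heads, 0 = Tails
    heads = int(tosses.sum())
    tails = N - heads
    print(f"N={N:4d}  Heads={heads:4d} ({heads / N:.1%})  Tails={tails:4d} ({tails / N:.1%})")
```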
Now consider flipping two coins simultaneously. A quick review reminds us that the two coins must be independent, i.e. the outcome of one coin does not influence, or interfere with, the outcome of the other. Next, as with one coin, we must state the chance, or probability, of each outcome, e.g. 50:50, 1:1, ½ and ½, 0.50 and 0.50.
A bit of easy multiplication takes us to joint occurrences: we multiply the two independent probabilities to arrive at the joint probability of each combined outcome. These are displayed in Table 3 (a small code sketch follows the table).
Table 3. Probabilities of outcomes when tossing two independent coins.
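To see where the joint probabilities in Table 3 come from, here is a tiny sketch (plain Python, no libraries needed) that simply multiplies the per-coin probabilities for every combination of faces:

```python
from itertools import product

p = {"H": 0.5, "T": 0.5}  # probability of each face for one fair coin

# The joint probability for two independent coins is the product of the individual probabilities.
for face1, face2 in product("HT", repeat=2):
    print(f"{face1}{face2}: {p[face1] * p[face2]:.2f}")

# Output: HH: 0.25, HT: 0.25, TH: 0.25, TT: 0.25
```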
We can see in Table 3 that when we toss two coins, the number of possible discrete outcomes increases to 4, i.e. (2)×(2) = 4: HH, HT, TH, TT. Because the underlying distribution should now follow a binomial distribution, with two independent events each having two possible alternatives, we can expand the binomial to the 2nd power and thereby calculate the expected values. Those match the joint probabilities we see above in Table 3:
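In standard notation, with p the probability of a head and q = 1 − p the probability of a tail, the second-power expansion is:

(p + q)² = p² + 2pq + q² = (½)² + 2(½)(½) + (½)² = 0.25 + 0.50 + 0.25 = 1

where p² corresponds to HH, 2pq to HT + TH, and q² to TT.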
The exponents (above) can be increased to larger integers, with the coefficients altered following the rules laid out in Pascal's Triangle.
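As a quick illustration of those coefficients, here is a short Python sketch that prints the first few rows of Pascal's Triangle via binomial coefficients; the row for exponent 2 gives the familiar 1, 2, 1 used above:

```python
from math import comb

# Row n of Pascal's Triangle holds the binomial coefficients C(n, k),
# i.e. the coefficients in the expansion of (p + q)^n.
for n in range(5):
    row = [comb(n, k) for k in range(n + 1)]
    print(f"(p + q)^{n}: {row}")

# The n = 2 row prints [1, 2, 1], matching p^2 + 2pq + q^2.
```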
Thus, the expected probabilities of the four outcomes with two coins become 0.25 + 0.25 + 0.25 + 0.25 = 1; collecting terms, we get all heads (HH) = 0.25, a head and a tail (HT + TH) = 0.50, and all tails (TT) = 0.25. The probabilities must always sum to 1.0. Recall that in this sort of exercise the expectations are based upon performing the tossing experiments many times; refer back to Figure 2.
We now have information about what the “expected values” are. What might we observe doing N trials? Will the expectations be met?
Let's simulate an experiment: toss two independent coins 100 times each, giving 100 paired outcomes (N=200 individual coin flips). From the Table 3 probabilities, the expected counts are HH=25, HT=25, TH=25 and TT=25. (A simulation sketch follows Table 4.)
Table 4. Chi-square analysis after randomly flipping two independent coins, n=100 tosses per coin (N=200). A chi-square (χ²) of 4.4002 with 3 df, P ≈ 0.25, leads us to conclude that the two coins did not deviate significantly from random expectations; our "null" hypothesis cannot be rejected at a reasonable level of certainty. * Observed counts = N Ob, Expected = N Ex
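To mimic the experiment behind Table 4, here is a hedged sketch (assuming `numpy` and `scipy` are installed; the simulated observed counts are illustrative and will not match the single realization reported in Table 4). The expected counts follow the 0.25 probabilities from Table 3:

```python
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng()

# Toss two independent fair coins 100 times each and record the 100 paired outcomes.
n_pairs = 100
coin1 = rng.integers(0, 2, size=n_pairs)  # 1 = Heads, 0 = Tails
coin2 = rng.integers(0, 2, size=n_pairs)

labels = ["HH", "HT", "TH", "TT"]
observed = [
    int(np.sum((coin1 == 1) & (coin2 == 1))),  # HH
    int(np.sum((coin1 == 1) & (coin2 == 0))),  # HT
    int(np.sum((coin1 == 0) & (coin2 == 1))),  # TH
    int(np.sum((coin1 == 0) & (coin2 == 0))),  # TT
]
expected = [0.25 * n_pairs] * 4  # 25 of each category, from the Table 3 probabilities

# Chi-square goodness-of-fit test with 4 categories, so df = 4 - 1 = 3.
chi2, p_value = chisquare(f_obs=observed, f_exp=expected)
for lab, obs, exp in zip(labels, observed, expected):
    print(f"{lab}: observed = {obs}, expected = {exp:.0f}")
print(f"chi-square = {chi2:.3f}, p = {p_value:.3f}")
```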
Hopefully this simple exercise has refreshed some memories about how very straightforward experiments can produce random outcomes that tell us something about our daily experience. If observed outcomes deviate significantly from what we expected, we have important new knowledge: what caused the aberrant result? We now have the beginnings of scientific exploration. Now, for a quick jog to your memory: do the genders of children in a family always arrive in 1:1 ratios? No, not always in real-life families, but if we tabulate over many births we should observe approximately ½ females and ½ males. Might that be no different from many tosses of a coin, e.g. Figure 2? What does it mean if sex ratios are not equal? We're coming to that topic in the next post, which will explore sex ratios in more depth. I hope that you will share my enthusiasm about the Beauty of 1/2.
Originally published at https://medium.com on November 30, 2021.