
Ashutosh Kumar

Posted on • Originally published at hackernoon.com

You Could Be Wrong About Probability

[Cover image: two dice]

Probability has always fascinated me. It forms the hidden backbone of Machine Learning and Artificial Intelligence. I had the opportunity to study it in school and college, but it wasn't until I took courses on Bayesian Statistics that I realized how incomplete my understanding of it was.

You might have studied probability in school or college and come across the question, "What's the probability of getting heads when tossing a coin?" If your answer is 1/2, think again. This is where it gets interesting.

Mathematics is generally viewed as "consistent": we assume a problem always has the same solution no matter how we solve it. That holds almost everywhere except in probability. Probability stands as an exception in that it has three different definitions, or frameworks, and approaching the same problem with these frameworks can yield different (and equally valid) answers.

To illustrate this, let's consider the following problem and solve it using all three frameworks of probability. One thing common across all of them is that the total probability of all the outcomes of an experiment is always 1.

"My friend Sovit gave me a coin. He didn't tell me if the coin is fair or not. What's the probability of getting heads on this coin?"

Classical Framework

It's the simplest framework in probability. It's also the easiest to understand.

The classical framework says that "Equally likely outcomes have equal probability".

In the above problem, we don't know whether the coin is fair, so we cannot say that getting heads is as likely as getting tails. Hence, we cannot solve this problem using the classical framework.

But to showcase the usage of this framework, let's assume that the coin is fair. It means that getting heads is equally likely as getting tails. Since these are the only two possible outcomes and the total probability is 1, the probability of getting heads is 1/2.
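The classical calculation for a fair coin fits in a couple of lines of Python. This is a minimal sketch; the outcome labels are just illustrative:

```python
# Classical framework: with equally likely outcomes, an event's
# probability is (favorable outcomes) / (total outcomes).
outcomes = ["heads", "tails"]   # a fair coin: both equally likely
favorable = ["heads"]

p_heads = len(favorable) / len(outcomes)
print(p_heads)  # 0.5
```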

The classical framework might look rudimentary, but it's also the most abused one. Arguments like "Either there's life on Mars or there isn't, so the probability of life on Mars is 1/2" are wrong, because the classical framework only works when the outcomes are equally likely. Here, the existence and non-existence of life on Mars are not equally likely.

Frequentist Framework

It's one of the most used frameworks in probability. If you have ever solved a probability problem, you have likely used the frequentist framework to do so.

The frequentist framework says that to compute the probability of an event, we conduct an experiment and observe the outcome, repeating the experiment an infinite number of times. The probability of the event E is then P(E) = Count(occurrences of E) / Count(trials).

In practice, we cannot conduct an experiment an infinite number of times, so we do it a finitely large number of times. For our problem, let's toss the coin 10 times and assume we got 6 heads and 4 tails. The estimated probability of getting heads is then 6/10 = 0.6.
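This is easy to simulate. The sketch below uses an assumed true bias of 0.6 purely to generate tosses (we wouldn't know this in reality), and shows the frequentist estimate settling down as the number of trials grows:

```python
import random

random.seed(42)  # for reproducibility

def frequentist_estimate(n_trials, p_true=0.6):
    """Estimate P(heads) by repeating the toss n_trials times.

    p_true is an assumed (unknown-to-us) bias of the coin,
    used here only to simulate the tosses.
    """
    heads = sum(random.random() < p_true for _ in range(n_trials))
    return heads / n_trials

# With few trials the estimate is noisy; with many, it converges
# toward the coin's true bias.
print(frequentist_estimate(10))
print(frequentist_estimate(100_000))
```

With only 10 tosses the estimate can easily be 0.4 or 0.8; with 100,000 it lands very close to 0.6, which is exactly the "finitely large number of times" compromise described above.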

The frequentist framework also has limitations. Consider finding the probability of rain tomorrow. By definition, we would need an infinite number of parallel universes, observe tomorrow in each of them, and count the ones where it rains. That's impossible. Besides, why compute the probability of rain tomorrow if we could simply observe tomorrow?

Bayesian Framework

It's perhaps the most intuitive framework to state, but also the most difficult to work with.

The Bayesian framework says that the probability of an event is what you believe it is: it's about your personal perspective. You are watching cricket and Sachin Tendulkar is on 94. You exclaim that there's a 90% chance he will hit a century. That's your Bayesian probability of the event.

So far, the above two frameworks have ignored a key piece of information in the problem: "My friend Sovit gave me the coin." Sovit is my friend and I know him; he has given me coins in the past. Let's say those coins turned up heads with probability 0.4. This is called "prior" information, and the first two frameworks have no way of using it.

This is where the Bayesian framework shines: it lets us combine the prior information with the data, unlike the frequentist framework, which relies on data alone. We do have to decide how much we trust our prior and how much we trust our data. Say we trust each 50% (these are the weights). The probability of heads is then a weighted average of the prior and the data: 0.5 * 0.4 + 0.5 * 0.6 = 0.5.
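The arithmetic above is just a weighted average. Here is a minimal sketch (the variable names and weights are illustrative, not a standard Bayesian API):

```python
# Bayesian-style combination of prior belief and observed data,
# as a simple weighted average.
prior = 0.4                   # based on Sovit's previous coins
data = 0.6                    # observed frequency: 6 heads in 10 tosses
w_prior, w_data = 0.5, 0.5    # how much we trust each source

p_heads = w_prior * prior + w_data * data
print(p_heads)  # 0.5
```

Changing the weights changes the answer: trusting the data more (say, w_data = 0.8) pulls the estimate toward 0.6, which is precisely the assumption-driven flexibility, and the criticism, discussed next.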

The Bayesian framework can give more realistic answers by utilizing prior information, but we have to make assumptions about the weights. This is the main point of criticism: since we make assumptions, it is possible to skew the results toward our own biases.


So, that's all about probability and its different frameworks. Let me know in the comments if this blew your mind the way it did mine. Give me some claps if you liked the article.

