```js
console.log(0.1 + 0.2);
console.log(0.1 + 0.2 == 0.3);
console.log(0.1 + 0.2 === 0.3);
```
What we have here seems relatively simple at first glance. The author creates three console logs: the first features the addition of 0.1 and 0.2, while the second and third compare this sum to 0.3 using two different operators.
The first operator, `==`, is called "Equal"; the second, `===`, "Strict Equal". Both return a boolean value indicating whether the two operands are the same, with `===` additionally requiring the types to match. So, a `console.log` of either comparison outputs `true` or `false`.
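For instance, here is a minimal sketch of how the two operators differ; the values are chosen purely for illustration:

```js
// "==" compares after type coercion, "===" compares value and type
console.log(1 == "1");  // true, because the string "1" is coerced to the number 1
console.log(1 === "1"); // false, because a number and a string are different types
```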
Well, the output is easy here, isn't it? It should be:
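```
0.3
true
true
```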
Welp, surprisingly enough, none of these is correct!
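What JavaScript actually prints is:

```
0.30000000000000004
false
false
```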
The two `false`s are evident in that context: since the first output is this odd (pun intended) number, the addition is indeed not equal to `0.3`.
We end up with the fundamental question: why the hell does `0.1 + 0.2` equal `0.30000000000000004`?
Now, let's stay in the decimal system (digits from 0 to 9) for a moment. How'd you expect a computer to comprehend the number ⅓? It can't just think of it as a repeating decimal and note it down as one. At some point, the number has to be cut off: the last digit gets rounded, and that's it.
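To make that concrete, here is a minimal sketch in JavaScript; the seven-digit cut-off is an arbitrary choice for illustration:

```js
// Pretend we can only keep seven decimal digits of 1/3
const oneThird = 0.3333333;

// Summing the truncated value three times gives roughly 0.9999999,
// close to, but not exactly, 1: the precision lost at the cut-off never comes back
console.log(oneThird + oneThird + oneThird);
```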
As you may already know, computers work in binary (digits from 0 to 1). Of course, the same problem exists there! Let's write down 0.1 in binary:
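```
0.1 (decimal) = 0.000110011001100110011... (binary)
```

The block `0011` repeats forever. So, just like ⅓ in decimal, 0.1 cannot be written down exactly in binary: it has to be cut off and rounded to fit into the 64 bits of a JavaScript number. If you're curious, you can peek at the rounded value that is actually stored; `toString` with a radix of 2 is standard JavaScript, nothing exotic:

```js
// The binary digits of the 64-bit float that actually stores 0.1;
// the repeating 0011 pattern gets cut off and rounded at the end
console.log((0.1).toString(2));
```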
The mathematical proof is beyond this article's scope, but I will add a link with detailed information on that below.
- Trickery: Floating-point math and bicimals
- Key Learning: Floating-point numbers are not reliable for exact comparisons without additional safeguards (see the sketch below the list)
- Further Reading:
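A common safeguard is to avoid strict equality on floats altogether and instead check whether two values are merely close enough. Below is a minimal sketch; the helper name `nearlyEqual` and the `Number.EPSILON` tolerance are just one illustrative choice, not the only correct one:

```js
// Compare two floats within a small tolerance instead of relying on ===
function nearlyEqual(a, b, epsilon = Number.EPSILON) {
  return Math.abs(a - b) < epsilon;
}

console.log(0.1 + 0.2 === 0.3);           // false
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
```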