DEV Community

Shubham Tiwari


Why Is 0.1 + 0.2 === 0.3 False?

Hello everyone! Today I will be discussing this famous question:
Why is 0.1 + 0.2 === 0.3 false?

Let's get started...

I am using JavaScript to show this, but the same floating-point comparison behaves identically in many other programming languages, which also return false.

Code Example -

console.log(0.1 + 0.2 === 0.3)
//output - false
  • It returned false, but 0.1 + 0.2 = 0.3, so why didn't it return true?
  • These decimal numbers cannot be represented precisely as base-2 (binary) floating-point numbers. A full explanation of the base-2 representation is quite involved, so I will show you the simple reason why this happens.
console.log(0.1 + 0.2)
// output - 0.30000000000000004
  • The actual result of adding those two numbers is 0.30000000000000004, which is why the comparison returned false. We were effectively comparing this:
console.log(0.3 === 0.30000000000000004)
// output - false
  • Now you understand why it returned false.
console.log(0.1 + 0.2 === 0.30000000000000004)
//output - true
  • This time it returns true, because both sides are exactly the same value.

THANK YOU FOR CHECKING THIS POST ❤❤

You can contact me on -
Instagram - https://www.instagram.com/supremacism__shubh/
LinkedIn - https://www.linkedin.com/in/shubham-tiwari-b7544b193/
Email - shubhmtiwri00@gmail.com

You can support me with a donation at the link below. Thank you! 👇👇
☕ --> https://www.buymeacoffee.com/waaduheck <--

Also check these posts as well
https://dev.to/shubhamtiwari909/js-push-and-pop-with-arrays-33a2

https://dev.to/shubhamtiwari909/tostring-in-js-27b

https://dev.to/shubhamtiwari909/join-in-javascript-4050

https://dev.to/shubhamtiwari909/going-deep-in-array-sort-js-2n90

Top comments (4)

JoelBonetR 🥇 • Edited

This is a recurring topic, and the reasons behind it are quite interesting to me. If I may, let me add the complex stuff: I already tried to explain it in this comment, as best as I could and with the help of some people from Stack Overflow, which I'm copy-pasting here because it doesn't let me embed it:



Just remember that this is not a "JavaScript issue" but an issue of the binary base we use most of the time.

In most programming languages, floating point numbers are represented a lot like scientific notation: with an exponent and a mantissa or significand.

A very simple number, say 9.2, is actually this fraction:
5179139571476070 × 2^-49
where the exponent is -49 and the mantissa (or significand) is 5179139571476070.

The reason it is impossible to represent some decimal numbers this way is that both the exponent and the mantissa must be integers. In other words, all floats must be an integer multiplied by an integer power of 2.

9.2 may be simply 92/10, but 10 cannot be expressed as 2^n if n is limited to integer values (2^3 = 8 and 2^4 = 16).
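You can check this fraction directly in JavaScript (my sketch, not part of the original comment): since 2^-49 is an exact power of two and the mantissa fits in the 53 bits of a double, multiplying them reproduces exactly the double that the literal 9.2 denotes:

```javascript
// 9.2 cannot be stored exactly; the nearest double is this
// integer mantissa times a power of two.
console.log(5179139571476070 * 2 ** -49 === 9.2); // true

// toPrecision reveals digits beyond what the literal shows:
console.log((9.2).toPrecision(20)); // "9.1999999999999992895"
```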


This is information gathered from a pair of comments on Stack Overflow, with some annotations added in an attempt to make it more understandable, because honestly I don't feel skilled enough in these mathematical topics to explain them on my own without spending days on research, diagrams, and examples.

You can check the original comments here.

Shubham Tiwari

Yeah, I also don't know that much about base-2 floating-point numbers, which is why I simply showed the reason through code 😂😂

Ravi Vishwakarma

Yes, I tried it too.
Binary floating-point math works like this in most programming languages; it is based on the IEEE 754 standard.

Shubham Tiwari

Yeah but it's hard to explain to everyone 😂