DEV Community

Bill C

Javascript's Numbers Are Weird but You Can Live With Them If You Understand Them

Recently a friend ran into a pretty perplexing problem. They were storing IDs in the client as javascript numbers, and they started to see a difference between what was being sent by the backend and what was being seen after parsing the AJAX response. It wasn't a huge difference, but any difference was causing problems (as you'd expect).

What was going on?

To work it out, you need to understand how javascript represents numbers.

Javascript's Numbers Are a Little Weird

You can do the below calculations in your browser's console.

0.1;
// 0.1
0.1 + 0.1; 
// 0.2
0.1 + 0.2;
// 0.30000000000000004

Some folks get a laugh out of this dumb error, pointing out that computers are still kinda garbage. I even heard a guy joke that this error was why Matt Damon got stranded on Mars a few years back.

Now, fun is fun, but let's explore.

var martianError = function(){
  var epsilon = 1;
  while(((1 - epsilon) !== 1) && ((1 + epsilon) !== 1)){
    epsilon *= 0.5;
  }
  epsilon *= 2; // the loop overshoots by one halving, so step back up to machine epsilon

  var maxDistanceFromEarthToMars = 410000000000; // metres
  var distanceError = epsilon * maxDistanceFromEarthToMars;

  console.log("If you represent the distance to mars with "
            + "IEEE754 on this machine, you are out by "
            + distanceError + " metres, which is "
            + distanceError * 1000 + " millimetres.");

  var thinPaperThickness = 70 * 0.000001;   // 70 micrometres, in metres
  var thickPaperThickness = 180 * 0.000001; // 180 micrometres, in metres

  if(distanceError > thickPaperThickness){
    console.log("This is more than the thickness of thick paper.");
  } else if(distanceError < thinPaperThickness){
    console.log("This is less than the thickness of thin paper.");
  } else {
    console.log("This is about the thickness of a piece of paper.");
  }

  var averageDiameterOfHumanHair = 0.000100; // 100 micrometres, in metres
  var errorInHairWidths = distanceError / averageDiameterOfHumanHair;

  if (errorInHairWidths < 1) {
    console.log("It's less than the average diameter of a human hair.");
  } else {
    console.log("It's about " + errorInHairWidths.toFixed(0) + " human hair diameters.");
  }
};

martianError();

If you copy and paste that into the js console of a modern browser, you'll find out how little error there is when we represent quantities like these in javascript.

So, sometimes numbers in javascript do stuff that is kind of stupid, and they're crazily precise at the same time. How can both of those be true?

Like any tool, to use it well it's a good idea to find out what problem it was designed to solve, what the strengths and weaknesses of its design are, and to have clearly in mind what you're trying to do with it.

Let's start by understanding what javascript numbers are.

Javascript's Numbers Are IEEE 754 Floating Point Numbers

Whenever javascript is representing a number, the computer uses a format for that number called IEEE 754.

The IEEE is the Institute of Electrical and Electronics Engineers, and its objectives are the educational and technical advancement of electrical and electronic engineering, telecommunications, computer engineering and allied disciplines. It has a standards association, which periodically publishes standards. These mostly describe key aspects of modern computing hardware, and how modern floating point numbers work was standardised in 1985 as IEEE 754.

Floating point numbers are the computer's attempt to describe quantities like π or the square root of 2, as well as measurements of things, like the distance to Mars and the width of a hair, and the outcomes of calculations using these quantities. They do it in a complicated way using a sign bit, a base, an exponent for the base, and some digits, or rather bits, of the number, which are called the mantissa. Used together, these describe magnitudes in much the same way that scientific notation does. On your computer now, you probably have 11 bits used to describe the exponent, and 52 bits to describe the mantissa (which is where the precision of the representation comes from).

The representation of 0.1 looks like

0 01111111011 1001100110011001100110011001100110011001100110011010

The first bit is saying that the number is positive, the next 11 say that it's between a sixteenth and an eighth, and the rest of the bits say where exactly it sits between those two powers of two, using binary (there's actually a hidden extra bit, making 53 a magic number, and the rounding is done in a smart way, but these are technicalities). You can experiment with how other numbers are represented using an online IEEE 754 visualiser.

The biggest reason I can be so confident about how your floating point numbers are represented is because the IEEE 754 standard was so successful. When it came out a lot of computing was about the scientific modelling of complex physical and economic systems. There were massive mainframe computers being designed and sold and they had different ways of representing floating point numbers, and new and different floating point architectures were frequently being designed. Sometimes good choices were made in these designs, and sometimes bad.

Mathematics libraries had to be checked against, and frequently rewritten for, every new architecture, and there weren't any guarantees that your calculations and mine would end up with similar conclusions (especially when calculations dealt with a thing called underflow).

IEEE 754 made good architectural choices so the designers of new mainframe architectures could worry about implementing the standard, rather than working out a new floating point system. Code involving complicated calculations could be shared between researchers on different systems with the certainty that the architecture would do the right thing by the floating point calculations.

IEEE 754 made computing massively more productive. And then what we did with computers changed. We stopped using them mainly for scientific calculation and started using them for money and social media and MMOs and probably dozens of other things besides. IEEE 754 wasn't built for these, but it was still a pretty good choice for a number system because it is good for 3D modelling, and can represent integers and fractions with one system.

But there are quirks.

Floating Point Numbers Aren't a Great Fit for Money or Internet Points or Database IDs

Remember my friend with the weird database ID problem? They realised that the IDs started to get wonky in the high 16-digit values. The smallest positive integer that 64-bit floating point numbers can't exactly represent is 2^53 + 1, which in decimal is 16 digits long and starts with a 9. My friend's values were being rounded to fit in the 52-bit mantissa of their IEEE 754 representation. So we can conclude that javascript's numbers are incompatible with the 64-bit integers that are used as IDs for most of our database tables.
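You can watch this limit kick in yourself in the console. `Number.MAX_SAFE_INTEGER` is the language's own name for 2^53 - 1, the largest integer guaranteed to survive intact:

```javascript
// The largest integer a double can always represent exactly:
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991, i.e. 2^53 - 1

// Just past it, distinct integers collapse together:
console.log(2 ** 53 + 1 === 2 ** 53); // true — the +1 is rounded away

// Number.isSafeInteger tells you whether an integer value is
// still inside the exactly-representable range.
console.log(Number.isSafeInteger(2 ** 53 - 1)); // true
console.log(Number.isSafeInteger(2 ** 53 + 1)); // false
```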

Moral of the story: never store database IDs as numbers in the client. Pass them to the client as the decimal string representing the number. It probably won't bite you in the first year of operation, but it's easier to do a little sensible planning before you're scratching your head on a really tough bug like this. Also, notice that you're never going to want to do a computation with an ID, so storing it as a number is unnecessary anyway.

What about money? 10 cents plus 20 cents is exactly 30 cents, not approximately 30 cents, and people are pretty picky about calculations and numbers involving money. I think people are okay with a bit of waiting on the server when they're purchasing, so my advice would be to do your monetary calculations in the back end, using a numeric type like ruby's BigDecimal, and to pass them to the frontend as strings. Other approaches, like working in integer representations of the smallest value you're interested in (and perhaps trusting that your numbers will be smallish) can also work. However, if that smallest value changes (suddenly the business wants to track tenths of a cent, for example) you'll be in for a rewrite of some pretty important business logic. Using a dedicated decimals library, on the other hand, is pretty future proof.

For things like game scores, or the number of likes a tweet has, or the number of seconds since you last posted on your timeline, you need to think through how much error will be tolerated, how important the values are to people, and how big those values are likely to get (getting more than 2^53 likes on a tweet might be possible one day, but it's not going to be tomorrow). Weighing these is a lot easier now you know the limitations of javascript's numbers.

IEEE 754 is actually pretty cool

If you had said to people working during the 1980s on mainframes doing massive scientific modelling tasks that folks would do masses of commerce on computers that fit in their pockets, they would probably have been at least a little surprised.

If you went on to describe the values social media tracks, and MMO points, and all the numbers that our modern web apps track, they would have been even more surprised and they might have said "floating point numbers probably aren't a great fit for those purposes". If you'd mentioned using them to track database IDs they would have had questions about how many bits were in the integers and the floating point numbers, and quickly concluded that you were heading for a problem.

These days our lives are a bit simpler. The vast majority of integers are 64 bit integers (still sometimes called long) and our floating point numbers are all IEEE 754 and more often than not, 64 bit (called double sometimes). But a language which doesn't give you a separate numeric type for integers, like javascript, makes life a little tricky when you try to represent large integers in that language.

One thing that floating point numbers are great for is 3D graphics, which makes things like three.js and augmented reality apps possible, which are super cool. All the weather modelling and sundry other things that IEEE 754 have made possible have allowed humanity to progress in important ways and that's also super cool (in a less trendy way).

For the problem they were made to solve, IEEE numbers are great. It's only when we try to stretch them outside of their use case, which javascript tempts us to do, that things fall apart. We can avoid that by leaning on the server when we need to, not using numbers when we don't have to, and giving people meaningfully truncated values when they don't need more than a dozen digits of precision.
