Cover Photo by Tony Tran on Unsplash
About binary and numeric representation
Computers don't have ten fingers to count on - they use a system called binary, which functions very similarly to our base-ten number system, but with only two digits (0 and 1) instead of ten (0 through 9).
Some resources if you need a quick lesson on how this works:
- Trevor Storr - How to count in binary
- Khan Academy - Adding in binary
- Low Level JavaScript - Floating point explained
9999999999999999 evaluates to 10000000000000000
This is not specific to JavaScript by any means! It's another consequence of how computers handle numbers and math.
💡 About how computers represent numbers
When you think of a big number, your brain has an advantage over computers. You don't need to reserve a certain amount of space in order to think about that number; you can be as precise or imprecise as you want, without your brain crashing. Probably.
In order for a computer to think about a number, it first needs to know roughly how large and how precise it is, so it can reserve the right amount of space for it in its memory.
Because of how computers represent numbers, if you have a number with a lot of "information" in it (very large, very precise, or both), the computer might run out of "space" for that number. In some circumstances, this just throws an error (the computer gives up). In JavaScript and many other languages, the number is reduced to a version that has less "information" to store.
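To make this concrete (a quick sketch of my own): in JavaScript, every Number gets the same fixed amount of space - a 64-bit floating point value - no matter how big or precise it is.
// A quick sketch: every JavaScript Number occupies the same fixed space (8 bytes).
const storage = new Float64Array(1);     // one 64-bit floating point slot
storage[0] = 9999999999999999;           // big or small, the value has to fit in...
new Uint8Array(storage.buffer).length;   // 8 - ...exactly eight bytes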
JavaScript has a maximum integer above which it cannot guarantee precision: Number.MAX_SAFE_INTEGER. You can get its value in the console:
> Number.MAX_SAFE_INTEGER
9007199254740991
And as you can see, our value of 9999999999999999 is over that threshold, so it will not be guaranteed to be accurate.
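You can poke at this threshold yourself in the console (a quick sketch; these are all built-in checks):
Number.MAX_SAFE_INTEGER === 2 ** 53 - 1;  // true - the largest integer a 64-bit float stores exactly
Number.isSafeInteger(9999999999999999);   // false - too much "information" to guarantee precision
9999999999999999 === 10000000000000000;   // true - both collapse to the same stored value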
0.5 + 0.1 === 0.6, but 0.1 + 0.2 !== 0.3
In JavaScript, when you add 0.1 and 0.2 together in the console, here's what you get:
> 0.1 + 0.2
0.30000000000000004
Almost - but not exactly - 0.3.
This is another consequence of how computers handle numbers, and is again not specific to JavaScript - in fact, any language that uses floating point decimals will have this problem.
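Here are both comparisons from the heading above, plus one commonly used workaround (comparing with a tiny tolerance instead of ===) - a sketch, not the only way to handle it:
0.5 + 0.1 === 0.6;                             // true
0.1 + 0.2 === 0.3;                             // false
// a common workaround: treat values as equal if they're "close enough"
Math.abs((0.1 + 0.2) - 0.3) < Number.EPSILON;  // true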
When you tell a computer about a base-10 number like 0.5 (one half), it internally thinks about this number in binary as 0.1 (not one-tenth, since in binary that's the "halves place" and not the "tenths place").
Let's convert all these numbers to binary:
- 0.1 in base ten => 0.00011[0011...] in binary
- 0.2 in base ten => 0.00110[0110...] in binary
- 0.3 in base ten => 0.01001[1001...] in binary
- 0.5 in base ten => 0.1 in binary
- 0.6 in base ten => 0.10011[0011...] in binary
You may notice that a lot of those contain infinitely repeating decimals. Computers aren't so great with infinite things, so they just write down however much they can, and stop when they run out of space to store the number.
These infinitely repeating decimals are the core of this particular issue.
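You can actually watch this happen with toString(2), which prints a Number's binary representation (output truncated here for readability):
(0.5).toString(2); // "0.1" - a tidy, exact binary fraction
(0.1).toString(2); // "0.000110011001100110011..." - the repeating pattern stops when the bits run out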
Let's look at adding 0.1 and 0.5, which is easy even for us humans, since 0.5 has such a tidy representation in binary.
(The logic for addition in binary is the same as in base ten, except you "carry" when you get to two rather than ten. Luckily, this problem doesn't involve any carrying at all!)
  0.00011[0011...] // 0.1
+ 0.1              // 0.5
= 0.10011[0011...] // 0.6
For this one, you could cut off the repeating decimal at any point and the value of 0.1 + 0.5 would have the same representation as 0.6 (cut off at that same point). We can tell this easily because one of the numbers does not infinitely repeat. Therefore, the infinite part of the result must come from the infinite part of the 0.1.
Now let's try 0.1 plus 0.2 (you do need to learn how to carry for this one, sorry):
  0.00011[0011...] // 0.1
+ 0.00110[0110...] // 0.2
= 0.01001[1001...] // 0.3
These are the same if you have the ability to think about infinitely long numbers. But what about a computer? It needs to cut off the repeating decimals at some point.
Let's take another look at what happens if we cut it off at, say, seven binary places:
  0.0001100 // approximately 0.1
+ 0.0011001 // approximately 0.2
= 0.0100101 // close to 0.3, but...
  0.0100110 // this is what 0.3 actually looks like when approximated to 7 binary places!
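You can mimic that seven-place cut-off in JavaScript with a made-up little helper (a sketch; keeping seven binary places just means rounding down to a multiple of 1/128):
// hypothetical helper: keep only seven binary places (multiples of 1/128)
const toSevenBinaryPlaces = (x) => Math.floor(x * 128) / 128;

toSevenBinaryPlaces(0.1) + toSevenBinaryPlaces(0.2); // 0.2890625 (binary 0.0100101)
toSevenBinaryPlaces(0.3);                            // 0.296875  (binary 0.0100110) - not the same!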
💡 All languages that use floating point numbers will have some form of this problem.
Those languages may not face this exact issue - it depends on where exactly the program gives up keeping track of infinitely repeating decimals.
typeof NaN is "number"
This is partially a language decision, but also partially due to how computers don't do math quite like humans do.
NaN (Not a Number) is typically seen when a numeric system encounters non-numeric data (for example, parseInt("a")). However, it is defined in the IEEE 754 floating point spec as any numeric concept or value that doesn't represent a single real number, such as:
- A number with a complex component (as in 2 + 3i, where i is the square root of -1)
- The result of a numeric operation that is mathematically indeterminate
- The result of an invalid mathematical operation
- The result of a mathematical operation where one of the operands is NaN
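In JavaScript you can hit each of those categories yourself (a quick sketch using only built-in operations):
Math.sqrt(-1);       // NaN - the answer would need a complex component
Infinity - Infinity; // NaN - mathematically indeterminate
parseInt("a");       // NaN - an invalid numeric conversion
NaN + 1;             // NaN - NaN is contagious: any operation with a NaN operand is NaN
typeof NaN;          // "number"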
JavaScript's Number type is based on this floating point specification, so it makes sense that the type also includes NaN. Number also includes non-numbers that are more conceptual, such as Infinity.
If you think of Number as "math stuff", it probably makes more sense.
...I do agree that it's hilarious, though.
Math.max() is -Infinity and Math.min() is Infinity
These functions both typically take a list of arguments, such as:
Math.max(2, 3, 4); // returns 4
Math.min(2, 3, 4); // returns 2
So these are basically the functions being called with an "empty array" of arguments.
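You can see both of those in the console (spreading an empty array, which is effectively passing no arguments at all, is my own addition here):
Math.max();      // -Infinity
Math.min();      // Infinity
Math.max(...[]); // -Infinity - spreading an empty array passes zero arguments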
If you take the maximum or minimum value of an empty list, what would you expect the result to be? The question itself - what is the smallest value in an empty list of things? - sounds like a question a Zen master would ask. The answer might not have any meaning at all, so it falls to us to make sure it's at least practical.
Let's go over this one by example - if you had an array of numbers (in any language), what sort of algorithm would you use to find the smallest or largest? A bit of pseudo-code for the most common algorithm to replicate Math.max:
given LIST OF ARGUMENTS
variable RESULT
for (each ITEM in the LIST OF ARGUMENTS):
    is the ITEM greater than the current RESULT?
    if so, replace RESULT with the ITEM's value
    otherwise, do nothing; continue on to the next ITEM.
return RESULT
The question here is - what should the initial value of RESULT be? There are two intuitive answers that work well:
- It should be an empty value such as null or undefined. This means that you have to update the algorithm a little bit to account for the empty value, and people who use this will need to note that the return value may be empty or zero, and they will need to be careful not to confuse the two.*
- It should be something that is always "less than" any finite number. In JavaScript, -Infinity satisfies that logic. The logical consequence of this is that if there is nothing in the list of arguments, -Infinity will be returned.
* If you've ever coded in PHP, this likely brought back traumatic memories. I'm sorry.
Both of these work just fine, and the only thing that's affected is the (probably rare) case of being called with an empty list of things to compare, and some extremely minor performance differences. Not an important choice in the long run, so I don't blame JavaScript for this one. At least we don't have to check the return value for null every time.
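For completeness, here's the pseudo-code above as real JavaScript, using the -Infinity approach (a sketch - myMax is a made-up name, and this is not how the engine actually implements Math.max):
function myMax(...args) {
  let result = -Infinity;  // always "less than" any finite number
  for (const item of args) {
    if (item > result) {   // note: unlike the real Math.max, this sketch doesn't handle NaN arguments
      result = item;       // keep the largest value seen so far
    }
  }
  return result;           // with no arguments at all, this is still -Infinity
}

myMax(2, 3, 4); // 4
myMax();        // -Infinity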
You made it to the end! Thanks for sticking around. Here's a picture of a kitten in a bowtie to show my appreciation (via reddit).