In today's post, I will explain a particular quirk that can occur in JavaScript when you use typed arrays.

This isn't my discovery; I found it in this blog post by Chris Wellons, but I found the quirk so interesting that I wanted to discuss it in my own post.

As you may know, in JavaScript, all numbers are doubles, except in two situations:

- Bitwise operations
- Typed arrays

In bitwise operations, the operands are converted to 32-bit integers, so the fractional part of the number is discarded.
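The original snippet isn't preserved in this post; here is a minimal reconstruction, assuming the operands were `2.5`:

```javascript
// XOR converts both operands to 32-bit integers first,
// discarding the fractional part: 2.5 ^ 2.5 becomes 2 ^ 2.
console.log(2.5 ^ 2.5); // 0
```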

In this example, the answer is `0`, as the operation actually performed is `2 ^ 2`.

Typed arrays are a bit similar: each one stores a single numeric type, and most of them store integers.

Here is an example using a `Uint8Array`, which can only contain 8-bit unsigned integers.
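The original snippet isn't shown here; a sketch matching the values discussed below:

```javascript
const arr = Uint8Array.of(123);
console.log(arr[0]); // 123

arr[0] = 456;        // 456 doesn't fit in 8 bits
console.log(arr[0]); // 200 (456 mod 256)
```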

The result of these two logs is `123` and `200`.

The `200` might be unexpected, but as mentioned earlier, the array can only contain 8-bit unsigned integers. The maximum value that can be stored in 8 bits is `255`. As `456` is bigger than `255`, we cause an integer overflow and wrap around, starting over from 0.

We can confirm this with the following example:
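The confirming snippet isn't preserved in this post; presumably it was something along these lines:

```javascript
const arr = Uint8Array.of(255);
arr[0]++;            // 255 + 1 overflows an 8-bit unsigned integer
console.log(arr[0]); // 0
```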

The result of this operation is `0`, as we incremented `255` to `256`, therefore triggering an overflow. As we overflowed by a single number, we start over at `0`.

Now, let's get to the interesting quirk which I mentioned in the introduction.

As we already know, `255 + 1` in a `Uint8Array` is `0`.

With this in mind, what would you expect to be the result of the following code?

```
const arr = Uint8Array.of(255);
const x = ++arr[0];
console.log(x, arr[0]);
```

The only difference between this code and the previous snippet is that we assign the result of the `++` increment operator to a variable. As the value of `arr[0]` is `0`, we would expect them both to be `0`, right?

Let's find out!

As it turns out, the value of `x` is `256`, and not `0`!

The reason behind this weird quirk is the types involved during the operation.

In JavaScript, regular arithmetic operations use the `Number` type (and soon, `BigInt`!). As the increment operator is equivalent to `1 + [value]`, both operands are converted to `Number` during the operation.

Once the operation is done, two things happen:

1. We store the result of the operation, converted back to an 8-bit unsigned integer, in the `arr` array.
2. We store the raw result of the operation in the `x` variable.

Notice how in step 2, we use the result of the operation instead of the value inside `arr`!
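Sketched in plain JavaScript (a rough desugaring of `const x = ++arr[0];`, not the exact spec steps):

```javascript
const arr = Uint8Array.of(255);

// What `const x = ++arr[0];` roughly does:
const old = arr[0];      // 255, read out as a regular Number
const result = old + 1;  // 256 — plain Number arithmetic, no overflow
arr[0] = result;         // stored back as a uint8: 256 % 256 === 0
const x = result;        // x keeps the unconverted result: 256

console.log(x, arr[0]); // 256 0
```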

As the result is the addition of two `Number`s, we didn't cause an integer overflow, and therefore our value is `256` instead of `0`!

Hopefully you found this quirk as interesting as I did!

If you wish to learn more about this, I suggest checking out Chris's blog post, in which he compares the behavior with `C`, and links to the exact ECMA spec where this is defined!

## Discussion

This is not as much of a quirk as it seems. `++` is supposed to return the result and assign it to the variable. It's neither supposed to re-evaluate the variable for the result nor to perform the calculation in the same type as the variable.

I consider this a quirk for two reasons:

- In `C`/`C++`, the answer would be `0` for both variables.
- This is counter-intuitive.

Just because a language is specified differently from C/C++ doesn't make it quirky. All of your examples just show the specified behavior of any flavour of modern ECMAScript.

I would only consider this a quirk if the standard operators suddenly changed their specified behavior to handle typed arrays differently, i.e. if you assigned the result to another element of a `Uint8Array` and the results differed, or if assignments to untyped variables suddenly exposed the same overflow as the typed-array ones.
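For what it's worth, the commenter's hypothetical is easy to check: if the result of the increment is assigned to another `Uint8Array` element, that store truncates it too, so both values come out as `0`:

```javascript
const arr = Uint8Array.of(255);
const out = new Uint8Array(1);

out[0] = ++arr[0];           // the Number 256 is truncated on both stores
console.log(out[0], arr[0]); // 0 0
```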