Daniel Chou Rainho

[Unity] Encoding Data into Pixels (Part 2): Precision Considerations

32 bits.
That's how much data can be stored in a float.

In my previous post I noted the following:

In a 32-bit system, each bit can represent two possible values: 0 or 1. Since there are 32 bits in total, we can calculate the number of possible values by raising 2 to the power of 32: 2^32 = 4,294,967,296.

However, in our case, we are only considering floats between 0 and 1.

So the question becomes: How many numbers can you represent in a float between 0 and 1?

Theory

The total number of representable numbers between 0 and 1 in a 32-bit floating point format is a bit tricky to answer directly because the format doesn't distribute precision evenly across all possible values.

A 32-bit floating-point number follows the IEEE 754 standard for floating-point arithmetic. The standard specifies that a float is composed of 1 sign bit, 8 exponent bits, and 23 fraction (or significand) bits.
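If you want to see those three fields for yourself, here is a minimal sketch in plain C# (swap Console.WriteLine for Unity's Debug.Log if you run it inside a component). It reinterprets a float's raw bits as an int and masks out each field:

```csharp
using System;

class FloatLayout
{
    static void Main()
    {
        float value = 0.3f;

        // Reinterpret the float's 32 bits as an int (works on any .NET/Mono version).
        int bits = BitConverter.ToInt32(BitConverter.GetBytes(value), 0);

        int sign          = (bits >> 31) & 0x1;      // 1 bit
        int exponentField = (bits >> 23) & 0xFF;     // 8 bits, biased by 127
        int fraction      = bits & 0x7FFFFF;         // 23 bits

        Console.WriteLine($"sign = {sign}");
        Console.WriteLine($"exponent field = {exponentField} (unbiased: {exponentField - 127})");
        Console.WriteLine($"fraction = {fraction} of {1 << 23} possible values");
    }
}
```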

The significand provides the precision of the number. With 23 bits for the fraction part, there are 2^23 = 8,388,608 (about 8.39 million) unique fraction values. But this doesn't mean we can represent exactly 8.39 million unique values between 0 and 1.

The reason is that floating-point numbers concentrate precision close to 0 and offer less precision as the magnitude grows. This is what lets the format cover both very large and very small numbers, at the cost of uniform spacing between representable values.

So while there are 2^23 unique fraction patterns, the representable values they produce are not evenly distributed between 0 and 1. There are far more representable numbers close to 0, and they thin out as you approach 1.
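You can see this unevenness directly by stepping to the next representable float. For a finite positive float, adding 1 to the integer form of its bit pattern gives the adjacent float, so the gap between neighbors is easy to measure at different magnitudes. A small sketch, using the same bit-reinterpretation trick as above:

```csharp
using System;

class FloatSpacing
{
    // Next representable float above x, valid for finite positive x:
    // adding 1 to the raw bit pattern moves to the adjacent float.
    static float NextUp(float x)
    {
        int bits = BitConverter.ToInt32(BitConverter.GetBytes(x), 0);
        return BitConverter.ToSingle(BitConverter.GetBytes(bits + 1), 0);
    }

    static void Main()
    {
        foreach (float x in new[] { 0.0001f, 0.01f, 0.9f })
        {
            // The gap to the next float grows as the value grows toward 1.
            Console.WriteLine($"gap after {x} = {NextUp(x) - x:E3}");
        }
    }
}
```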

A precise count is actually possible once you account for the exponent: every exponent that places the value below 1 reuses the full 2^23 fraction values, which works out to 127 × 2^23 = 1,065,353,216 representable floats in [0, 1), or 1,065,353,217 if you include 1.0 itself. That is roughly a quarter of all 2^32 bit patterns, and far more than the 8.39 million the fraction bits alone would suggest.
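You can check that figure without counting anything by hand: for non-negative floats, the bit patterns sort in the same order as the values, and consecutive patterns are consecutive floats. So the integer form of 1.0f's bit pattern is exactly the number of representable floats in [0, 1). Another small sketch:

```csharp
using System;

class CountFloats
{
    static void Main()
    {
        // For non-negative floats, the raw bit pattern increases monotonically
        // with the value, one step per representable float.
        int bitsOfOne = BitConverter.ToInt32(BitConverter.GetBytes(1.0f), 0);

        // Every pattern from 0 (the bits of 0.0f) up to, but not including,
        // the pattern of 1.0f is a float in [0, 1).
        Console.WriteLine($"floats in [0, 1): {bitsOfOne}");      // 1,065,353,216
        Console.WriteLine($"floats in [0, 1]: {bitsOfOne + 1}");  // 1,065,353,217
    }
}
```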

Example

Let's look at a simplified example using binary fractions, which are analogous to the fraction part of floating-point numbers.

In binary, fractions work just like they do in decimal, except that each digit (bit) after the binary point represents a negative power of 2 instead of 10. So, the first bit after the binary point is 1/2 (2^-1), the second bit is 1/4 (2^-2), the third bit is 1/8 (2^-3), and so on.

If we only have 3 bits to represent fractions (as opposed to 23 in a real float), we can only represent 8 (2^3) unique fractions:

  1. 000 in binary -> 0 in decimal
  2. 001 in binary -> 0.125 in decimal (1/8)
  3. 010 in binary -> 0.25 in decimal (1/4)
  4. 011 in binary -> 0.375 in decimal (1/4 + 1/8)
  5. 100 in binary -> 0.5 in decimal (1/2)
  6. 101 in binary -> 0.625 in decimal (1/2 + 1/8)
  7. 110 in binary -> 0.75 in decimal (1/2 + 1/4)
  8. 111 in binary -> 0.875 in decimal (1/2 + 1/4 + 1/8)

Now, here's the subtle part: with a fixed binary point like this, the eight fractions are actually spaced perfectly evenly, one every 1/8. What makes real floats uneven is the exponent. The same fraction patterns get scaled by 2^-1, 2^-2, 2^-3, and so on, so the full set is replayed in [0.5, 1), again in [0.25, 0.5), again in [0.125, 0.25), each time squeezed into a range half as wide. That is where the crowding of representable numbers near 0 comes from.
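To make that concrete, here is a sketch of a made-up toy format (not real IEEE 754): a normalized 3-bit fraction combined with a handful of exponent values. Each exponent reuses all eight fraction patterns, so every halving of the range gets its own eight values and the spacing keeps shrinking toward 0:

```csharp
using System;

class ToyFloat
{
    static void Main()
    {
        // Toy format (not real IEEE 754): value = (1 + fraction/8) * 2^exponent,
        // i.e. a normalized significand with only 3 fraction bits.
        foreach (int exponent in new[] { -1, -2, -3 })
        {
            for (int fraction = 0; fraction < 8; fraction++)
            {
                double value = (1.0 + fraction / 8.0) * Math.Pow(2, exponent);
                Console.Write($"{value:0.#####}  ");
            }
            // Each exponent fills one octave: [0.5, 1), then [0.25, 0.5), then [0.125, 0.25).
            Console.WriteLine($" <- exponent {exponent}, spacing {Math.Pow(2, exponent) / 8.0}");
        }
    }
}
```

Scale this up to an 8-bit exponent and a 23-bit fraction and you arrive back at the roughly one billion representable floats between 0 and 1 from the Theory section.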
