In one of my projects, I need to know the minimum number of bytes needed to store a given integer. The Obviously Correct answer, ignoring negatives and the special case of 0, is something like ...
1 + int(log($data) / log(256))
I'm lucky that my test suite included a value of
$data that is an exact power of 256 (which is exactly where floating point errors in that calculation are likely to bite), and doubly lucky that the particular value in the test suite had a floating point error in the right direction, so that the test actually failed, because ...
$ perl-5.30.2/bin/perl -E 'say 1 + int(log(0x1000000) / log(256))'
4
$ perl-5.30.2-quadmath/bin/perl -E 'say 1 + int(log(0x1000000) / log(256))'
3
Ouch! The first does the calculation using a 64 bit IEEE 754 double; the second uses gcc's __float128 quad-precision type. Under quadmath, log(0x1000000) / log(256) evidently comes out fractionally below 3, so int() truncates it to 2 and the answer is off by one.
My thanks to the CPAN-testers for making my tests fail!