re: Language Features: Best and Worst

One thing that's pretty similar to your idea of storing the formula used to calculate a number and then computing it to the needed precision on demand is exact real arithmetic. Several implementations exist for Haskell. One of the downsides of this approach, besides performance, is that equality is undecidable: the best you can do is determine whether two numbers are within a certain distance of each other.
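To make the undecidability point concrete, here's a minimal sketch (not the API of any particular Haskell library; the names CReal, fromRat, addC, and closeC are made up for illustration) where a real number is a function from a requested precision to a rational approximation:

```haskell
-- A computable real: given n, produce a rational within 2^-n of the
-- true value.
newtype CReal = CReal (Int -> Rational)

-- Embed a rational exactly.
fromRat :: Rational -> CReal
fromRat q = CReal (const q)

-- Addition: query each operand one bit more precisely so the two
-- approximation errors together stay under 2^-n.
addC :: CReal -> CReal -> CReal
addC (CReal f) (CReal g) = CReal (\n -> f (n + 1) + g (n + 1))

-- The best available "equality": are x and y within about 2^-n of
-- each other?  A True answer guarantees |x - y| < 2^-n + 2^-(n+1);
-- no finite computation can decide true equality of two CReals.
closeC :: Int -> CReal -> CReal -> Bool
closeC n (CReal f) (CReal g) =
  abs (f (n + 2) - g (n + 2)) < 2 ^^ negate n
```

Deciding `x == y` would require inspecting the approximations at every precision, which never terminates when the numbers really are equal; `closeC` is the practical substitute.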

Thanks for the link!

That's true, but the same is true of floating-point numbers in any programming language. Wouldn't setting the default comparison precision to the number of significant digits eliminate this problem?

If you set x = 3.0 * 0.20 (= 0.60 @ 2 sig digits) and y = 0.599 * 1.0 (= 0.60 @ 2 sig digits), then x and y are equivalent when only significant figures are considered. Computing y - x would yield 0.599 * 1.0 - 3.0 * 0.20 = -0.001, which rounds to 0.00 at the two-decimal-place precision of the operands. That's equality.
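As a sketch of that idea in Haskell (hypothetical helpers, not an existing library: roundSig and eqSig are invented names), one way to implement "equal at n significant figures" is to round each operand to n significant figures and compare the results:

```haskell
-- Round x to n significant figures (hypothetical helper).
roundSig :: Int -> Double -> Double
roundSig n x
  | x == 0    = 0
  | otherwise = fromInteger (round (x / factor)) * factor
  where
    magnitude = floor (logBase 10 (abs x)) :: Integer
    factor    = 10 ** fromInteger (magnitude - fromIntegral n + 1)

-- Equality at n significant figures: both sides round to the same value.
eqSig :: Int -> Double -> Double -> Bool
eqSig n a b = roundSig n a == roundSig n b

-- eqSig 2 (3.0 * 0.20) (0.599 * 1.0)  ==>  True (both round to 0.60)
```

Note that the comparison has to happen at the operands' precision: rounding the difference y - x to its own significant figures would still give -0.001 rather than zero.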

What do you think?
