Daniel Lemire's blog


Why is 0.1 + 0.2 not equal to 0.3?

8 thoughts on “Why is 0.1 + 0.2 not equal to 0.3?”

  1. Louka Lemire says:

    -I’m very fast in maths!
    -Ok, what’s 56×38?
    -450!
    -That’s not right…
    -I said I was fast, not precise!

  2. foobar says:

    “Computers can do computations the way human beings do. For example, WolframAlpha has none of the problems above because it uses symbolic computations.”

    I actually wonder what WolframAlpha does internally, and whether it works as neatly as you describe. For instance, if you input (0.1^(1/1000))^1000 without pressing enter, it shows an imprecise approximation (frankly, this could be a JavaScript hack!), which would indicate it is at least not fully symbolic (for instance, it doesn’t replace 0.1 with 1/10, which would show up in other computations). Final results (after pressing enter) are impressively correct, though.

    WolframAlpha is based on Mathematica (or what they like to call the “Wolfram Language”), which has traditionally taken a slightly different approach to numbers: it has exact values (effectively built up from integers, predefined constants and symbolic solutions built from these), approximate numbers with tracked precision, and machine-precision numbers.

    When you enter something like 0.1 + 0.2 in Mathematica, these numbers are machine-precision reals – effectively the binary64 type. 0.1 + 0.2 == 0.3 returns True in Mathematica, but not because it performs symbolic or decimal-representation arithmetic: Mathematica ignores a couple of the least significant bits of the mantissa when comparing, since it knows rounding errors are going to creep in, choosing different semantics (with different tradeoffs). (One can also evaluate 0.1 + 0.2 // InputForm in Mathematica and see that rounding errors indeed creep in on this computation; a Java sketch of this kind of tolerance-based comparison follows this comment.)

    I suspect WolframAlpha has some sort of heuristics to remove binary floating point kinks from the layperson user experience. What these heuristics precisely are is not immediately obvious to me. It definitely doesn’t straight away replace 0.1 with 1/10…
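
    A minimal Java sketch of this kind of tolerance-based comparison, treating two doubles as equal when their bit patterns differ by at most a couple of units in the last place. It is only an illustration of the idea described above, not Mathematica’s actual algorithm, and the class and method names are made up:

      // Sketch: treat two doubles as equal when they are within a couple of
      // representable values (ULPs) of each other. Only meaningful for finite
      // values of the same sign.
      public class TolerantEquality {
          static boolean almostEqual(double a, double b, long maxUlps) {
              long bitsA = Double.doubleToLongBits(a);
              long bitsB = Double.doubleToLongBits(b);
              return Math.abs(bitsA - bitsB) <= maxUlps;
          }

          public static void main(String[] args) {
              double sum = 0.1 + 0.2;
              System.out.println(sum == 0.3);                // false: exact comparison
              System.out.println(almostEqual(sum, 0.3, 2));  // true: last bits ignored
              System.out.println(sum);                       // 0.30000000000000004
          }
      }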

  3. Foobar-2 says:

    Scaled integers or rationals? In my view these are good approaches for this type of problem. Rational data types were a nice surprise when I started to use Haskell.
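
    A minimal sketch of the exact-rational idea in Java, in the spirit of Haskell’s Rational; the Rational type below is made up for illustration, not a standard library class. Because nothing is ever rounded, 1/10 + 2/10 is exactly 3/10:

      import java.math.BigInteger;

      // Minimal exact rational type: a pair of big integers kept in lowest terms
      // (sign normalization and other niceties omitted for brevity).
      record Rational(BigInteger num, BigInteger den) {
          static Rational of(long n, long d) {
              if (d == 0) throw new ArithmeticException("zero denominator");
              BigInteger bn = BigInteger.valueOf(n), bd = BigInteger.valueOf(d);
              BigInteger g = bn.gcd(bd);                 // reduce to lowest terms
              return new Rational(bn.divide(g), bd.divide(g));
          }

          Rational add(Rational o) {
              // a/b + c/d = (a*d + c*b) / (b*d), then reduce
              BigInteger n = num.multiply(o.den).add(o.num.multiply(den));
              BigInteger d = den.multiply(o.den);
              BigInteger g = n.gcd(d);
              return new Rational(n.divide(g), d.divide(g));
          }

          public static void main(String[] args) {
              Rational sum = Rational.of(1, 10).add(Rational.of(2, 10));
              System.out.println(sum);                             // Rational[num=3, den=10]
              System.out.println(sum.equals(Rational.of(3, 10)));  // true, exactly
          }
      }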

  4. Oren Tirosh says:

    The Android Calculator uses an internal number representation that is pretty much indistinguishable from real numbers for a human user.

    See this article by Hans Boehm:

    https://dl.acm.org/doi/abs/10.1145/3385412.3386037

    1. KWillets says:

      Adrian Colyer’s blog had a writeup on some recent work of his: https://blog.acolyer.org/2020/10/02/toward-an-api-for-the-real-numbers/

  5. James McCafferty says:

    COBOL allows one to define a variable as holding an arbitrary number of integer and decimal digits, like all modern languages that dare to represent “business logic” should.
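
    Java’s BigDecimal, an arbitrary-precision scaled integer, gives a similar effect. A small sketch of the idea (in Java rather than COBOL; the fixed-width decimal field is only approximated here with setScale):

      import java.math.BigDecimal;
      import java.math.RoundingMode;

      // Decimal arithmetic on a scaled integer: no binary rounding surprises.
      public class DecimalDemo {
          public static void main(String[] args) {
              BigDecimal a = new BigDecimal("0.1");   // unscaled value 1, scale 1: exact
              BigDecimal b = new BigDecimal("0.2");
              BigDecimal sum = a.add(b);
              System.out.println(sum);                                   // 0.3
              System.out.println(sum.compareTo(new BigDecimal("0.3")));  // 0, i.e. equal

              // A fixed two-decimal field, loosely in the spirit of a COBOL
              // picture clause, rounded half up:
              BigDecimal price = new BigDecimal("12345.678").setScale(2, RoundingMode.HALF_UP);
              System.out.println(price);                                 // 12345.68
          }
      }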

  6. Florent DUPONT says:

    Thanks for your article. I’m still wondering why Java is not consistent when computing with floats and doubles.

    For instance:
    0.1d + 0.2d is not equal to 0.3d (as you explained in your article).

    But 0.1f + 0.2f (the same operation using float, with a 24-bit mantissa) IS equal to 0.3f. Following the same logic, it shouldn’t be: 0.1f + 0.2f should come out to about 0.30000000447.

    0.3 is internally represented as 0.3000000119
    0.1 is internally represented as 0.10000000149
    0.2 is internally represented as 0.20000000298
    so, 0.1 + 0.2 works out to 0.30000000447,
    and the closest representable matching value should be 0.30000000447, not 0.3000000119…

    Any ideas why this is inconsistent?
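
    For reference, a small Java snippet reproducing the observation above; the BigDecimal(double) constructor is used to print the exact value of the nearest binary float or double:

      import java.math.BigDecimal;

      // The float sum compares equal to 0.3f, while the double sum does not
      // compare equal to 0.3.
      public class FloatVsDouble {
          public static void main(String[] args) {
              System.out.println(0.1f + 0.2f == 0.3f);   // true
              System.out.println(0.1d + 0.2d == 0.3d);   // false

              // Exact decimal values of the nearest binary float/double:
              System.out.println(new BigDecimal((double) 0.1f)); // 0.100000001490116119384765625
              System.out.println(new BigDecimal((double) 0.2f)); // 0.20000000298023223876953125
              System.out.println(new BigDecimal((double) 0.3f)); // 0.300000011920928955078125
              System.out.println(new BigDecimal(0.1 + 0.2));
              // 0.3000000000000000444089209850062616169452667236328125
          }
      }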

    1. As 32-bit numbers, 0.1f is represented as 26843546*2**-28, so slightly over 0.1 (about 0.10000000149).

      0.2f is represented as 26843546*2**-27, so slightly over 0.2 (about 0.20000000298).

      0.3f is represented as 20132660*2**-26, so slightly over 0.3 (about 0.3000000119).

      If you were to assume that the sum is lossless, you would indeed expect about 0.30000000447034836, but when computing the sum, the processor rounds up to about 0.3000000119.

      Doing the computation manually, we get that the mantissa of 0.1f + 0.2f should be round(((26843546*2)+26843546)/4.0) = round(20132659.5) = 20132660 under round-to-even.
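
      These decompositions can be checked directly in Java with Float.floatToIntBits and Math.getExponent; a small sketch follows. It reports the normalized 24-bit significands, which are the same numbers as above with the significand halved and the exponent raised by one (for instance, 13421773 * 2^-27 equals 26843546 * 2^-28):

        import java.math.BigDecimal;

        // For a normal (non-subnormal) float, value = significand * 2^exponent,
        // where the significand is a 24-bit integer (the implicit leading 1 plus
        // the 23 stored fraction bits).
        public class FloatBits {
            static void decompose(String label, float x) {
                int bits = Float.floatToIntBits(x);
                int fraction = bits & 0x7FFFFF;            // low 23 bits
                int significand = (1 << 23) | fraction;    // add the implicit leading 1
                int exponent = Math.getExponent(x) - 23;   // power of two for that integer
                System.out.println(label + " = " + significand + " * 2^" + exponent
                        + " = " + new BigDecimal((double) x));
            }

            public static void main(String[] args) {
                decompose("0.1f       ", 0.1f);         // 13421773 * 2^-27
                decompose("0.2f       ", 0.2f);         // 13421773 * 2^-26
                decompose("0.3f       ", 0.3f);         // 10066330 * 2^-25
                decompose("0.1f + 0.2f", 0.1f + 0.2f);  // 10066330 * 2^-25, the same float as 0.3f
            }
        }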