Daniel Lemire's blog


Number of atoms in the universe versus floating-point values

7 thoughts on “Number of atoms in the universe versus floating-point values”

  1. Albert says:

    An implicit assumption is that the only thing one can do with numbers is count physical objects. (Actually, that would be two assumptions, but who’s counting?) This assumption directly contradicts my experience.
    Besides, it is impossible to represent most integers in the range from 0 to 10^308 with float64.

    1. I did not write that binary64 was good enough for all purposes. That’s not what I believe.
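
    A quick C++ sketch of that limit: beyond 2^53, binary64 can no longer distinguish consecutive integers, so most of the integers below 10^308 are simply not representable.

    ```cpp
    #include <cstdio>

    int main() {
        // Every integer up to 2^53 is exactly representable in binary64,
        // but past that point adjacent doubles are more than 1 apart,
        // so most larger integers cannot be represented at all.
        double x = 9007199254740992.0;  // 2^53
        double y = x + 1.0;             // 2^53 + 1 rounds back down to 2^53
        std::printf("2^53     = %.1f\n", x);
        std::printf("2^53 + 1 = %.1f\n", y);
        std::printf("distinct? %s\n", (x == y) ? "no" : "yes");
        return 0;
    }
    ```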

  2. Brian Kessler says:

    I think it would be more accurate to say you can represent the magnitude of the number of atoms in the universe. A double only has 53 bits of precision, so you can’t use it to “count” that high, but you can represent the leading 53 bits (~17 decimal digits) of a number that large.

    But yes, if you exceed the range of a double, you likely have an issue with your calculation, such as a poor choice of units.
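
    A small C++ sketch of that distinction, using the common ~10^80 estimate for the number of atoms in the observable universe:

    ```cpp
    #include <cmath>
    #include <cstdio>

    int main() {
        // binary64 easily holds the magnitude of ~10^80 atoms, to about
        // 17 significant decimal digits...
        double atoms = 1e80;
        std::printf("atoms              = %.17g\n", atoms);
        // ...but it cannot count at that scale: adjacent doubles near 1e80
        // are roughly 1e64 apart, so adding one atom changes nothing.
        std::printf("atoms + 1 == atoms : %s\n", (atoms + 1.0 == atoms) ? "true" : "false");
        std::printf("gap near 1e80      = %.3g\n", std::nextafter(atoms, INFINITY) - atoms);
        return 0;
    }
    ```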

  3. traski says:

    *Observable universe

  4. Marcos says:

    Nobody will ever make a machine with 2^53 parts either, so for counting, a double is enough.

    The entire problem with floating-point numbers is loss of precision during calculation. That is a purely mathematical phenomenon, so pointing at physics misses the point, and errors do accumulate in a superlinear fashion, so that large mantissa is way less useful than it looks.

    There is a reason why quad-precision floats exist.

    1. Again: this post was not a defence of binary64 in general.
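
    A small C++ sketch of that kind of precision loss in a plain running sum, with Kahan compensated summation as the textbook mitigation (compile without fast-math so the compensation is not optimized away):

    ```cpp
    #include <cstdio>

    int main() {
        // Adding 0.1 ten million times should give exactly 1,000,000, but
        // 0.1 is not representable in binary64 and every addition rounds,
        // so the naive running sum drifts away from the true value.
        const int n = 10000000;
        double naive = 0.0;
        double sum = 0.0, c = 0.0;   // Kahan compensated summation
        for (int i = 0; i < n; i++) {
            naive += 0.1;
            double y = 0.1 - c;      // re-inject the error lost last time
            double t = sum + y;
            c = (t - sum) - y;       // rounding error of this addition
            sum = t;
        }
        std::printf("naive sum       : %.10f\n", naive);
        std::printf("compensated sum : %.10f\n", sum);
        std::printf("exact value     : %.10f\n", 1000000.0);
        return 0;
    }
    ```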

  5. Christopher Chang says:

    It’s worth noting that this rule of thumb is not true in the other direction: likelihood values between 0 and the smallest positive value representable by a double (~5 * 10^{-324}) frequently show up. This can sometimes be worked around by normalizing against the likelihood of a specific event, but library support for log-likelihoods is very valuable.
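
    A minimal C++ sketch of that failure mode, with made-up values (1,000 observations, each with likelihood 0.001): the direct product underflows to zero, while the sum of logs stays perfectly manageable.

    ```cpp
    #include <cmath>
    #include <cstdio>
    #include <limits>

    int main() {
        // The smallest positive (subnormal) double is about 5e-324.
        std::printf("smallest positive double : %g\n",
                    std::numeric_limits<double>::denorm_min());

        const int n = 1000;        // number of observations (illustrative)
        const double p = 0.001;    // per-observation likelihood (illustrative)

        double product = 1.0;      // direct product: will underflow to 0
        double log_sum = 0.0;      // log-likelihood: stays finite
        for (int i = 0; i < n; i++) {
            product *= p;
            log_sum += std::log(p);
        }
        std::printf("direct product           : %g\n", product);   // prints 0
        std::printf("sum of log-likelihoods   : %g\n", log_sum);   // about -6908
        return 0;
    }
    ```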