An implicit assumption is that the only thing one can do with numbers is count physical objects. (Actually, that would be two assumptions, but who’s counting?) This assumption directly contradicts my experience.
Besides, it is impossible to represent most integers in the range between 0 and 10^308 with float64.
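To make that concrete, here is a quick Python sketch (standard library only; the specific values are just illustrations): float64 has a 53-bit significand, so above 2^53 it can no longer distinguish consecutive integers, and the gaps between representable values keep widening.

```python
import math

# float64 carries a 53-bit significand: every integer up to 2**53 is exact,
# but beyond that, consecutive integers start to collide.
n = 2**53
assert float(n) == n              # 9007199254740992 is exactly representable
assert float(n + 1) == float(n)   # n + 1 rounds back down to n

# The spacing between adjacent doubles grows with magnitude; near 10^300
# it is astronomically larger than 1, so almost no integer there is exact.
print(math.ulp(float(n)))   # 2.0: integers near 2^53 come in steps of 2
print(math.ulp(1e300))      # ~1.5e284: the gap between adjacent doubles near 10^300
```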
I did not write that binary64 was good enough for all purposes. That’s not what I believe.
Brian Kessler says:
I think it would be more accurate to say you can represent the magnitude of the number of atoms in the universe. A double only has 53 bits of precision, so you can’t use it to “count” that high, but you can represent the leading 53 bits (~17 decimal digits) of a number that large.
But yes, if you exceed the range of a double, you likely have an issue with your calculation, such as a poor choice of units.
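A small Python illustration of the distinction (using ~10^80, the commonly cited atom count for the observable universe, purely as a stand-in magnitude):

```python
# A double can hold the *magnitude* of ~10^80 atoms, but it cannot count
# by ones at that scale: adding 1 falls below the rounding granularity.
atoms = 1e80
assert atoms + 1 == atoms

# Only the leading 53 bits (~15-17 decimal digits) survive the conversion.
x = float(123456789012345678901)   # 21 decimal digits requested
print(f"{x:.0f}")                  # agrees with the input only in the leading digits
```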
traski says:
*Observable universe
Marcos says:
Nobody will ever make a machine with 2^53 parts either, so for counting, a double is enough.
The entire problem with floating-point numbers is loss of precision during calculation. That is a purely mathematical phenomenon, so pointing at physics misses the point, and errors do accumulate in a superlinear fashion, so that large mantissa is far less useful than it looks.
There is a reason why quad-precision floats exist.
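A minimal Python sketch of that accumulation (exact figures vary by platform, but the drift is systematic): repeatedly adding 0.1, which binary64 cannot represent exactly, versus math.fsum, which returns the correctly rounded sum of the same inputs.

```python
import math

# 0.1 is not exactly representable in binary64, so every addition below
# commits a small rounding error; the errors compound over the loop.
naive = 0.0
for _ in range(10_000_000):
    naive += 0.1

# math.fsum tracks the lost low-order bits and rounds only once at the end.
accurate = math.fsum([0.1] * 10_000_000)

print(naive)                  # drifts visibly away from 1000000.0
print(accurate)               # the correctly rounded sum
print(abs(naive - accurate))  # the accumulated error of the naive loop
```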
Again: this post was not a defence of binary64 in general.
Christopher Chang says:
It’s worth noting that this rule of thumb is not true in the other direction: likelihood values between 0 and the smallest positive value representable by a double (~5 * 10^{-324}) frequently show up. This can sometimes be worked around by normalizing against the likelihood of a specific event, but library support for log-likelihoods is very valuable.
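For concreteness, a short Python sketch (the per-event likelihood of 1e-4 is an arbitrary illustrative value): the product of many small likelihoods underflows to zero, while the log-space sum stays finite.

```python
import math

# Multiplying many small likelihoods underflows long before the math says
# it should: the smallest positive double is about 5e-324.
probs = [1e-4] * 100   # true product is 1e-400, below the subnormal floor
p = 1.0
for q in probs:
    p *= q
print(p)               # 0.0 -- the product underflowed

# Summing log-likelihoods instead keeps the value finite and comparable.
log_p = sum(math.log(q) for q in probs)
print(log_p)           # about -921.03, i.e. log(1e-400)
```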