Daniel Lemire's blog


Data scientists need to learn about significant digits

3 thoughts on “Data scientists need to learn about significant digits”

  1. John the Scott says:

    Most excellent post. I recommend Gustafson’s book for another angle on digital error.

    https://www.amazon.com/End-Error-Computing-Chapman-Computational/dp/1482239868/ref=sr_1_1?s=books&ie=UTF8&qid=1548866338&sr=1-1&keywords=the+end+of+error

  2. ttoinou says:

    Of course you’re right.

    If you’re exchanging information with scientists / engineers, you could also provide, with every figure F, its ±P “precision” (a Y% chance of lying in the Gaussian centered on F with standard deviation k(Y)·P, where k is computed from Y). That way, if the person you’re giving the information to needs to compute a new statistic, they can combine the Gaussian models and obtain a new (F′ ± P′). (A sketch of this combination rule appears after the comments.)

  3. Michael Nelson says:

    I would add to the statement “serious people will not be so easily fooled.” When I see such precision, it reduces my confidence in the source. My internal “bozo” warning light comes on.
    I had the concept of significant digits pounded into my head by my (very excellent) high school science teachers. Now I have an aversion to over-precision.
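
The combination rule ttoinou describes is ordinary Gaussian error propagation. Here is a minimal Python sketch of it, under stated assumptions: the two figures are independent, the new statistic is their sum, and the helper names (`sigma_from_precision`, `combine_sum`) are illustrative, not from the comment.

```python
from math import sqrt
from statistics import NormalDist

def z_score(Y: float) -> float:
    """Two-sided z-score for confidence level Y (e.g. ~1.96 for Y = 0.95)."""
    return NormalDist().inv_cdf(0.5 + Y / 2.0)

def sigma_from_precision(P: float, Y: float) -> float:
    """Invert P = z(Y) * sigma: a +/-P interval that holds with probability Y
    corresponds to a Gaussian with sigma = k(Y) * P, where k(Y) = 1 / z(Y)."""
    return P / z_score(Y)

def combine_sum(F1: float, P1: float, F2: float, P2: float, Y: float = 0.95):
    """Combine two independent figures (F1 +/- P1) and (F2 +/- P2) into their
    sum (F' +/- P'): means add, variances add (independence assumed), and the
    combined sigma is converted back to a +/-P' interval at confidence Y."""
    sigma = sqrt(sigma_from_precision(P1, Y) ** 2 +
                 sigma_from_precision(P2, Y) ** 2)
    return F1 + F2, z_score(Y) * sigma

# Example: (10.0 +/- 0.2) + (5.0 +/- 0.1), both at 95% confidence.
F, P = combine_sum(10.0, 0.2, 5.0, 0.1)
print(f"{F:.2f} +/- {P:.2f}")  # 15.00 +/- 0.22
```

Note that P′ ≈ 0.22 rather than the naive 0.3: independent Gaussian errors add in quadrature, so simply summing the ±P values would overstate the uncertainty.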