I did read your blog post back then, I’m sure, but I never connected it with what I see in research papers.
Anonymous says:
Hi Daniel, love your blog.
I see your point, but…
I just did a Google search for ‘fish’; the result: “About 359,000,000 results (0.17 seconds)”.
Suppose Google told me that they could make it 100 times faster, just 0.0017 seconds!
I really would not care; for me, in this context, there is no difference between 0.17 seconds and even 0.00000000000017 seconds.
Of course, you might argue that if I build a crawler that calls Google a million times, then I would care (a million queries at 0.17 seconds each is roughly two days of waiting; at 0.0017 seconds it is under half an hour). This is true, but there really are papers that make similar claims in domains where we just don’t need the speedup.
One example is a paper on a faster way to do calculations on human ancestor remains. They had a speed-up of a factor of two. However, all the prehistoric human ancestor remains we have could comfortably fit in a small suitcase. Making the algorithm faster was polishing the wrong apple; we just don’t need to speed up that problem.
When people talk about improving or comparing algorithms, the only meaningful way to present the results is a Pareto front. I learned about it too late; I should probably write a blog post about it. A minimal sketch of the idea follows.
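Here is a rough Python sketch of what I mean; the algorithm names and the (seconds, megabytes) figures are invented for illustration. Instead of a single “X times faster” number, you report the set of algorithms that are not dominated on both cost axes at once.

def pareto_front(points):
    """Return the points not dominated by any other point.

    A point p is dominated when some other point q is no worse
    than p in every dimension and strictly better in at least one.
    """
    front = []
    for p in points:
        dominated = any(
            q != p and all(qi <= pi for qi, pi in zip(q, p))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical measurements: (seconds, megabytes) per algorithm,
# lower is better on both axes.
results = {
    "baseline": (1.00, 50),
    "fast":     (0.40, 200),
    "balanced": (0.55, 80),
    "bad":      (1.20, 220),  # slower AND bigger: dominated
}

front = pareto_front(list(results.values()))
for name, cost in results.items():
    status = "on the front" if cost in front else "dominated"
    print(f"{name:9} {cost} -> {status}")

Running this marks "baseline", "fast", and "balanced" as the Pareto front (each wins on one axis), while "bad" is dominated; a fair comparison presents all three front members, not just the fastest one.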
Muhammad Alkarouri says:
On the other hand, beware of the fallacy of relative numbers: if my web site has the fastest growth in access, that may just be because it went up from 1 access (myself) to, say, 100. This is usually apparent in the adoption rates of, say, new browsers.
Frederick Mosteller coined the term numerator-only data for things like this.
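A toy Python illustration of the effect (all figures made up): percentage growth crowns the tiny site, while the absolute numbers tell the real story.

sites = {
    # name: (visits last month, visits this month)
    "my blog":    (1, 100),
    "big portal": (1_000_000, 1_100_000),
}

for name, (before, after) in sites.items():
    growth = (after - before) / before * 100
    print(f"{name:11} {growth:8.0f}% growth, {after - before:8,} extra visits")

This prints 9900% growth for the blog (99 extra visits) against 10% for the portal (100,000 extra visits): the numerator alone, without its base, tells you almost nothing.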
You’ve got to love blogging! Thanks!