Is multiplication slower than addition?
Earlier, I asked whether integer addition was faster than bitwise exclusive or. My tests showed no difference, and nobody contradicted me.
However, everyone knows that multiplication is slower than addition, right? In cryptography, there are many papers on how to trade multiplications for additions to speed up software.
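One classic instance of this trade, my own illustration rather than one drawn from those papers, is Gauss's trick for multiplying complex numbers: three multiplications instead of the schoolbook four, at the cost of a few extra additions and subtractions.

```cpp
// Illustration: Gauss's trick computes (a + b*i) * (c + d*i)
// with 3 multiplications instead of 4, plus extra additions.
#include <cstdio>

int main() {
    double a = 1, b = 2, c = 3, d = 4;
    double k1 = c * (a + b);  // multiplication 1
    double k2 = a * (d - c);  // multiplication 2
    double k3 = b * (c + d);  // multiplication 3
    double real = k1 - k3;    // equals a*c - b*d
    double imag = k1 + k2;    // equals a*d + b*c
    printf("(%g) + (%g)i\n", real, imag);  // prints (-5) + (10)i
    return 0;
}
```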
So, can you predict which piece of code runs faster?
scalar product (N multiplications):

```cpp
for (int k = 0; k < N; ++k)
    answer += vector1[k] * vector2[k];
```
scalar product two-by-two (N multiplications):

```cpp
for (int k = 0; k < N; k += 2)
    answer += vector1[k] * vector2[k]
            + vector1[k + 1] * vector2[k + 1];
```
non-standard scalar product (N/2 multiplications):

```cpp
for (int k = 0; k < N; k += 2)
    answer += (vector1[k] + vector2[k])
            * (vector1[k + 1] + vector2[k + 1]);
```

(This does not compute the same value as the scalar product, but it does comparable work while using only half as many multiplications.)
just additions (no multiplication):

```cpp
for (int k = 0; k < N; ++k)
    answer += vector1[k] + vector2[k];
```
Answer: Merely reducing the number of multiplications has no benefit in these tests. Hence, simple computational cost models, such as counting the number of multiplications, may not hold on modern superscalar processors.
My results using GNU GCC 4.2.1 on both a desktop and a laptop:
| algorithm | Intel Core i7 (s) | Intel Core 2 Duo (s) |
|---|---|---|
| scalar product | 0.30 | 0.39 |
| scalar product (2×2) | 0.25 | 0.39 |
| fewer multiplications | 0.25 | 0.39 |
| just additions | 0.16 | 0.23 |
Times are in seconds. The source code avoids pointer arithmetic; the same test with pointer arithmetic runs faster but leads to the same conclusion. I also tried a similar experiment in Java, and it confirms my result.
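If you want to reproduce the experiment, here is a minimal sketch of the kind of timing harness one might use. It is a reconstruction for illustration, not the actual benchmark: the vector length, the repeat count, and the initial values are my own choices.

```cpp
// Hypothetical reconstruction of the benchmark: times the "scalar product"
// and "just additions" loops over large integer vectors.
#include <cstdio>
#include <ctime>
#include <vector>

int main() {
    const int N = 10 * 1000 * 1000;  // vector length (assumed; the post does not give N)
    const int repeats = 10;          // repeat passes to get measurable wall-clock times
    std::vector<int> vector1(N, 3), vector2(N, 5);
    long long answer = 0;

    clock_t start = clock();
    for (int r = 0; r < repeats; ++r)
        for (int k = 0; k < N; ++k)  // N multiplications per pass
            answer += vector1[k] * vector2[k];
    clock_t middle = clock();
    for (int r = 0; r < repeats; ++r)
        for (int k = 0; k < N; ++k)  // additions only
            answer += vector1[k] + vector2[k];
    clock_t finish = clock();

    printf("scalar product: %.2f s\n", (middle - start) / (double) CLOCKS_PER_SEC);
    printf("just additions: %.2f s\n", (finish - middle) / (double) CLOCKS_PER_SEC);
    printf("checksum: %lld\n", answer);  // keeps the compiler from discarding the loops
    return 0;
}
```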
Code: Source code posted on my blog is available from a GitHub repository.