Daniel Lemire's blog

Fastest way to compute the greatest common divisor

34 thoughts on “Fastest way to compute the greatest common divisor”

  1. What I don’t get is that you have a speedup only on numbers of the form m·2^k and n·2^j, and the speedup is proportional to min(j,k). How do you explain doubling the speed if asymptotically few pairs of numbers are of that form?

  2. lecteur habituel says:

    Euclid, not Euler.

    Thanks for the post!

    1. Maths Brane says:

      Yea, I heart Euler, but this is Euclid, all day.

  3. Another excellent example of shaving off constants!

  4. They’re not quadratic, they’re O(lg min(a,b)).

    see:

    http://en.wikipedia.org/wiki/Euclidean_algorithm#Algorithmic_efficiency
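
    To make the bound concrete, here is a small sketch (not from the post) that counts the remainder steps for consecutive Fibonacci numbers, the classic worst case for the Euclidean algorithm; the step count grows with the Fibonacci index, i.e., logarithmically in the size of the inputs.

    #include <stdio.h>

    /* Euclidean GCD with remainder, counting the number of division steps. */
    static unsigned gcd_steps(unsigned a, unsigned b, unsigned *steps) {
        *steps = 0;
        while (b != 0) {
            unsigned r = a % b;
            a = b;
            b = r;
            (*steps)++;
        }
        return a;
    }

    int main(void) {
        unsigned f1 = 1, f2 = 1; /* consecutive Fibonacci numbers */
        while (f2 < 1000000000u) {
            unsigned steps;
            gcd_steps(f2, f1, &steps);
            printf("gcd(%u, %u) took %u steps\n", f2, f1, steps);
            unsigned f3 = f1 + f2;
            f1 = f2;
            f2 = f3;
        }
        return 0;
    }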

  5. Per Persson says:

    “And someone ought to update the corresponding Wikipedia page.”

    Why don’t you do it yourself?

  6. Per Persson says:

    By the way, the numbers you used for testing are relatively small. More complicated algorithms are often slower for small numbers and don’t show their efficiency until the numbers are bigger. Without using anything bigger than uint32 you could test numbers of size ~1’000’000’000.

  7. Mike says:

    If you care about asymptotics, then both of these are quadratic. For a subquadratic algorithm, you need something like a half-gcd based algorithm.

  8. @Pigeon

    It is not necessary for the numbers to be divisible by two for them to benefit from the binary GCD.

    Take 3 and 5. After the first pass in the loop you get 3 and 2. The 2 gets back to 1 due to the ctz shift.

    The nice thing with the binary GCD is that it does not use any expensive operation (ctz is quite cheap on recent Intel processors) whereas the basic GCD relies on integer division.
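
    For readers following along, here is a minimal sketch of the two approaches being contrasted; the function names are mine, not the ones from the post, and the trace of the 3 and 5 example is in the comments.

    /* Euclidean GCD: one integer division (remainder) per iteration. */
    unsigned gcd_mod(unsigned a, unsigned b) {
        while (b != 0) {
            unsigned r = a % b;
            a = b;
            b = r;
        }
        return a;
    }

    /* Binary GCD: only subtractions, shifts and ctz, no division.
       For u = 3, v = 5: the first pass leaves (3, 2); the ctz shift at the
       start of the next pass turns the 2 into 1, and two more passes bring
       the pair to (1, 0), so the gcd is 1. */
    unsigned gcd_binary(unsigned u, unsigned v) {
        if (u == 0) return v;
        if (v == 0) return u;
        int shift = __builtin_ctz(u | v); /* common power of two, restored at the end */
        u >>= __builtin_ctz(u);           /* make u odd */
        do {
            v >>= __builtin_ctz(v);       /* make v odd */
            if (u > v) { unsigned t = v; v = u; u = t; }
            v -= u;                       /* difference of two odd numbers: even or zero */
        } while (v != 0);
        return u << shift;
    }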

  9. Mike says:

    They are quadratic when considering operations with integers that are larger than an unsigned int/long.

    For example, see: https://gmplib.org/manual/Greatest-Common-Divisor-Algorithms.html and https://gmplib.org/manual/Binary-GCD.html

  10. Hi Daniel, I can trim another 12% off your gcd() above by removing the two redundant shifts by “shift” of “u” and “v” that occur before the loop.
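
    A sketch of what that trim might look like, assuming the setup in the post resembled the Wikipedia-style binary GCD (the exact original is not quoted here); the two commented-out lines are the redundant shifts, which the ctz-based shifts already cover.

    unsigned gcd_trimmed(unsigned u, unsigned v) {
        if (u == 0) return v;
        if (v == 0) return u;
        int shift = __builtin_ctz(u | v); /* kept: restores the common power of two */
        /* u >>= shift; */                /* removed: u is reduced by ctz(u) just below */
        /* v >>= shift; */                /* removed: the loop shifts v by ctz(v) anyway */
        u >>= __builtin_ctz(u);
        do {
            v >>= __builtin_ctz(v);
            if (u > v) { unsigned t = v; v = u; u = t; }
            v -= u;
        } while (v != 0);
        return u << shift;
    }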

  11. @Ralph

    Well done. I have updated my blog post and credited you for the gains.

  12. KWillets says:

    I wonder if you could save a cmpl by reusing the u > v comparison for the loop break as well. That is:

    if( u == v)
    break;
    else if (u > v)

    This will shorten the last iteration and probably speed up the speculative execution.
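
    Filled out around that fragment, the restructured loop might read as follows (a sketch, with the same setup as the binary GCD variants above; the function name is illustrative):

    unsigned gcd_reuse_cmp(unsigned u, unsigned v) {
        if (u == 0) return v;
        if (v == 0) return u;
        int shift = __builtin_ctz(u | v);
        u >>= __builtin_ctz(u);
        for (;;) {
            v >>= __builtin_ctz(v);
            if (u == v)
                break;              /* the loop exit reuses the comparison... */
            else if (u > v) {       /* ...that also decides the swap */
                unsigned t = v;
                v = u;
                u = t;
            }
            v -= u;                 /* strictly positive here, so ctz(v) stays well defined */
        }
        return u << shift;
    }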

  13. @KWillets

    With clang, your version is faster. With GCC, the version in the blog post is faster. The difference is within 10%.

    If I played with compiler flags, there might be other differences as well.

    In any case, your version is on github if you want to benchmark it.

  14. @Persson

    I have added a test in my code with large numbers but it makes no difference. Of course, these are word-size integers… results would differ with big integers.

  15. Hi again Daniel, I can save a further 7.5% on my earlier suggestion by altering the loop to

    do {
        unsigned m;
        v >>= __builtin_ctz(v);
        m = (v ^ u) & -(v < u); /* m is all ones when v < u, zero otherwise */
        u ^= m;                 /* xor-swap u and v only when v < u, */
        v ^= m;                 /* so that u holds the smaller value */
        v -= u;
    } while (v);

  16. I have re-run tests with a version using the built-ins. The speed-ups are there indeed: 40% on larger numbers.

    http://hbfs.wordpress.com/2013/12/10/the-speed-of-gcd/

    (and @Mike I think the state of the art for fast division is O(n^log_2(3)), which is still more than linear, but subquadratic.)

  17. KWillets says:

    For my tweak the assembler output from gcc has the comparison, then a branch to the top of the loop, then the same comparison :(. The second comparison isn’t reachable by any other path either.

    Maybe some syntactic shuffling would trigger the optimization; I may give it a few tries later.

  18. KWillets says:

    This is faster on my version of gcc:

    {
        int shift, uz, vz;
        uz = __builtin_ctz(u);
        if (u == 0) return v;

        vz = __builtin_ctz(v);
        if (v == 0) return u;

        shift = uz > vz ? vz : uz;

        u >>= uz;

        do {
            v >>= vz;

            if (u > v) {
                unsigned int t = v;
                v = u;
                u = t;
            }

            v = v - u;
            vz = __builtin_ctz(v);
        } while (v != 0);

        return u << shift;
    }

    Results:

    gcd between numbers in [1 and 2000]
    26.4901 17.6991 32.7869 25.974 24.3902 31.746 36.6972

    I was actually trying to get it to utilize the fact that ctz sets the == 0 flag when its argument is 0, so a following test against 0 should not need an extra instruction. However the compiler didn't notice. Instead it set up some interesting instruction interleaving so that the v != 0 test is actually u == v before the subtraction; I believe this is to enable ILP.

    Also, using an inline xchg instruction for the swap doubles the speed:

    gcd between numbers in [1 and 2000]
    26.1438 16.3934 33.6134 25.974 25.4777 30.5344 72.7273
    gcd between numbers in [1000000001 and 1000002000]
    26.1438 16 33.8983 25.974 25.3165 29.6296 72.7273

  19. @KWillets

    Thanks. I have added your code to the benchmark.

    Do you have the code for the version with the xchg instruction?

  20. @Ralph

    I added your version to the benchmark.

  21. KWillets says:

    Here’s the asm for the swap; I just replaced the part inside the brackets with xswap(u,v):

    #define xswap(a,b) __asm__ (\
        "xchg %0, %1\n"\
        : : "r"(a), "r" (b));

    Unfortunately I don’t understand if this is correctly defined (I copied it from some poorly-documented examples), but the assembler output looks good.

  22. @KWillets

    I have checked into github a version with your inline assembly (slightly tweaked to be more standard). It is not faster.

    When I ran your code “as is” I got failed tests.

    https://github.com/lemire/Code-used-on-Daniel-Lemire-s-blog/blob/master/2013/12/26/gcd.cpp

  23. KWillets says:

    Looking at Steven’s asm listings, I realized that my compiler was significantly behind, so I downloaded 3G of Apple “updates” last night. These results are now from clang-500.2.79.

    I started playing around with various ways of getting abs(v-u) (especially when unsigned) and also realized that bsfl(x) == bsfl(-x), so this works for the inner loop on gcdwikipedia5fast:

    do {
        v >>= vz;
        unsigned int diff = v;
        diff -= u;
        vz = __builtin_ctz(diff);
        if (diff == 0) break;
        if (v < u) {
            u = v;
            v = 0 - diff;
        } else
            v = diff;
    } while (1);

    If diff is signed 32-bit it's slightly faster, abs(diff) can be used, and the v < u test can be switched to diff < 0 for a slight gain. But it becomes a 31-bit algorithm. I haven't tried signed 64-bit yet.

    Using bsfl(diff) instead of v seems to speed it up significantly; it's probably ILP again since it doesn't have to wait for v to finalize.

  24. KWillets says:

    Hold on, I just tried signed 64-bit and got a huge boost:

    do {
        v >>= vz;
        long long int diff = v;
        diff -= u;
        vz = __builtin_ctz(diff);
        if (diff == 0) break;
        if (diff < 0)
            u = v;
        v = abs(diff);
    } while (1);

  25. @KWillets

    I added these two alternatives to the benchmark.

    I find that results vary a lot depending on the compiler and processor. It is hard to identify a clear winner… except that they are all faster than the Euclidean algorithm with remainder.

  26. KWillets says:

    I checked the new revision and the 64-bit version (7) should use abs() and a few other edits.

    Should I be submitting edits to github?

  27. Taeseung Lee says:

    Thanks for the post!

  28. detailyang says:

    It’s cool, and it’s 3x faster than mod in my Go implementation.

  29. George Spelvin says:

    It’s possible to slightly improve Ralph Corderoy’s branch-free code above (Dec. 28 comment) by using a difference delta rather than an xor delta.

    If you don’t mind limiting the input range to INT_MAX, the sign of (int)(v-u) can be used to control the swap:

    v -= u;
    mask = (int)v >> 31;
    u += v & mask; /* u + (v - u) = v */
    v = (v + mask) ^ mask; /* Conditional negate ~(v - 1) = -v */

    If you want to accept inputs up to UINT_MAX, it’s still possible to combine the subtract and mask formation with a bit of asm magic (x86 AT&T syntax) to get access to the carry flag:

    asm("sub %2,%1; sbb %0,%0" : "=r" (mask), "+r" (v) : "g" (u));

    Depending on the CPU, it may be worth spending an instruction to clear the mask to avoid a false dependency on its previous value. Add
    , "0" (0)
    to the end of the list of input parameters. (For those not familiar with GCC asm syntax, the 0 in quotes means that this input operand should be in the same register as output operand 0, the mask. The 0 in parens is the operand value. GCC will generate an xorl instruction to zero the mask.)
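
    Putting those pieces together, a sketch of how the difference-delta step might sit inside the full loop (inputs limited to INT_MAX as noted above; the function name and skeleton are illustrative, not the code from the repository):

    unsigned gcd_diff_delta(unsigned u, unsigned v) {
        if (u == 0) return v;
        if (v == 0) return u;
        int shift = __builtin_ctz(u | v);
        u >>= __builtin_ctz(u);
        do {
            v >>= __builtin_ctz(v);
            v -= u;                                   /* wraps "negative" when v < u */
            unsigned mask = (unsigned)((int)v >> 31); /* all ones iff v < u */
            u += v & mask;                            /* u becomes the smaller value */
            v = (v + mask) ^ mask;                    /* conditional negate: |difference| */
        } while (v != 0);
        return u << shift;
    }

    For the full UINT_MAX range, the subtraction and the mask formation would instead be fused with the carry-flag asm, with the zero-clearing operand appended as described:

    asm("sub %2,%1; sbb %0,%0" : "=r" (mask), "+r" (v) : "g" (u), "0" (0));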

    1. Your proposal was added to the benchmark. Here are the numbers on my laptop:

      ❯ ./gcd
      gcd between numbers in [1 and 2000]
      Running tests... Ok!
      We proceed to report timings (smaller values are better).
      basicgcd                    54.0541
      gcdwikipedia2               23.5294
      gcdwikipedia2fast           66.6667
      gcd_recursive               54.0541
      gcd_iterative_mod           53.3333
      gcdFranke                   68.9655
      gcdwikipedia3fast           66.6667
      gcdwikipedia4fast           54.0541
      gcdwikipedia5fast           66.6667
      gcdwikipedia2fastswap       64.5161
      gcdwikipedia7fast           86.9565
      gcdwikipedia7fast32         85.1064
      gcdwikipedia8Spelvin        58.8235
      

      gcd between numbers in [1000000001 and 1000002000]
      Running tests... Ok!
      We proceed to report timings (smaller values are better).
      basicgcd                    54.0541
      gcdwikipedia2               23.6686
      gcdwikipedia2fast           66.6667
      gcd_recursive               53.3333
      gcd_iterative_mod           54.0541
      gcdFranke                   68.9655
      gcdwikipedia3fast           65.5738
      gcdwikipedia4fast           54.7945
      gcdwikipedia5fast           66.6667
      gcdwikipedia2fastswap       64.5161
      gcdwikipedia7fast           86.9565
      gcdwikipedia7fast32         83.3333
      gcdwikipedia8Spelvin        58.8235

      1. I’ve just noticed that your benchmarks don’t change the number range between runs; the offset is used only in the tests. With the benchmarks corrected, the mod-based approach comes out faster on the larger value range.

        Also, the label on the results is misleading. The benchmark appears to be reporting operations per millisecond, not timings, so larger values are better.

        Results on my TGL-H laptop after corrections:


        gcd between numbers in [1 and 2000]
        Running tests... Ok!
        Kops/ms (larger values are better).
        basicgcd 50
        gcdwikipedia2 20.3046
        gcdwikipedia2fast 44.4444
        gcd_recursive 50
        gcd_iterative_mod 48.1928
        gcdFranke 46.5116
        gcdwikipedia3fast 45.4545
        gcdwikipedia4fast 61.5385
        gcdwikipedia5fast 44.9438
        gcdwikipedia2fastswap 43.0108
        gcdwikipedia7fast 48.1928
        gcdwikipedia7fast32 76.9231
        gcdwikipedia8Spelvin 64.5161

        gcd between numbers in [1000000001 and 1000002000]
        Running tests... Ok!
        Kops/ms (larger values are better).
        basicgcd 37.7358
        gcdwikipedia2 7.15564
        gcdwikipedia2fast 16.4609
        gcd_recursive 37.037
        gcd_iterative_mod 37.7358
        gcdFranke 16.5289
        gcdwikipedia3fast 16.3934
        gcdwikipedia4fast 22.3464
        gcdwikipedia5fast 16.4609
        gcdwikipedia2fastswap 16.3265
        gcdwikipedia7fast 18.3486
        gcdwikipedia7fast32 31.746
        gcdwikipedia8Spelvin 23.3918

        I’ve submitted a PR with the fixes.

        1. Thanks. Here are my results after merging your fix.

          ❯ ./gcd
          gcd between numbers in [1 and 2000]
          Running tests... Ok!
          We proceed to report kops/ms (larger values are better).
          basicgcd                    40.404
          gcdwikipedia2               20.1005
          gcdwikipedia2fast           54.7945
          gcd_recursive               42.5532
          gcd_iterative_mod           42.5532
          gcdFranke                   38.0952
          gcdwikipedia3fast           53.3333
          gcdwikipedia4fast           42.1053
          gcdwikipedia5fast           53.3333
          gcdwikipedia2fastswap       55.5556
          gcdwikipedia7fast           67.7966
          gcdwikipedia7fast32         51.2821
          gcdwikipedia8Spelvin        46.5116
          gcd_mod_faster              43.956
          

          gcd between numbers in [1000000001 and 1000002000]
          Running tests... Ok!
          We proceed to report kops/ms (larger values are better).
          basicgcd                    30.5344
          gcdwikipedia2               7.28597
          gcdwikipedia2fast           19.802
          gcd_recursive               30.0752
          gcd_iterative_mod           30.303
          gcdFranke                   14.0351
          gcdwikipedia3fast           18.4332
          gcdwikipedia4fast           14.2857
          gcdwikipedia5fast           18.5185
          gcdwikipedia2fastswap       18.7793
          gcdwikipedia7fast           23.9521
          gcdwikipedia7fast32         18.5185
          gcdwikipedia8Spelvin        46.5116
          gcd_mod_faster              32.5203

  30. Hakuna Matata says:

    Tested on random unsigned ints from the full 32-bit range. `gcdwikipedia4fast()` is the fastest. Not every algorithm survived the test, by the way.
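
    For anyone who wants to reproduce that kind of check, here is a sketch of a small harness; none of it is from the post, and the xorshift generator is just a convenient way to get well-spread 32-bit values.

    #include <stdint.h>
    #include <stdio.h>

    /* Reference implementation: Euclidean GCD with remainder. */
    static uint32_t gcd_reference(uint32_t a, uint32_t b) {
        while (b != 0) {
            uint32_t r = a % b;
            a = b;
            b = r;
        }
        return a;
    }

    /* Full-range 32-bit generator (never yields 0, which conveniently sidesteps
       the zero special case that some variants do not handle). */
    static uint32_t xorshift32(uint32_t *state) {
        uint32_t x = *state;
        x ^= x << 13;
        x ^= x >> 17;
        x ^= x << 5;
        return *state = x;
    }

    /* Returns 1 if the candidate agrees with the reference on 'trials' random pairs. */
    static int check_candidate(uint32_t (*candidate)(uint32_t, uint32_t), int trials) {
        uint32_t state = 0x12345678u;
        for (int i = 0; i < trials; i++) {
            uint32_t a = xorshift32(&state);
            uint32_t b = xorshift32(&state);
            if (candidate(a, b) != gcd_reference(a, b)) {
                printf("mismatch: gcd(%u, %u)\n", a, b);
                return 0;
            }
        }
        return 1;
    }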