Daniel Lemire's blog


The five most important algorithms?

21 thoughts on “The five most important algorithms?”

  1. Peter Turney says:

    My favourites in the original list and in your subset are hashing and SVD. But both of these are perhaps better described as data structures or representations, rather than algorithms. Linked lists (for sparse matrices, trees, graphs, etc.) are another very useful invention of this type. This is a rather subjective question, but perhaps data structures (representations) are more important (interesting, fundamental) than algorithms? Of course, a data structure isn’t much use without an algorithm, and vice versa.

  2. Peter Turney says:

    Another example: floating point numbers. We take them for granted, but there is a lot of engineering behind IEEE 754.
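
    A quick demonstration of why that engineering matters (a minimal Python sketch; the point is simply that 0.1 has no exact binary64 representation):

    ```python
    import struct

    # Decimal 0.1 cannot be represented exactly in binary, so rounding
    # error shows up even in trivial arithmetic.
    print(0.1 + 0.2 == 0.3)  # False
    print(0.1 + 0.2)         # 0.30000000000000004

    # Inspect the raw 64-bit IEEE 754 encoding of a double:
    # 1 sign bit | 11 exponent bits | 52 mantissa bits.
    bits = struct.unpack("<Q", struct.pack("<d", 0.1))[0]
    print(f"{bits:064b}")
    ```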

  3. John says:

    Some of his algorithms are not specific algorithms but classes, sometimes enormous classes, of algorithms. For example, methods for solving linear systems of equations.

    The simplex method is important historically, but I assume it is on the list as a placeholder for the set of convex optimization methods that have replaced it in practice.

    I’d definitely put methods for solving linear systems in the top five. These algorithms are important in their own right, but they’re also part of other algorithms, such as convex optimization.
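
    To make the “methods for solving linear systems” entry concrete, here is a minimal sketch of Gaussian elimination with partial pivoting (illustrative only; in practice one would call a tuned library such as LAPACK):

    ```python
    def solve(A, b):
        """Solve Ax = b by Gaussian elimination with partial pivoting.

        A is a list of row lists, b a list; both are modified in place.
        """
        n = len(A)
        for col in range(n):
            # Partial pivoting: swap in the row with the largest pivot.
            pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
            A[col], A[pivot] = A[pivot], A[col]
            b[col], b[pivot] = b[pivot], b[col]
            # Eliminate the entries below the pivot.
            for row in range(col + 1, n):
                factor = A[row][col] / A[col][col]
                for k in range(col, n):
                    A[row][k] -= factor * A[col][k]
                b[row] -= factor * b[col]
        # Back substitution on the resulting upper-triangular system.
        x = [0.0] * n
        for row in range(n - 1, -1, -1):
            s = sum(A[row][k] * x[k] for k in range(row + 1, n))
            x[row] = (b[row] - s) / A[row][row]
        return x

    print(solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # [0.8, 1.4]
    ```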

  4. Would you consider the Expectation Maximization and Metropolis-Hasting to be algorithms? (Even though not invented within computer science?)

  5. Philippe Beaudoin says:

    I agree with your list. Although I want to point out my most recent “aha!” moment: the Burrows-Wheeler transform. http://en.wikipedia.org/wiki/Burrows%E2%80%93Wheeler_transform
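
    For the curious, a deliberately naive Python sketch of the forward transform (sorting all rotations is O(n² log n); real implementations use suffix arrays):

    ```python
    def bwt(s, eos="\0"):
        """Naive Burrows-Wheeler transform: sort all rotations of the
        string, then take the last column of the sorted rotations."""
        s += eos  # unique end-of-string marker makes the transform invertible
        rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
        return "".join(rot[-1] for rot in rotations)

    # Identical characters cluster together, which is what makes the
    # output so compressible.
    print(bwt("banana"))  # 'annb\x00aa'
    ```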

    Like you, I didn’t learn SVD until grad school. The best source I found for learning it on one’s own is: http://www.uwlax.edu/faculty/will/svd/index.html

    As far as such good learn-it-yourself documents go, thumbs up also to Jonathan Richard Shewchuk for his introduction to the conjugate gradient method:
    http://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf

  6. Ragib Hasan says:

    My choices are:

    1. Quicksort (though I find it ugly and difficult to explain to someone 🙁 )

    2. Dijkstra’s algorithm (Shortest path)

    3. Euclidean algorithm for GCD (a minimal sketch follows this list)

    4. Strassen’s algorithm for matrix multiplication
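
    The sketch promised in item 3 (with the classic textbook values as a test case):

    ```python
    def gcd(a, b):
        """Euclid's algorithm: gcd(a, b) == gcd(b, a mod b)."""
        while b:
            a, b = b, a % b
        return a

    print(gcd(1071, 462))  # 21
    ```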

  7. Richie says:

    Good list, but I think the RSA encryption algorithm and the Diffie-Hellman key exchange do need a mention 🙂

    I would add two more to my list: data compression algorithms and the Viterbi algorithm 🙂

  8. Guru Kini says:

    Great list.
    I would add Quicksort, Public-Key Cryptography and LZ Compression algorithms to the list.
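
    Quicksort comes up repeatedly in this thread, so here is a minimal non-in-place Python sketch (production versions partition in place and fall back to insertion sort on small ranges):

    ```python
    def quicksort(xs):
        """Simple quicksort: partition around a pivot, recurse on each side."""
        if len(xs) <= 1:
            return xs
        pivot = xs[len(xs) // 2]
        left = [x for x in xs if x < pivot]
        mid = [x for x in xs if x == pivot]
        right = [x for x in xs if x > pivot]
        return quicksort(left) + mid + quicksort(right)

    print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
    ```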

  9. Nicholas says:

    Simple gradient descent maybe? After all, it is the basis of many first- and second-order optimization methods.
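
    A minimal sketch of the idea on a toy one-dimensional objective, f(x) = (x − 3)², whose gradient is 2(x − 3):

    ```python
    def gradient_descent(grad, x0, lr=0.1, steps=100):
        """Repeatedly step against the gradient of the objective."""
        x = x0
        for _ in range(steps):
            x -= lr * grad(x)
        return x

    # Minimize f(x) = (x - 3)**2; the gradient is 2 * (x - 3).
    print(gradient_descent(lambda x: 2 * (x - 3), x0=0.0))  # ~3.0
    ```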

  10. ponyo2 says:

    I would rate the following algorithm families as more important than all of the algorithms on Koutschan’s list:

    – linear search: I think this is the single most *important* algorithm

    – numerical addition, subtraction, multiplication, division

    – random number generating algorithms

    – stack-based algorithms, such as the use of a call-stack in (almost every) modern programming language

    One other note: I’ve always been struck by the paucity of real uses of binary search, especially as compared to things like linear search, stacks, or hashing. The fact that it only works on perfectly sorted data makes it nearly useless in practice — too beautiful for this ugly world, as the poet might say. It’s mainly a nice introduction to the idea of divide-and-conquer.
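
    For reference, the textbook version ponyo2 is describing (famously easy to get wrong; the midpoint computation below sidesteps the classic overflow bug in languages with fixed-width integers):

    ```python
    def binary_search(sorted_list, target):
        """Return an index of target in sorted_list, or -1 if absent.
        Requires fully sorted input, as ponyo2 notes."""
        lo, hi = 0, len(sorted_list) - 1
        while lo <= hi:
            mid = lo + (hi - lo) // 2  # avoids overflow in fixed-width languages
            if sorted_list[mid] == target:
                return mid
            if sorted_list[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    print(binary_search([2, 3, 5, 7, 11, 13], 7))  # 3
    ```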

  11. A couple of quick remarks…
    1) The simplex algorithm is still widely used. Despite their better worst-case complexity bounds, interior point methods perform better only on some classes of problems. AFAIK it is still an open issue to find a heuristic saying when it is better to use one rather than the other.
    2) On the contrary, Strassen’s algorithm is of theoretical interest, but AFAIK mostly unused in practice due to its numerical instability.

    Oh, and another interesting view on the subject: here http://view.eecs.berkeley.edu/wiki/Dwarfs is a list of 13 computational kernels that are believed to be the “most important algorithms” to implement efficiently on the computers of tomorrow.

  12. Philippe Beaudoin says:

    I’ve extended Koutschan’s original question to Stack Overflow; go vote for it before it’s driven into the ground. 😉

  13. Philippe Beaudoin says:

  14. Philippe Beaudoin says:

    Closed already… I guess Stack Overflow wasn’t the right forum for that. Unfortunate, though, as they have an excellent voting system. (Feel free to delete that comment and the 2 preceding.)

  15. @Beaudoin It looks like they closed your question on the following grounds:

    questions of this type (…) usually lead to confrontation and argument.

    I think that “confrontation” is a bit harsh.

  16. Leon Palafox says:

    I would say that which algorithms people deem most important depends on the person.

    And I agree, some of them are not even algorithms: solving a system of linear equations is by no means an algorithm.

    It is a cool problem.

  17. Felix Gremmer says:

    1. Matrix decomposition algorithms, e.g. LU decomposition
    2. Krylov subspace iteration methods.
    3. Monte Carlo method.
    4. Fast Multipole Methods
    5. Quicksort

    I have picked out of this list:
    http://amath.colorado.edu/resources/archive/topten.pdf

    Everything on the list is an algorithm when you try to use it…
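
    As one illustration of how little code some of those entries need, here is the Monte Carlo method in its simplest form (a sketch estimating pi; real applications use far more careful sampling):

    ```python
    import random

    def monte_carlo_pi(samples=1_000_000):
        """Estimate pi by sampling points in the unit square and
        counting how many land inside the quarter circle."""
        inside = sum(
            1 for _ in range(samples)
            if random.random() ** 2 + random.random() ** 2 <= 1.0
        )
        return 4.0 * inside / samples

    print(monte_carlo_pi())  # ~3.14, with error shrinking like 1/sqrt(n)
    ```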

  18. Gasp says:

    Hi, nice article.
    Do you have a nice, really understandable article about the FFT (fast Fourier transform)?

  19. Craig says:

    Edmonds matching algorithm

  20. peter says:

    I think the question needs to be qualified a bit. “Important” is a vague word. Important to whom, and in what context? If I am stranded on a deserted island somewhere, an algorithm for making fire is more important than binary search. However, if I have only 60 seconds to find a name in a giant phone book, in order to prevent a bomb from going off, then binary search is obviously key.

  21. John Rawlins says:

    “BEYOND SVD”

    “Advanced Eigenvalue Vector Decomposition”

    We have made a significant improvement to the Singular Value Decomposition methodology. This is actually an understatement.

    We have discovered that the current Singular Value Decomposition mathematical techniques, and the resulting FORTRAN and C code, are quite slow and inaccurate. What we have accomplished is to speed up computer execution by more than a factor of 1000 and to improve numerical accuracy by about 12 bits out of the 48 bits of the standard double mantissa. That is to say, there is no more than 1 bit of difference between the exact answer and my new solution method. Previous or current methods can barely achieve 24-bit accuracy (out of 48). This improvement will make it possible to recognize red, green, blue, black, grey and white infrared images in real time as the first application.

    The range of applications for this new technology can go well
    beyond this.

    Of course, we expect skeptics about these claims, but we have demo programs that will read your data and produce results that they can compare, and we can even execute this new code on their computers if they desire.

    How AEVD Improves SVD Performance

    AEVD Technology, LLC offers a fully developed, advanced form of the Singular Value Decomposition (SVD) algorithm that provides a generational advance in speed and accuracy. Our Advanced Eigenvalue Vector Decomposition (or AEVD) was built upon a recognition of the shortcomings in how computers manipulate numbers, data and calculations, and reflects a painstaking analysis of the incremental steps in SVD processing to provide a faster process with fewer steps and improved accuracy.

    The SVD mathematical proposition is linearly independent, in an algebraic sense, of the similarity theorem, and as such provides a variety of available solution paths. One such path is to first reduce the input array to a real bidiagonal matrix with a sequence of intermediate left and right unitary transformations. This reduced form is usually chosen to be a real diagonal with one real superdiagonal; all of the remaining matrix elements are numerically considered to be zero. It is possible to choose other reduced forms of the input matrix, but the use of a real bidiagonal array provides the most numerically accurate and computationally rapid solution.

    Additional numerical stability and computer accuracy is obtained by AEVD by choosing unitary transformations that place the larger bidiagonal elements in the upper left and the smaller elements in the lower right positions. This matters because the final determination of the left and right transformations and the SVD weights is always an iterative process for matrix sizes greater than four; even for matrix sizes of four and less, iterative methods are usually simpler and require computational steps comparable to closed-form algebraic solutions.

    Also, when a real bidiagonal array format is chosen as the final iterate, the left and right transformations employed during iteration are orthogonal matrices. Other SVD iterative solution methods employ orthogonal variations such as Jacobi rotation, Givens, and Householder reduction algorithms. Consistent among the more efficient and accurate SVD algorithms is the choice of the real

    Regards,
    John Rawlins

    678-776-1343