My favourites in the original list and in your subset are hashing and SVD. But both of these are perhaps better described as data structures or representations, rather than algorithms. Linked lists (for sparse matrices, trees, graphs, etc.) are another very useful invention of this type. This is a rather subjective question, but perhaps data structures (representations) are more important (interesting, fundamental) than algorithms? Of course, a data structure isn’t much use without an algorithm, and vice versa.
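Either way, getting at that representation is cheap these days; a minimal sketch with NumPy (the matrix here is purely illustrative):

```python
import numpy as np

# Any real matrix factors as A = U * diag(s) * Vt.
A = np.array([[3.0, 1.0], [1.0, 3.0], [0.0, 2.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Reconstructing A from the factors recovers the original matrix.
A_rec = (U * s) @ Vt
print(np.allclose(A, A_rec))  # True
```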
Another example: floating point numbers. We take them for granted, but there is a lot of engineering behind IEEE 754.
Some of his algorithms are not specific algorithms but classes, sometimes enormous classes, of algorithms. For example, methods for solving linear systems of equations.
The simplex method is important historically, but I assume it is on the list as a placeholder for the set of convex optimization methods that have replaced it in practice.
I’d definitely put methods for solving linear systems in the top five. These algorithms are important in their own right, but they’re also part of other algorithms, such as convex optimization.
Would you consider Expectation Maximization and Metropolis-Hastings to be algorithms? (Even though they weren’t invented within computer science?)
I agree with your list. Although I want to point out my most recent “aha!” moment: the Burrows-Wheeler transform. http://en.wikipedia.org/wiki/Burrows%E2%80%93Wheeler_transform
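For anyone who hasn’t seen it, a naive sketch makes the trick visible: sort all rotations of the string and keep the last column (the "$" sentinel below is the usual end marker; a real implementation would use suffix arrays instead of sorting rotations):

```python
def bwt(s, eof="$"):
    """Naive Burrows-Wheeler transform: sort all rotations, keep the last column."""
    s = s + eof  # unique end marker so the transform is invertible
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

print(bwt("banana"))  # groups identical characters together: "annb$aa"
```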
Like you, I didn’t learn SVD until grad school. The best source I found for learning it on one’s own is: http://www.uwlax.edu/faculty/will/svd/index.html
As far as such good learn-it-by-yourself docs go, thumbs up also to Jonathan Richard Shewchuk for his introduction to the conjugate gradient method:
http://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf
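For anyone who wants to poke at it after reading Shewchuk, the core loop is short; a rough sketch for symmetric positive-definite systems, assuming NumPy (the test matrix is made up):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Solve A x = b for symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(len(b)):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))  # close to np.linalg.solve(A, b)
```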
My choices are:
1. Quicksort (though I find it ugly and difficult to explain to someone 🙁 )
2. Dijkstra’s algorithm (Shortest path)
3. Euclidean algorithm for GCD (see the sketch after this list)
4. Strassen’s algorithm for Matrix multiplication
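Number 3 is arguably the oldest algorithm still in everyday use, and it fits in a few lines:

```python
def gcd(a, b):
    """Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21
```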
Good list, but I think the RSA encryption algorithm and the Diffie-Hellman key exchange need a mention 🙂
I would add two more to my list: data compression and the Viterbi algorithm 🙂
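The Diffie-Hellman exchange in particular is small enough to demo with toy numbers (the prime and private keys below are purely illustrative; real use needs large, carefully chosen parameters):

```python
# Toy Diffie-Hellman key exchange with made-up parameters.
p, g = 23, 5           # public prime and generator
a, b = 6, 15           # Alice's and Bob's private keys

A = pow(g, a, p)       # Alice sends g^a mod p
B = pow(g, b, p)       # Bob sends g^b mod p

# Both sides derive the same shared secret without ever transmitting it.
assert pow(B, a, p) == pow(A, b, p)
print(pow(B, a, p))    # 2
```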
Great list.
I would add Quicksort, Public-Key Cryptography and LZ Compression algorithms to the list.
Simple gradient descent maybe? After all, it is the basis of many first- and second-order optimization methods.
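For reference, the whole idea is a few lines; a toy sketch minimizing a simple quadratic (the function, step size and iteration count are chosen only for illustration):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
print(gradient_descent(lambda x: 2 * (x - 3), x0=0.0))  # close to 3.0
```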
I would rate the following algorithm families as more important than all of the algorithms on Koutschan’s list:
– linear search: I think this is the single most *important* algorithm
– numerical addition, subtraction, multiplication, division
– random number generating algorithms
– stack-based algorithms, such as the use of a call-stack in (almost every) modern programming language
One other note: I’ve always been struck by the paucity of real uses of binary search, especially as compared to things like linear search, stacks, or hashing. The fact that it only works on perfectly sorted data makes it nearly useless in practice — too beautiful for this ugly world, as the poet might say. It’s mainly a nice introduction to the idea of divide-and-conquer.
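Here is the whole algorithm for reference, as a minimal sketch; note that every comparison relies on the input already being sorted:

```python
def binary_search(sorted_items, target):
    """Return an index of target in sorted_items, or -1 if absent.
    Only correct if sorted_items really is sorted."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 7))  # 3
```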
A couple of quick remarks…
1) The simplex algorithm is still widely used. Despite their better worst-case complexity bounds, interior point methods perform better only on some classes of problems. AFAIK it is still an open problem to find a heuristic for when to use one rather than the other. (A tiny LP example follows below.)
2) On the contrary, Strassen’s algorithm is of theoretical interest, but AFAIK mostly unused in practice due to its numerical instability.
Oh, and another interesting view on the subject: here http://view.eecs.berkeley.edu/wiki/Dwarfs is a list of 13 computational kernels that are believed to be the “most important algorithms” to implement efficiently on the computers of tomorrow.
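On 1): for anyone who wants to experiment, a reasonably recent SciPy exposes both families behind one call via the HiGHS solvers, which pick between dual simplex and interior point automatically (the problem data below is made up):

```python
from scipy.optimize import linprog

# Maximize x + 2y subject to x + y <= 4, x <= 3, x >= 0, y >= 0.
# linprog minimizes, so the objective is negated.
c = [-1.0, -2.0]
A_ub = [[1.0, 1.0], [1.0, 0.0]]
b_ub = [4.0, 3.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, -res.fun)  # optimum at x=0, y=4 with objective value 8
```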
I’ve extended Koutschan’s original question to Stack Overflow; go vote for it before it’s driven into the ground. 😉
Forgot the link:
http://stackoverflow.com/questions/3188614/what-are-the-most-important-algorithms
Closed already… I guess Stack Overflow wasn’t the right forum for that. Unfortunate, though, as they have an excellent voting system. (Feel free to delete that comment and the 2 preceding.)
@Beaudoin It looks like they closed your question on the following grounds:
questions of this type (…) usually lead to confrontation and argument.
I think “confrontation” is a bit harsh.
I would say that which algorithms people deem most important depends on who you ask.
And I agree, some of them are not even algorithms: solving a system of linear equations is by no means an algorithm.
It is a cool problem.
1. Matrix decomposition algorithms, e.g. LU decomposition (see the sketch after this list)
2. Krylov subspace iteration methods.
3. Monte Carlo method.
4. Fast Multipole Methods
5. Quicksort
I have picked out of this list:
http://amath.colorado.edu/resources/archive/topten.pdf
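On 1. above, the core idea of LU decomposition is short; a sketch without pivoting, so it assumes a matrix that never needs row swaps (a production routine would pivot):

```python
import numpy as np

def lu_decompose(A):
    """Doolittle LU factorization without pivoting: A = L @ U."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]   # elimination multiplier
            U[i, k:] -= L[i, k] * U[k, k:]  # zero out below the pivot
    return L, U

A = np.array([[4.0, 3.0], [6.0, 3.0]])
L, U = lu_decompose(A)
print(np.allclose(L @ U, A))  # True
```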
Everything on the list is an algo when you try to use it…
Hi, nice article.
Do you have a nice, really understandable article about the FFT (Fourier transform)?
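In the meantime, the core of the radix-2 Cooley-Tukey FFT is surprisingly small; a rough sketch for power-of-two input lengths:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

print(fft([1, 1, 1, 1, 0, 0, 0, 0]))  # matches numpy.fft.fft on the same input
```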
Edmonds matching algorithm
I think the question needs to be qualified a bit. Important is a vague word. Important to whom, and in what context? If I am stranded on a deserted island somewhere, an algorithm for making fire is more important than binary search. However, if I have only 60 seconds to find a name in a giant phone book in order to prevent a bomb from going off, then binary search is obviously key.
“BEYOND SVD”
“Advanced Eigenvalue Vector Decomposition”
We have made a significant improvement to the Singular Value Decomposition methodology; this is actually an understatement.
We have discovered that the current Singular Value Decomposition mathematical techniques, and the resulting FORTRAN and C code, are quite slow and inaccurate. What we have accomplished is to speed up computer execution by more than a factor of 1000 and to improve numerical accuracy by about 12 bits out of the 48 bits of a standard double mantissa. That is to say, there is no more than 1 bit of difference between the exact answer and our new solution method; previous and current methods can barely achieve 24-bit accuracy (out of 48). This improvement will make it possible, as a first application, to recognize red, green, blue, black, grey and white infrared images in real time.
The range of applications for this new technology can go well beyond this.
Of course, we expect skeptics about these claims, but we have demo programs that will read your data and produce results that they can compare, and we can even execute this new code on their computers if they desire.
How AEVD Improves SVD Performance
AEVD Technology, LLC offers a fully developed, advanced form of the Singular Value Decomposition (SVD) algorithm, which provides a generational advance in speed and accuracy. Our Advanced Eigenvalue Vector Decomposition (AEVD) was built upon a recognition of the shortcomings in how computers manipulate numbers, data and calculations, and reflects a painstaking analysis of the incremental steps in SVD processing to provide a faster process with fewer steps and improved accuracy.
The SVD mathematical proposition is linearly independent, in an algebraic sense, of the similarity theorem and as such provides a variety of available solution paths. One such path is to first reduce the input array to a real bidiagonal matrix with a sequence of intermediate left and right unitary transformations. This reduced form is usually chosen to have a real diagonal and one real superdiagonal, with all of the remaining matrix elements numerically considered zero. It is possible to choose other reduced forms of the input matrix, but the use of a real bidiagonal array provides the most numerically accurate and computationally rapid solution. Additional numerical stability and computer accuracy is obtained by AEVD by choosing unitary transformations that place the larger bidiagonal elements in the upper left and the smaller elements in the lower right positions. This matters because the final determination of the left and right transformations and the SVD weights is always an iterative process for matrix sizes greater than four; even for matrix sizes of four and less, iterative methods are usually simpler and require computational steps comparable to closed-form algebraic solutions. Also, when a real bidiagonal array format is chosen as the final iterate, the left and right transformations employed during iteration are orthogonal matrices. Other available SVD iterative solution methods employ orthogonal variations such as the Jacobi rotation, Givens, and Householder reduction algorithms. Consistent among the more efficient and accurate SVD algorithms is the choice of the real
Regards,
John Rawlins
678-776-1343