Daniel,
I think the issue is essentially relative scaling.
If you can solve an exponential-time problem, e.g. O(2^n), on inputs of n bits, adding a single bit doubles the required time, and adding 10 bits multiplies it by roughly 1000 (2^10 = 1024). So adding just a few bits makes it intractable quite fast.
In linear time, adding a single bit only _adds_ a constant amount of time. So if you can do it with n bits, you probably can do it with n+1 or n+10 bits.
So of course your example holds: there are some polynomial-time algorithms that won’t run for large values of n. But as computing power and capacity increase, the situation is a lot more favourable for those. If you double your computing power, you _double_ the size of the problems you can solve in linear time; you gain about 19% with O(n^4) (since 2^(1/4) ≈ 1.19), and about 0.6% with O(n^120). With O(2^n), you gain 1 bit…
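Concretely, here is a minimal Python sketch (my own illustration, not part of the original comment; the helper name is just an assumption) that reproduces those percentages:

```python
# Minimal sketch: if running time grows like n**k and the computing
# budget doubles, the largest solvable n grows by a factor of 2**(1/k).
# For O(2**n), a doubled budget buys exactly one extra bit of input.

def size_gain(k: float, budget_factor: float = 2.0) -> float:
    """Relative growth in solvable problem size for an O(n^k) algorithm."""
    return budget_factor ** (1.0 / k)

for k in (1, 4, 120):
    print(f"O(n^{k}): solvable n grows by {100 * (size_gain(k) - 1):.1f}%")

# O(2^n): 2 * 2**n == 2**(n + 1), i.e. exactly one more bit of input.
print("O(2^n): solvable n grows by 1 bit")
```

This prints roughly 100% for O(n), 18.9% for O(n^4), and 0.6% for O(n^120), matching the figures above.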
Hope this makes sense.
Andre Vellino says:
Well, to use your analogy – do advances in General Relativity have any consequences for mining minerals on the moon? Maybe not – as far as we can tell right now. The point is – we don’t actually know.
Suppose your N^4 algorithm could be parallelized and we harnessed 30,000 teraflop computers to solve your problem; then your formerly impractical algorithm becomes feasible.
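A rough, hypothetical back-of-envelope under assumed figures (perfect parallelization, about N^4 floating-point operations, 1 teraflop per machine; none of these numbers are from the comment itself):

```python
# Hypothetical back-of-envelope, assuming the N^4 algorithm needs about
# N**4 floating-point operations and parallelizes perfectly across
# 30,000 machines sustaining 1 teraflop (1e12 flop/s) each.

MACHINES = 30_000
FLOPS_PER_MACHINE = 1e12                    # assumed: 1 teraflop each
TOTAL_FLOPS = MACHINES * FLOPS_PER_MACHINE  # 3e16 flop/s combined

for n in (10_000, 100_000, 1_000_000):
    seconds = float(n) ** 4 / TOTAL_FLOPS
    print(f"N = {n:>9,}: about {seconds:,.2f} s ({seconds / 86_400:,.2f} days)")
```

Under those assumptions, N around 10^5 finishes in under an hour, while N around 10^6 takes on the order of a year, which is roughly where "feasible" ends.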
I think that’s the point of knowing whether P=NP. It is true that there are some exponential-time algorithms (e.g. Simplex for solving LP problems) which are nevertheless practical, and some polynomial-time algorithms (e.g. Karmarkar’s algorithm for the same LP problems) which are *impractical* because of the size of the constants in the exponent.
One valuable thing about knowing that LP is in the class P is that it at least offers the hope that there exists a feasible solution for large N, whereas not knowing whether a problem is in P (and not knowing whether P=NP) sustains the hope that there is *no* feasible solution.
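To make the Simplex vs. interior-point contrast concrete, here is a small SciPy sketch (an illustrative example, not from the comment; the specific LP is made up and SciPy ≥ 1.6 is assumed for the `highs-*` methods) solving the same LP with a simplex-type method and a polynomial-time interior-point method:

```python
# Illustrative sketch: one small LP solved with a dual-simplex method
# (worst-case exponential) and an interior-point method (polynomial
# time). Assumes scipy >= 1.6 for the 'highs-*' solver names.
import numpy as np
from scipy.optimize import linprog

# Minimize -x0 - 2*x1  subject to  x0 + x1 <= 4,  x0 + 3*x1 <= 6,  x >= 0.
c = np.array([-1.0, -2.0])
A_ub = np.array([[1.0, 1.0], [1.0, 3.0]])
b_ub = np.array([4.0, 6.0])

for method in ("highs-ds", "highs-ipm"):  # dual simplex vs. interior point
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None)], method=method)
    print(f"{method}: x = {res.x}, objective = {res.fun:.2f}")
```

Both methods should report the same optimum, x ≈ (3, 1) with objective −5; the practical choice between them has little to do with their worst-case complexity.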