Daniel Lemire's blog


The fundamental properties of computing

14 thoughts on “The fundamental properties of computing”

  1. @Geoff Absolutely. And it might be hard to define latency as well. But the mere fact that these quantities are hard to define should not prevent us from thinking of them as fundamental concepts (if not properties).

  2. @Geoff

    Your question is entirely valid. However, the concept of the “running time” of an algorithm is also somewhat fuzzy. It has become the subject of an entire subfield of Computer Science called Complexity Theory.

    I think that it always comes down to (somewhat arbitrary) models and representations. We know that these models are useful because we implement algorithms in software and observe that the complexity of an algorithm is, indeed, closely related to the performance of the software. However, the match is not perfect.

    Maybe I exaggerate when I try to compare “running time” or “latency” with mass or momentum. I do not know yet.

  3. If correctness is a fundamental property then you should probably define it. I suspect this is harder than it sounds. Does it mean it passes a set of tests? Does it have a specific mathematical relation that it must satisfy? Do the results make the customer happy? These criteria are not synonymous, nor are they mutually exclusive.

    Clearly there are degrees of correctness, but it is a more relativistic measure than resource usage.

  4. @Daniel Certainly not and I didn’t mean to suggest otherwise.

    I do wonder, however, whether a property that turns out to be very difficult to define should even be considered a fundamental property. Or perhaps it means that the thing the property pertains to (in this case, algorithms) is ill-defined.

    Interesting questions to ponder.

  5. @Guy Excellent point. Yes, definitely.

  6. Chris Gray says:

    I have to say – the three measures you’ve identified sound a great deal like the traditional engineering triangle: good, fast, and/or cheap. You can make it as good as you want (correctness), but only at the expense of either speed or cost (memory/cycles/etc.).

  7. Michael Mitzenmacher says:

    @Geoff: A standard way to give “correctness” a mathematically pure definition is via an approximation ratio: I can guarantee an answer within a factor of 2 of the optimal. In the case of randomized algorithms, another standard approach is a probabilistic statement: I get the correct answer with high probability (or with probability at least x, for whatever x is convenient).

    So, I disagree with your assertion that correctness needs to be a “more relativistic measure” than running time. There are standard ways of examining it in the theoretical literature; one can certainly imagine many variations (and people do), but this is also true for measures like running time.

    I’ve found that students are initially startled by the idea that correctness can be just another performance measure, like memory usage and computation time. It’s a challenging and important philosophical idea, well worth introducing in this form to undergraduates.
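
    A minimal sketch of the first kind of guarantee, in Python (my example, not from the comment): the classic maximal-matching heuristic for minimum vertex cover, which provably returns a cover at most twice the size of an optimal one:

        # Sketch: a 2-approximation for minimum vertex cover.
        def vertex_cover_2approx(edges):
            cover = set()
            for u, v in edges:
                if u not in cover and v not in cover:
                    # Uncovered edge: take both endpoints. Any optimal
                    # cover must contain at least one of them, so we add
                    # at most twice as many vertices as the optimum.
                    cover.add(u)
                    cover.add(v)
            return cover

        # On the path 0-1-2-3, the optimal cover is {1, 2} (size 2);
        # the heuristic returns all four vertices here, so the
        # factor-2 bound is tight.
        print(vertex_cover_2approx([(0, 1), (1, 2), (2, 3)]))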

  8. Alan Fekete says:

    Peter Denning runs a “Great Principles of computing” project. http://cs.gmu.edu/cne/pjd/GP/GP-site/welcome.html

  9. Guy T. says:

    If you want to deal with parallel algorithms, you might also consider work and cost in addition to time, where cost = time * nb processors and work = the total number of operations performed over all processors.

    In the sequential world, the relationship between these three is straightforward (since nb processors = 1):
    time = work = cost

    However, this is not so in the parallel world, where you only have:
    time <= work <= cost

    And you also have a possible tradeoff: using fewer processors with a different algorithm may reduce the cost but may increase the time. Or not, in which case the parallel algorithm might be considered “optimal” relative to the sequential version (if it has the same cost as the sequential algorithm with an improved time).
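
    A small sketch of the three measures for a parallel sum (the local-sum-then-log2(p)-combine schedule below is an assumed model, not something from the comment):

        import math

        # Summing n numbers on p processors: each processor adds up
        # n/p values locally, then the p partial sums are combined
        # in ceil(log2(p)) rounds.
        def measures(n, p):
            time = (n // p - 1) + math.ceil(math.log2(p))  # parallel steps
            work = n - 1                                   # total additions
            cost = p * time                                # processors * time
            return time, work, cost

        print(measures(1024, 1))   # (1023, 1023, 1023): time = work = cost
        print(measures(1024, 32))  # (36, 1023, 1152):   time <= work <= cost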

  10. jeremy says:

    For these fundamental properties, isn’t it important to draw a distinction between “computing” and “human computing”?

    Speed vs. memory — that seems like a computer-only fundamental property.

    But page rendering / latency seems more like a human perception issue. Same with correctness, to a certain extent.

  11. @Zeno

    There is a relation between latency and running time, but they are not the same thing.

    Consider the rendering of HTML pages. Some browsers begin the rendering faster than others, at the cost of a larger overall running time.
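
    A toy model of this tradeoff (all numbers made up): a streaming renderer that re-runs layout for every fragment has a larger total running time, but a far lower latency to the first visible fragment:

        # Rendering k fragments; time units are arbitrary.
        def batch(k, parse=1.0, layout=5.0):
            total = k * parse + layout   # parse everything, lay out once
            return total, total          # (latency, total): nothing shows early

        def streaming(k, parse=1.0, relayout=0.2):
            first = parse + relayout     # first fragment appears quickly
            total = k * (parse + relayout)
            return first, total

        print(batch(100))      # (105.0, 105.0)
        print(streaming(100))  # (1.2, 120.0): lower latency, longer run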

  12. Zeno Gantner says:

    Interesting post & discussion.

    However, I don’t think that latency is as fundamental as running time, memory usage, and correctness.

    If latency is the delay between the input and the output, then it is also something that we should file under “running time”.

  13. Paul says:

    I think correctness is most interesting when considered as a dimension orthogonal to the other properties. Then latency is d(correctness) / d(running time), that is, the rate at which correctness changes as a function of time. For a traditional algorithm this would be undefined (you jump from 0 to 1 instantaneously), but for a loading webpage it might be linear. Similarly, approximate algorithms often give a guaranteed correctness relative to the amount of memory available. At some quantity of memory and time the algorithm might be exact, but it can degrade gracefully as memory declines.

    Thanks for prompting this question; it’s interesting to think about modelling algorithm performance as a multi-dimensional shape.
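
    One concrete instance (my example): a Monte Carlo estimate of pi, whose correctness improves gradually with the number of samples rather than jumping from 0 to 1:

        import random

        def pi_estimate(samples):
            # The fraction of random points in the unit square that
            # fall inside the quarter circle approaches pi/4.
            hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
                       for _ in range(samples))
            return 4.0 * hits / samples

        for n in (100, 10_000, 1_000_000):
            # The error shrinks roughly as 1/sqrt(n), i.e. correctness
            # is a smooth function of the time we spend.
            print(n, pi_estimate(n))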

  14. Shouldn’t there be a distinction between “algorithm level properties” and “algorithm implementation level properties”? Those are hardly the same.