Daniel Lemire's blog


On feeding your CPU with data

8 thoughts on “On feeding your CPU with data”

  1. I’m not sure whether it is true, but mainframes are often credited with having much bigger pipes (for moving data around).

    Way back, I was working in C and a friend was working in APL on a huge mainframe. We’d write the same CPU-intensive algorithms, then compare performance. I never kept track of the numbers, but it was clear that a time slice on my friend’s machine was nearly equivalent in speed to my workstation (which was state-of-the-art at the time) for smaller jobs, but for big bulk jobs his hardware was often stunningly faster.

    Somewhere around the beginning of the OO age, everything became optimized for one-offs rather than for bulk processing. Usually when I’m optimizing code, the first thing I try is to deal with the data in bulk (followed by memoization) …


  2. wn says:

    Are you sure it is an issue of where the data resides and not a scheduling one?

  3. @wn

    What do you mean by scheduling? The tests run very fast and I pick the best out of several runs.

  4. KWillets says:

    RAM is another form of secondary storage, like disk used to be. Cache is now what RAM was conceived to be: a flat memory space with constant access time.

  5. wn says:

    How long do they run, on which OS, and at what priority? If the test process can be preempted by the OS, which is more likely to happen on longer runs (as with the large data arrays), then you might be measuring the context switches between processes without meaning to, and would probably want to eliminate that…

  6. @wn

    I prefix them with “nice -n -19” on a Linux box. Moreover, they take only a few seconds to run and involve no I/O.

  7. Mike Stiber says:

    And things are even more complicated if you’re programming a GPU, which has a much more complicated memory architecture. Or, potentially, if you’re doing multithreaded coding on a multicore machine.

    Time to relearn computational complexity.

  8. Itman says:


    In regard to GPUs: what is the current transfer rate between main memory and GPU memory?