Daniel Lemire's blog

How fast is bit packing?

18 thoughts on “How fast is bit packing?”

  1. John Regehr says:

    I’m missing something… how is packing 32-bit integers into 17 bits a savings of 90%? It sounds closer to 50%.

  2. @John

    Well, I have that 32/17 − 1 is roughly 90%. But I grant you that it is less confusing to report the savings as 1 − 17/32 ≈ 47%, or about 50%, so I have updated my blog post accordingly.
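
    To make the arithmetic concrete, here is a tiny stand-alone check (just the two ratios, not part of the benchmark code):

        #include <stdio.h>

        int main(void) {
            /* growth factor when expanding 17-bit values back to 32 bits */
            printf("32/17 - 1 = %.1f%%\n", (32.0 / 17.0 - 1.0) * 100.0);  /* ~88.2% */
            /* space saved when shrinking 32-bit values down to 17 bits */
            printf("1 - 17/32 = %.1f%%\n", (1.0 - 17.0 / 32.0) * 100.0);  /* ~46.9% */
            return 0;
        }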

  3. Jay Stein says:

    Please see my US patent no. 5,602,550, filed in 1995, granted in 1997, which describes a complete implementation of an adaptive compression scheme utilizing bit packing, but also allowing for bit packing of the deltas between successive values. This algorithm was built for speed.
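
    The general flavor (only a rough, generic sketch, not the patented implementation) is to difference successive values and then pack each delta into the minimum number of bits:

        #include <stdint.h>
        #include <stddef.h>

        /* Generic illustration: turn a non-decreasing sequence into deltas and
           report the bit width needed to pack them densely. */
        static unsigned bits_needed(uint32_t x) {
            unsigned b = 0;
            while (x) { b++; x >>= 1; }
            return b ? b : 1;  /* store at least one bit per value */
        }

        unsigned delta_width(const uint32_t *in, size_t n, uint32_t *deltas) {
            unsigned maxbits = 1;
            uint32_t prev = 0;
            for (size_t i = 0; i < n; i++) {
                deltas[i] = in[i] - prev;  /* assumes in[] is non-decreasing */
                prev = in[i];
                unsigned b = bits_needed(deltas[i]);
                if (b > maxbits) maxbits = b;
            }
            return maxbits;  /* each delta can then be packed into this many bits */
        }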

  4. I’m missing something here. I was hoping to see a speed comparison between bit-packing and not bit-packing.

    Given an array of k-bit integers stored in 32-bit integers, how long does it take to copy that array? How long does it take to pack that array? How long does it take to unpack the packed array?

  5. @Patrick

    I’m missing something here. I was hoping to see a speed comparison between bit-packing and not bit-packing.

    You get the non-packed approach when the bit width is set to 32.

    Don’t forget that my source code is available (see link) so you can run your own tests if you want!
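
    For readers who do not want to dig through the repository, here is a stripped-down sketch of the kind of routine being timed (a simplified illustration, not the optimized code in the linked project). Note that with a bit width of 32 the packing loop degenerates into a plain copy:

        #include <stdint.h>
        #include <stddef.h>

        /* Pack n values, each using b bits (1 <= b <= 32), into out.
           Simplified: every input value is assumed to fit in b bits. */
        void pack(const uint32_t *in, size_t n, unsigned b, uint32_t *out) {
            uint64_t buffer = 0;   /* bit accumulator */
            unsigned filled = 0;   /* number of valid bits in the accumulator */
            size_t j = 0;
            for (size_t i = 0; i < n; i++) {
                buffer |= (uint64_t)in[i] << filled;
                filled += b;
                while (filled >= 32) {   /* flush full 32-bit words */
                    out[j++] = (uint32_t)buffer;
                    buffer >>= 32;
                    filled -= 32;
                }
            }
            if (filled > 0) out[j] = (uint32_t)buffer;  /* flush the remainder */
        }

        /* Inverse operation: read back n values of b bits each. */
        void unpack(const uint32_t *in, size_t n, unsigned b, uint32_t *out) {
            uint64_t buffer = 0;
            unsigned filled = 0;
            size_t j = 0;
            uint64_t mask = (b == 32) ? 0xFFFFFFFFu : ((1u << b) - 1);
            for (size_t i = 0; i < n; i++) {
                if (filled < b) {   /* refill the accumulator from the input */
                    buffer |= (uint64_t)in[j++] << filled;
                    filled += 32;
                }
                out[i] = (uint32_t)(buffer & mask);
                buffer >>= b;
                filled -= b;
            }
        }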

  6. Indeed. You even mention that in a part I skimmed through before. Thank you.

  7. Marsh Ray says:

    It would be relevant to know how many numbers are in the data set being packed or unpacked, and compare that to no packing at all. Cache effects are likely to dominate above various sizes.

    @Jay Stein – The only proper response to that is: (rude language censored by D. Lemire) go crawl back under the rock you came from, software patenter.

  8. zav says:

    The first word of your article is spelled wrong.

    That’s when I stop reading.

  9. David says:

    What a shame, zav. Most compilers are sophisticated enough to continue parsing even in the presence of syntax errors.

    P.S. I think you meant “stopped,” not “stop.”

  10. @zav

    I fixed the typo. Thanks for reporting it.

  11. zav says:

    Thanks Dan. I’m sure I’ll love your article. Will check it out later on today.

    Cheers.

  12. Jay Stein says:

    @Marsh Ray – My compression algorithm was patented by the company where I was employed at the time. I did not think it was worth wasting anyone’s time explaining that detail. The patent application is a publicly available explanation of the algorithm, which is relevant to the current discussion, unlike your trolling.

  13. zav says:

    Jay, I would love to check out your patent. I’ve been fascinated with the potential for this since 1995, while investigating systems and methods for storing quantized delta frames in video streams. None of my patent applications are as fundamental.

    David, this is nice. Wish I had time to play with this at the moment. Thanks for the source and the correction. Cheers.

  14. Michele Filannino says:

    Hi Daniel,

    this is my graph:
    http://dl.dropbox.com/u/265383/bit_packing.png

    It seems to be the opposite of the one shown in the post. What do you think?

    Bye,
    michele.

  15. @michele

    Interesting. Can you give me some details, like processor type, compiler and so on?

  16. Itman says:

    Michele,

    It is not quite the opposite. The trend is the same:
    1) There is very little difference between unpacked and packed reads.
    2) Some packed reads are slightly more efficient than unpacked ones.

  17. @itman @michele

    If you look closely at my code, you’ll notice that I use a lot of loops that can and should probably be unrolled. I actually leave them rolled when it makes sense so that the compiler has more options (compilers don’t typically “roll back” loops that were manually unrolled).
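
    As an illustration of what I mean (a made-up toy example, not code from my repository), the first form leaves the compiler free to pick its own unrolling or vectorization, while the second commits to a factor-of-two unrolling that the compiler is unlikely to undo:

        #include <stdint.h>

        /* Rolled: the compiler may unroll or vectorize this however it likes. */
        void unpack16_rolled(const uint32_t *in, uint32_t *out) {
            for (int i = 0; i < 32; i++)
                out[i] = (in[i / 2] >> (16 * (i % 2))) & 0xFFFF;
        }

        /* Manually unrolled by two: the choice is now baked into the source. */
        void unpack16_unrolled(const uint32_t *in, uint32_t *out) {
            for (int i = 0; i < 32; i += 2) {
                out[i]     = in[i / 2] & 0xFFFF;
                out[i + 1] = (in[i / 2] >> 16) & 0xFFFF;
            }
        }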

    Anyhow. I adjusted the code until it looked like I got optimal results with GCC 4.6 and my particular hardware. Because Michele is using GCC 4.2, I am not surprised that the results differ.

    However, even with GCC 4.2, it might be possible to tweak the results with the proper optimization flags.

    As you say @itman, the results are not really all that different. But it is nice to see independent tests.

  18. Frederico Schardong says:

    Very nice post!

    I’m implementing the same idea, in C and in fewer lines. The code is here: pastebin.com/SfEkqKnv

    Please take a look, and if you want to help me, I would appreciate it. 🙂

    I’m having errors. For example, packing into a 32-bit int variable works fine for numbers less than 17, but when a number is greater than 16 it doesn’t work well. I don’t know what the problem is; I would appreciate any help.

    frede dot sch at gmail dot com