Effective compression using frame-of-reference and delta coding
12 thoughts on “Effective compression using frame-of-reference and delta coding”
I would also recommend a link to http://www.springerlink.com/content/j66851228120170t/
Delta and Golomb codes are kinda slow to be used in a high-throughput search engine (though they definitely make sense in other apps). Byte and word-aligned codes work much faster in this case.
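For readers unfamiliar with byte-aligned codes, here is a minimal C sketch of classic variable-byte (VByte) coding: 7 payload bits per byte, with the high bit marking the last byte of a value. The flag convention varies between implementations, and the function names are purely illustrative.

```c
#include <stdint.h>
#include <stddef.h>

/* Variable-byte (VByte) coding: 7 data bits per byte,
 * high bit set on the last byte of each value. */
static size_t vbyte_encode(uint32_t v, uint8_t *out) {
    size_t n = 0;
    while (v >= 128) {
        out[n++] = (uint8_t)(v & 127);   /* continuation byte */
        v >>= 7;
    }
    out[n++] = (uint8_t)(v | 128);       /* terminating byte */
    return n;
}

static size_t vbyte_decode(const uint8_t *in, uint32_t *v) {
    uint32_t result = 0;
    int shift = 0;
    size_t n = 0;
    for (;;) {
        uint8_t b = in[n++];
        result |= (uint32_t)(b & 127) << shift;
        if (b & 128) break;              /* last byte of this value */
        shift += 7;
    }
    *v = result;
    return n;
}
```

Because every value starts and ends on a byte boundary, decoding needs no bit-level arithmetic, which is part of why such codes tend to be fast even when they compress less well than bit-aligned schemes.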
@Itman
Golomb-Rice is definitely a bit slow for high-throughput applications. I am not sure that this is true for Yan et al.’s NewPFD. According to Delbru et al., it is just as fast as, or even faster than, word-aligned codes (S9-64):
R. Delbru, S. Campinas, K. Samp, G. Tummarello, Adaptive frame of reference for compressing inverted lists, DERI Technical Report 2010-12-16, 2010.
Delbru also posted a comparison on the Lucene mailing list.
Cool, I am certainly going to check it out. With respect to the Golomb and Delta codes, by slow I mean really slow. Even the not-so-efficient VBC is twice as fast as Golomb.
@Itman
I agree regarding Golomb-Rice coding, and I believe that’s because it introduces a lot of branching. But the Delbru reference hints that delta coding can be quite fast when properly implemented. Moreover, szip, which uses delta coding, can be quite a bit faster than gzip during compression.
Why do you think that delta coding is necessarily slow? For example, if I notice that within blocks successive xors are small integers, and I pack them, this ought to be very fast (no branching, and only very fast operations). In fact, I am pretty sure I can implement it using no more than 1 CPU cycle per value on average with pipelining, and maybe less. Of course, it may compress poorly, but that’s another story.
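For what it’s worth, here is a minimal C sketch of the xor-and-pack idea described above: xor each value with its predecessor within a block and pack the results at a fixed bit width. The block size (32 integers), the bit width parameter, and the function name are assumptions for illustration, not code from the post.

```c
#include <stdint.h>

/* Hypothetical sketch: xor each value in a 32-integer block with its
 * predecessor and pack the xors at a fixed bit width b.  Assumes b was
 * chosen so that every xor in the block fits in b bits (e.g. from the
 * leading-zero count of the OR of all the xors).  There is no branching
 * on the data values themselves. */
static void xor_pack_block(const uint32_t *in, uint32_t prev, int b, uint64_t *out)
{
    uint64_t buf = 0;   /* bit accumulator */
    int filled = 0;     /* number of bits currently in the accumulator */
    for (int i = 0; i < 32; i++) {
        uint32_t x = in[i] ^ prev;   /* xor "difference" with the previous value */
        prev = in[i];
        buf |= (uint64_t)x << filled;
        filled += b;
        if (filled >= 64) {          /* the accumulator is full: flush it */
            *out++ = buf;
            filled -= 64;
            buf = (uint64_t)x >> (b - filled);  /* bits that did not fit */
        }
    }
    if (filled > 0)
        *out++ = buf;                /* flush the remaining bits */
}
```

Decoding would be symmetric: read b bits at a time and xor back against the running previous value.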
From my experience: I just re-ran a test on my laptop. Of course, I cannot guarantee that this holds true for all implementations and platforms; new architectures have amazing surprises, like:
1) Unrolling loops does not help
2) Aligned reading is as good as unaligned
Anyway, for my implementations the difference between VBC and Delta is not even two-fold, it is 10+-fold. Perhaps I should re-examine my code some day.
@Itman
Would you share your code with me? I’d like to examine it. (I don’t need a license to it, I just would like to see exactly what you are comparing.)
Sure, no problem. I will do it next week, because I want to do a quick check myself.
Just ran into this page after googling “delta compression xor”.
It seems to me the hoops to jump through in the unsorted case for delta encoding (negative deltas) go away if you encode your deltas using base -2. Then, all you need is for the absolute value of the differences to be small. There are also tricks to compute with the base -2 encoded numbers directly.
Can you elaborate?
What I mean is related to the problem of coding up the -1.
You offered three solutions: xor rather than delta, which is good except in cases where the Hamming distance is large (even though the delta is small); a second solution that I didn’t fully follow; and one where “we add a pointer to the second last difference to indicate that we are missing 5 bits (11111)”.
Another approach is the following: use base -2 to encode the deltas. Hacker’s Delight explains base -2 better than I can, but roughly: a number b_k, …, b1, b0 corresponds to b0 - 2(b1) + 4(b2) - 8(b3) + … + (-2)^k (b_k). In practice, this means 0 maps to ‘0’, 1 maps to ‘1’, -1 maps to ‘11’, 2 maps to ‘110’, -2 maps to ‘10’, and 3 maps to ‘111’. The property I care about in this case is that numbers with small absolute values have a small number of non-zero bits. A positive number may be penalized in this case since it requires slightly more bits.
I haven’t done a full analysis of when this will be a win over the other methods (how many bits does it actually take for your sequence, how much does encoding/decoding cost), but there are tricks to add the numbers without fully decoding them. It was just exciting because it looks like a possible application of this encoding 😉
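For illustration, here is a small C sketch of the base -2 (negabinary) conversion described in the previous comment, using the mask trick from Hacker’s Delight; the function names are made up for this example.

```c
#include <stdint.h>
#include <stdio.h>

/* Base -2 (negabinary) conversion via the classic mask trick
 * (Hacker's Delight).  Small |x| yields few set bits. */
static uint32_t to_negabinary(int32_t x) {
    const uint32_t mask = 0xAAAAAAAAu;   /* bits at odd positions */
    return ((uint32_t)x + mask) ^ mask;
}

static int32_t from_negabinary(uint32_t n) {
    const uint32_t mask = 0xAAAAAAAAu;
    return (int32_t)((n ^ mask) - mask);
}

int main(void) {
    /* -2 -> 10, -1 -> 11, 2 -> 110, 3 -> 111, matching the comment above */
    for (int32_t d = -2; d <= 3; d++)
        printf("%d -> %x -> %d\n", d, to_negabinary(d),
               from_negabinary(to_negabinary(d)));
    return 0;
}
```

A small delta then produces a code word with few non-zero bits, which can be fed to any scheme that rewards small magnitudes, much like zig-zag encoding.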
I have updated the blog post with zig-zag encoding. I think it is closely related to your concept of base -2.
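For reference, zig-zag encoding maps small signed deltas to small unsigned values (0, -1, 1, -2, 2, … become 0, 1, 2, 3, 4, …), which is the same goal as the base -2 idea. A minimal C sketch (function names are illustrative, not from the post):

```c
#include <stdint.h>

/* Zig-zag mapping: 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, ...
 * Small |deltas| become small unsigned values that pack into few bits.
 * The x >> 31 assumes arithmetic right shift on signed integers,
 * which virtually all common compilers provide. */
static uint32_t zigzag_encode(int32_t x) {
    return ((uint32_t)x << 1) ^ (uint32_t)(x >> 31);
}

static int32_t zigzag_decode(uint32_t z) {
    return (int32_t)(z >> 1) ^ -(int32_t)(z & 1);
}
```

The same mapping is used, for example, in Google’s Protocol Buffers for signed varints.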
How does delta compression work on data received from a sensor, which is a continuous stream of 16-bit values? Standard delta encoding doesn’t work, because I have random values.