I’m missing something… are you saying that x % N and (x * N) div 2^32 are equivalent? This is trivially false for, e.g., 1 % 7 vs (1 * 7) div 2^32.
I am not saying that x % N is equal to (x * N) div 2^32 . This is obviously false, as you indicate. I am saying that they are both fair maps from the set of all 32-bit integers down to integers in [0,N).
It is not, unless you have numbers uniformly distributed from 0 to numeric_limits<uint32_t>::max().
If you have, say, 31-bit integers, instead of 32-bit integers, as might happen with a call to rand, you can adapt the map by shifting by 31 instead of 32.
PS: In particular, if you use it for hashing and your numbers tend to be small, then all your hash values will be horribly biased down. Perhaps reducing div 2^32 to div 2^(some smaller power of two) may help.
A good hash function should be regular. That is, it should be so that all integers are “equally likely” (given a random input). If your hash values do not cover the whole 32-bit range, you need to adapt the map, probably as you describe.
What is the cost of computing a good hash value? 🙂 Perhaps, it should be included in the overall computation time. This may (drastically?) reduce the difference between two approaches.
I also suspect that in many cases, given a list of IDs you can get reasonable results by doing just ID % N. In this case, you can get away without applying a hashing transformation. With your trick, you do need one.
For example, if your numbers are smaller than 2^20 (which is quite likely) and N < 2^16, then it looks like all your hash values will be zero.
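The arithmetic behind this observation is easy to check directly. Below is a small sketch (the `reduce` name is just an illustration of the multiply-shift map under discussion): inputs below 2^16 with N below 2^16 map exactly to zero, and inputs below 2^20 with N below 2^16 stay below 16, so the results are badly biased down either way.

```c
#include <stdint.h>

/* The multiply-shift map under discussion. */
static uint32_t reduce(uint32_t x, uint32_t N) {
  return (uint32_t)(((uint64_t)x * N) >> 32);
}

static int check_small_inputs(void) {
  /* x < 2^16 and N < 2^16: the product is below 2^32, so the result is 0. */
  for (uint32_t x = 0; x < (1u << 16); x += 997) {
    if (reduce(x, 65535) != 0) return 0;
  }
  /* x < 2^20 and N < 2^16: the product is below 2^36, so the result is < 16. */
  for (uint32_t x = 0; x < (1u << 20); x += 997) {
    if (reduce(x, 65535) >= 16) return 0;
  }
  return 1;
}
```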
by doing just ID * some perhaps prime number. Angle brackets were removed by HTML 🙂
What is the cost of computing a good hash value?
Of course, you are correct that there are many cases where the modulo reduction will not be a bottleneck. In such cases, this fast map is useless.
However, there are real-world examples where people set their capacity to a power of two to improve performance. A latency of a couple of dozens of cycles (for a division) is not great.
I also suspect that in many cases, given a list of IDs you can get reasonable results by doing just ID % N.
Yes. Moreover, as alluded to in my post, if N is known ahead of time, you can avoid the modulo reduction entirely and replace it with a faster function. See Hacker’s Delight.
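For a known N, the division can indeed be replaced by a multiply and shift with a precomputed "magic" reciprocal, as in Hacker's Delight. As a hedged sketch, here is the classic form for N = 3 (0xAAAAAAAB is the standard unsigned magic constant for dividing by 3; compilers emit essentially this when N is a compile-time constant):

```c
#include <stdint.h>

/* Unsigned x % 3 without a division: multiply by a precomputed
   reciprocal, shift, then subtract. Valid for all 32-bit x. */
static uint32_t mod3(uint32_t x) {
  uint32_t q = (uint32_t)(((uint64_t)x * 0xAAAAAAABu) >> 33); /* q = x / 3 */
  return x - 3 * q;
}
```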
Agree. Good point about N known in advance.
You mean: ID % (2^N)? If so, chances are the compiler turns this expression into ID & ((2^N) – 1), which is even faster than the fast alternative with a multiplication followed by a shift, isn’t it?
@hlide
Yes, though being limited by powers of two can be wasteful.
Would byte-swapping x help to preserve lower-order bits, e.g.:
uint32_t reduce2(uint32_t x, uint32_t N) {
return static_cast<uint64_t>(bswap_32(x)) * N >> 32;
}
On second thought, if modulo reduction slows you down, you may want to try SIMD and bulk conversion. You could do something like:
__m128 recip = _mm_set1_ps(1.0f / float(N));
__m128 mult = _mm_set1_ps(float(N));
__m128 to_convert = _mm_loadu_ps(…);
__m128 tmp = _mm_mul_ps(mult, _mm_floor_ps(_mm_mul_ps(to_convert, recip)));
__m128i res = _mm_cvtps_epi32(_mm_floor_ps(_mm_sub_ps(to_convert, tmp)));
Perhaps an additional SIMD min is required to ensure the value is < N. Anyway, it seems worth trying such a solution.
Yes, Leonid, I suspect that you are quite right. There are cases where using floating-point numbers, especially in conjunction with SIMD instructions, could lead to great results.
As long as your reciprocal is rounded down (not hard to ensure when you create it; can decrement if desirable), you don’t need a min, because float math is monotonic. However, you now only get 24 bits of (reduced) hash, which is not that much.
To re-phrase Daniel:
We want to generate an index (I) chosen uniformly from the range 0..N-1.
We have a 32-bit random value (R), and assume a uniform distribution.
When N is a power-of-two, then the good/easy/fair answer: mask off the higher bits of R. (I have used this more than a few times, as a programmer.)
When N is not a power-of-two, the easy answer is:
I = R mod N
Two problems: the modulus operation is somewhat expensive, and the distribution is not *quite* even (that last wrap-around) – though likely good enough.
Daniel’s solution is (in two steps):
R = R * N
We now have a value in the range 0 to N * 2^32.
I = R >> 32
We now have a value in the range 0 to N. The two operations (multiply and shift) are almost certainly cheaper than the modulus operation.
I suspect the values are not *quite* uniform, but likely good enough. (How uniformly distributed are the multiplication products in each of the N buckets?)
Might write a test program to measure. 🙂
If N is small compared to 2^32, then I submit to you that you won’t be able to measure any bias.
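One way to see this concretely is to enumerate exhaustively at a scaled-down word size. The sketch below uses 16-bit words instead of 32-bit purely to make full enumeration cheap; the structure of the map is identical, and every bucket receives either floor(2^16/N) or ceil(2^16/N) values:

```c
#include <stdint.h>
#include <string.h>

/* 16-bit version of the multiply-shift map, for exhaustive testing. */
static uint16_t reduce16(uint16_t x, uint16_t N) {
  return (uint16_t)(((uint32_t)x * N) >> 16);
}

/* Difference between the fullest and emptiest bucket over all 2^16
   inputs; fairness means this is at most 1. */
static uint32_t max_bucket_spread(uint16_t N) {
  static uint32_t counts[65536];
  memset(counts, 0, sizeof(counts));
  for (uint32_t x = 0; x < 65536; x++) counts[reduce16((uint16_t)x, N)]++;
  uint32_t lo = UINT32_MAX, hi = 0;
  for (uint32_t i = 0; i < N; i++) {
    if (counts[i] < lo) lo = counts[i];
    if (counts[i] > hi) hi = counts[i];
  }
  return hi - lo;
}
```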
It’s a fairly simple proof. You stretch the [0, 2^32) random variable range out to multiples of N in [0, N*2^32] and then map [0,2^32) to 0, [2^32, 2*2^32) to 1, etc. by integer division (>>32). The number of multiples of N in each range is floor or ceil(2^32/N).
Thanks. I had a relatively short proof that used elementary number theory (a few lines), but it slightly got out of hand when I tried to make it accessible to people who may not master these concepts. A more direct approach like you suggest would have been better! Kudos!
I have updated my blog post with a more direct proof.
Thanks — I actually thought it through hoping to find a different method, and ended up with a proof of the same one. Oh well.
Note that expression `((uint64_t) x * (uint64_t) N) >> 32` on x86-64 compiles into a 64-bit multiplication that yields a 128-bit result, followed by the shift.
With a bit of inline assembly it can use a shorter 32-bit mul instruction that yields a 64-bit result in EDX:EAX. The required result is in EDX, no shift instruction is required.
That’s a very good point. I am somewhat disappointed that the compiler fails to exploit this optimization.
For the record, something like this could do the job…
uint32_t asm_highmult32to32(uint32_t u, uint32_t v) {
uint32_t answer;
__asm__ ("mull %[v]" /* unsigned multiply: high half lands in EDX */
: "=d" (answer), "+a" (u)
: [v] "r" (v)
: "cc");
return answer;
}
But it is unlikely to help performance as is. You’d probably need to rewrite a larger chunk of code using assembly.
For some reason, gcc does the correct thing with `__uint128_t` operands.
See https://godbolt.org/g/kZ91Yu .
mulx instruction would be ideal here: takes 2 assembly instructions to produce the value in eax, no flags affected:
uint32_t reduce3(uint32_t x, uint32_t N) {
uint32_t r;
asm("movl %[N], %%edx\n\t"
"mulxl %[x], %[r], %[r]"
: [r]"=r"(r)
: [x]"r"(x), [N]"r"(N)
: "edx");
return r;
}
Doing this for floats (random within [0,x)) is also fairly easy since it just requires subtracting 32 from the exponent, which ldexp can do, eg ldexp(rand(), -32) gives a float in [0,1) (assuming rand() is 32-bit here).
There’s some loss of precision as floats discard lower-order bits automatically, unless we cast to a longer mantissa first.
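A minimal sketch of the ldexp approach, assuming a 32-bit random value and using a double precisely to avoid the lower-order-bit loss just mentioned:

```c
#include <math.h>
#include <stdint.h>

/* Map a 32-bit random value into [0, 1) by subtracting 32 from the
   exponent. Using double keeps all 32 input bits exact; a float
   mantissa (24 bits) would silently drop the low bits. */
static double to_unit_interval(uint32_t r) {
  return ldexp((double)r, -32);
}
```

Multiplying the result by x then yields a value in [0, x).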
Since I don’t read math proofs daily, this took me a bit of thinking to parse. At the end I realized all this was a variation of:
rand(1) * N
In order:
( rand(2^32) * N ) / 2^32
is equivalent to
( rand(2^32) / 2^32 ) * N
which is equivalent to
floor( (float)[0,1) * N )
Yes, for some fuzzy notion of “equivalent”. For example, rand(2^32) / 2^32 is not equivalent to picking a number in [0,1) fairly.
This is a godsend for FPGAs. Modulus by an arbitrary number is expensive in both time and space in hardware. Shifts are free and nearly all modern FPGAs have hard multiplier blocks. This is a perfect solution. Thanks !
I created a similar approach to fast range named “fast mod and scale”. I was not aware of fast range until I started to survey alternatives for writing my blog post. http://www.idryman.org/blog/2017/05/03/writing-a-damn-fast-hash-table-with-tiny-memory-footprints/
I’m happy to see other people also interested in this problem. Even though the solution is just one or two lines of code, it is still an important problem to solve!
It first masks the hash value to the next power of two, then does the fast range (I named it scaling) described in your post. It costs a few more cycles, but the overhead is small enough because of modern CPU pipelining. The major usage of fast mod and scale (my method) is to make hash table probing as easy to implement as using straight mod. For hash tables that don’t use probing, like cuckoo hashing or standard chaining, fast range is sufficient.
More analysis and implementation details are in the blog I posted above.
This also made it into the world’s strongest open source chess engine (Stockfish):
https://github.com/official-stockfish/Stockfish/commit/2198cd0524574f0d9df8c0ec9aaf14ad8c94402b
That’s amazing.
I’m using an x64 laptop with Windows 10 and tried the fast modulo function, but without success. The result is always zero. Why??? Did I miss something?
Can you share the code that gives you always zero?
#include <iostream>
#include <cstdint>
using namespace std;
uint32_t reduce(uint32_t x, uint32_t N)
{
return (((uint64_t)x * (uint64_t)N) >> 32);
}
int main()
{
cout<< reduce(12, 7);
return 0;
}
I tried on my laptop (x64) and on “www.onlinegdb.com”, but I received the same result: zero. Did I miss something?
What do you expect reduce(12,7) to do?
Yet wasn’t 7 chosen by a fair single cubic die roll?
I tried many different values, such as 20, 33, 5 (modulo 7), and I received the same result: zero.
Did you try random 32-bit numbers? Please do. Here is a sample program…
https://gist.github.com/lemire/b9596313593dcb6aa311f5e5aa60f517
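The confusion dissolves once the inputs span the full 32-bit range. A minimal illustration, mirroring the reduce function above: a tiny x yields a tiny product, so the high 32 bits are zero, while an x near the top of the 32-bit range lands near the top of [0, N).

```c
#include <stdint.h>

/* The multiply-shift reduction from the post. */
static uint32_t reduce(uint32_t x, uint32_t N) {
  return (uint32_t)(((uint64_t)x * (uint64_t)N) >> 32);
}
```

For example, reduce(12, 7) is 0 because 12 * 7 = 84 is far below 2^32, but reduce(0xC0000000, 7) is 5, since 0xC0000000 is three quarters of 2^32 and floor(0.75 * 7) = 5.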
Wow! Thank you very much!
Isn’t this basically fixed point arithmetic where the bottom 32 bits are the fractional part?
Basically, yes. It is really not hard conceptually.
Unless x * N > (1 << shiftAmount), the result is 0. I don’t know how you are computing fairness, but your post is misleading. In fact, you’re returning the quotient of the product by 2 raised to the power of shiftAmount => reduce = floor[(x*N) / 2^shiftAmount]
Since, in a hash table, you are unlikely to use a power of 2 for the bucket count, this is not going to work well, and you’ll get a lot of collisions if you’re using a power of 2 for the divisor. Since N cannot be in [0; powerOfTwoRange], this is useless.
If you know that N is in [0; 2^32] range, nice. But that’s a condition that rarely happens in real life.
Let’s say you have a hash table for a more compact storage of ID => Value; then this function will produce a huge number of collisions on the low bucket indexes and almost no collisions on the last buckets.
Obviously, in a benchmark where values are equally probable across the [0; 2^32] range, this method is as fair as a modulo. Yet this rarely happens in reality, and the cost of handling collisions in a hash table is many times more important than the cost of the modulo.
You’d probably get a better result by SWARing the product before dividing (that is,
p = x * N
p = p ^ (p>>32)
and so on) then masking by 0xFFFFFFFF. But at some point, I doubt you’ll beat a modulo operation.
Note that the trick described in this blog post is used in many real-life systems, some of which you are maybe relying upon. It definitely works.
If you know that N is in [0; 2^32] range, nice. But that’s a condition that rarely happens in real life.
You are expected to use this trick, like the modulo reduction, after hashing. You can hash to the [0, 2^32) range.
But at some point, I doubt you’ll beat a modulo operation
You need to hash your objects in all cases. You should never “just” use the modulo reduction. What you typically do is hash and then reduce. You can either reduce using the modulo reduction or using this trick.
Most hash functions, just like most random number generators, have output sizes that are powers of two.
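A sketch of hash-then-reduce, using murmur3's 32-bit finalizer as a stand-in for "some hash function" (any function that spreads keys over the full 32-bit range would do; the constants below are murmur3's, but the choice is illustrative):

```c
#include <stdint.h>

/* murmur3's 32-bit finalizer: spreads keys over the whole 32-bit range. */
static uint32_t hash32(uint32_t h) {
  h ^= h >> 16;
  h *= 0x85ebca6bu;
  h ^= h >> 13;
  h *= 0xc2b2ae35u;
  h ^= h >> 16;
  return h;
}

/* Hash first, then reduce: the multiply-shift map (or a modulo)
   only behaves fairly on a full-range, well-spread input. */
static uint32_t bucket_of(uint32_t key, uint32_t N) {
  return (uint32_t)(((uint64_t)hash32(key) * N) >> 32);
}
```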
This works great! Thank you!
OK, so what if the RNG output is 64-bit? For example, compiling xoshiro256** to a 32-bit executable.
So, you can’t do this method on 64-bit output on 32-bit platforms because there are no 128-bit integers on 32-bit platforms!
On 32-bit systems, there are many optimizations you cannot do, so this technique should probably be the least of your worries.
Why not do real modulo though?
X % Y = (BITAND(CEILING(X*256/Y),255)*Y)>>8
The reciprocal 256/Y can be approximate
Yes, you can do this as well, though it is slightly more complicated. Please see
Faster Remainder by Direct Computation: Applications to Compilers and Software Libraries
Software: Practice and Experience 49 (6), 2019
https://arxiv.org/abs/1902.01961
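The core of the direct-computation remainder from that paper, sketched here for 32-bit operands (this assumes a compiler with `__uint128_t` support, so it is for 64-bit platforms; the precomputed constant c is ceil(2^64 / N)):

```c
#include <stdint.h>

/* Remainder by direct computation (the idea from the cited paper):
   with c = ceil(2^64 / N) precomputed, x % N is the high 64 bits
   of (c * x mod 2^64) * N. */
static uint64_t precompute_c(uint32_t N) {
  return UINT64_MAX / N + 1; /* ceil(2^64 / N) */
}

static uint32_t fastmod_u32(uint32_t x, uint64_t c, uint32_t N) {
  uint64_t lowbits = c * x; /* wraps mod 2^64 on purpose */
  return (uint32_t)(((__uint128_t)lowbits * N) >> 64);
}
```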
Thanks. This is amazingly fast in my case of a bloomfilter probe function. Even faster than the SIMD implementation.
Correction. There was a bug in my implementation. The fast modulo is close to the performance of “power-of-two”. The SIMD (AVX512) method is still the fastest.
Hello Daniel,
Thank you for this technique; I have used it in a number of projects.
Today I was wondering about a generalization: say you have a hash output x (in range [0,2^32) like here), but want to extract two independent ranged values from it, with ranges N1 and N2 (where N1*N2 < 2^32). And it seems there is a very clean solution:
out1 = (x * N1) >> 32;
x2 = (uint32_t)(x * N1);
out2 = (x2 * N2) >> 32;
Effectively the multiplication x*N1 leaves us with a 64-bit number whose high bits are the first output, and the low bits are the remaining entropy – conveniently scaled to range [0,2**32) again, ready to be used for a second reduction.
It can be applied iteratively, though the quality of the extracted numbers will degrade as more entropy is extracted.
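The two-step extraction above can be written out directly (a sketch of the formulas, with illustrative names):

```c
#include <stdint.h>

/* Extract two ranged values from one 32-bit hash: the high bits of
   x*N1 give the first value; the low 32 bits, already scaled back
   to [0, 2^32), feed the second reduction. */
static void extract_two(uint32_t x, uint32_t N1, uint32_t N2,
                        uint32_t *out1, uint32_t *out2) {
  uint64_t p1 = (uint64_t)x * N1;
  *out1 = (uint32_t)(p1 >> 32);
  uint32_t x2 = (uint32_t)p1; /* remaining entropy, rescaled */
  *out2 = (uint32_t)(((uint64_t)x2 * N2) >> 32);
}
```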
@Pieter It does not seem unreasonable.
An even better construction:
start with state = x
iterate over ranges N[i] of numbers to be extracted:
mask = ~N[i] & (N[i]-1)
tmp = state * range
output[i] = tmp >> 32
state = (state + (out & mask)) & 0xFFFFFFFF
This preserves all entropy; every iteration merely permutes the state. This means that all produced numbers are individually as uniformly distributed as extracting directly from x. Furthermore, it moves the “unused” portion of the entropy to the top of the state, so that it gets preferentially used in the next step. Some testing with small numbers seems to indicate that this actually produces optimal joint distributions of subsequently produced numbers (i.e. the distribution of output[k…k+j] is as well distributed as extracting a number with range N[k] * N[k+1] * … * [k+j] directly).
What is “out” in your pseudocode?
Oops, output[i].
I made another typo.
It is:
mask = ~N[i] & (N[i]-1)
tmp = state * range
output[i] = tmp >> 32
state = (tmp + (output[i] & mask)) & 0xFFFFFFFF
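Putting the corrected pseudocode into C (a sketch; `range` in the pseudocode is read as N[i], and the & 0xFFFFFFFF truncation becomes a cast to uint32_t):

```c
#include <stdint.h>

/* Iteratively extract ranged values from one 32-bit input, feeding
   the unused entropy back into the state as described above. */
static void extract_many(uint32_t x, const uint32_t *N,
                         uint32_t *output, int count) {
  uint32_t state = x;
  for (int i = 0; i < count; i++) {
    uint32_t mask = ~N[i] & (N[i] - 1);
    uint64_t tmp = (uint64_t)state * N[i];
    output[i] = (uint32_t)(tmp >> 32);
    state = (uint32_t)(tmp + (output[i] & mask)); /* mod 2^32 */
  }
}
```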
I wrote a bit more extensively about this idea: https://github.com/sipa/writeups/tree/main/uniform-range-extraction
Thanks for the link.
For N=255 you can also get rid of that multiplication by 255 – it can be replaced by an eight bit shift left and a subtraction of the original value.
A nice trick for N=255 is to note that 1/255 = 2^-8 + 2^-16 + 2^-24 + 2^-32 + … If you’re interested in handling 32-bit values and you’re on a 64-bit processor, you can get a lot of mileage out of shifts and adds built around this. I did one today that uses about 12 lines of x86_64 assembly, just shifts and add/subtract. The “wart” is that you do have to truncate that series somewhere, and consequently exact multiples of 255 will return 255, rather than 0. But one below goes to 254, and one above goes to 1, so it’s just a matter of catching the 255 results and replacing them with 0.
Sure beats the heck out of idiv.
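A byte-folding form of the same series can sketch the idea in portable C rather than assembly: since 256 ≡ 1 (mod 255), folding the bytes together preserves the value mod 255, and the truncated series leaves 255 where 0 belongs, exactly the wart described above.

```c
#include <stdint.h>

/* x % 255 using only shifts and adds: 256 = 1 (mod 255), so adding
   the halves/bytes together preserves the remainder. The truncated
   series yields 255 (not 0) for nonzero multiples of 255, so we
   patch that one case at the end. */
static uint32_t mod255(uint32_t x) {
  x = (x & 0xFFFF) + (x >> 16); /* fold 32 bits down to 17 */
  x = (x & 0xFF) + (x >> 8);    /* fold again */
  x = (x & 0xFF) + (x >> 8);    /* value now at most 0x100 */
  x = (x & 0xFF) + (x >> 8);    /* value now at most 0xFF   */
  return x == 255 ? 0 : x;      /* patch the multiples-of-255 wart */
}
```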
I have tried, but it is not working; it always returns 0.
https://godbolt.org/z/P51j1P3K3
Try to multiply numbers that cover the full range of values.
Thanks for the, as always, insightful article! This technique is mentioned in
["Efficient Hash Probes on Modern Processors"][1]
but only in passing, without a proof, analysis, or implementation. I’d only suggest emphasizing a little more the fact that it’s not a drop-in replacement for `x % N`, as some may assume, only to find out that for sequential identifiers with relatively small Ns they are getting 0s in production 🙂