I wrote a generic function to convert a binary reflected Gray code to standard binary, using an algorithm I found on this page. Here is that algorithm:
unsigned short grayToBinary(unsigned short num)
{
    // Each step XORs the value with a copy of itself shifted right,
    // halving the shift distance each time: 8, 4, 2, 1 for 16 bits.
    unsigned short temp = num ^ (num >> 8);
    temp ^= (temp >> 4);
    temp ^= (temp >> 2);
    temp ^= (temp >> 1);
    return temp;
}
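For example, tracing the cascade on a small input (a worked example of my own, not from the page linked above): the Gray code 0b1010 encodes binary 12 (0b1100), and the XOR steps fold it back like this:

#include <cassert>

int main()
{
    // Gray code 0b1010 is the encoding of binary 12 (0b1100).
    unsigned short temp = 0b1010 ^ (0b1010 >> 8); // 0b1010, the high byte is empty
    temp ^= temp >> 4;                            // 0b1010, high nibble still empty
    temp ^= temp >> 2;                            // 0b1000
    temp ^= temp >> 1;                            // 0b1100 == 12
    assert(temp == 12);
}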
Then I modified the code so that it would work for any standard unsigned
type. Here is what I wrote:
template<typename Uint>
Uint grayToBinary(Uint value)
{
    // mask is the shift distance; it starts at sizeof(Uint)*4
    // and is halved on each iteration until it reaches zero.
    for (Uint mask = sizeof(Uint) * 4; mask; mask >>= 1)
    {
        value ^= value >> mask;
    }
    return value;
}
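As a sanity check, here is a sketch of the kind of round-trip test I used (binaryToGray is my own test helper, just the standard n ^ (n >> 1) encoder; it is not part of the function being asked about):

#include <cassert>
#include <cstdint>

template<typename Uint>
Uint grayToBinary(Uint value)
{
    for (Uint mask = sizeof(Uint) * 4; mask; mask >>= 1)
    {
        value ^= value >> mask;
    }
    return value;
}

// Standard binary-to-Gray encoding, used here only to drive the test.
template<typename Uint>
Uint binaryToGray(Uint value)
{
    return value ^ (value >> 1);
}

int main()
{
    // Every 16-bit value must survive encode-then-decode.
    for (unsigned int i = 0; i <= 0xFFFF; ++i)
    {
        assert(grayToBinary(binaryToGray(static_cast<std::uint16_t>(i))) == i);
    }
    // Spot checks for the wider types.
    assert(grayToBinary(binaryToGray(std::uint32_t{0xDEADBEEF})) == 0xDEADBEEF);
    assert(grayToBinary(binaryToGray(std::uint64_t{0x0123456789ABCDEF})) == 0x0123456789ABCDEF);
}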
The algorithm seems to work fine for every standard unsigned type. However, when writing it, I instinctively used sizeof(Uint)*4 as the initial shift distance, since it made sense for the loop's bound to depend on the type's size, but the truth is that I have no idea what sizeof(Uint)*4 actually represents. For now it is a magic number: I am unable to explain why it works with *4 and not with any other coefficient.
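To make the question concrete, here is a small sketch (printShifts is a hypothetical helper written only for this illustration) that prints the shift distances the loop actually performs for each fixed-width type:

#include <cstdint>
#include <cstdio>

// Hypothetical helper: prints the shift sequence the loop above would use.
template<typename Uint>
void printShifts(const char* name)
{
    std::printf("%s:", name);
    for (Uint mask = sizeof(Uint) * 4; mask; mask >>= 1)
    {
        std::printf(" %u", static_cast<unsigned>(mask));
    }
    std::printf("\n");
}

int main()
{
    printShifts<std::uint8_t>("uint8_t");   // 4 2 1
    printShifts<std::uint16_t>("uint16_t"); // 8 4 2 1
    printShifts<std::uint32_t>("uint32_t"); // 16 8 4 2 1
    printShifts<std::uint64_t>("uint64_t"); // 32 16 8 4 2 1
}

For unsigned short this reproduces exactly the 8, 4, 2, 1 sequence hard-coded in the original function.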
Does anybody know what this magic number actually corresponds to?