I wanted to check that a large computed memory requirement (stored in an unsigned long long) would be roughly compatible with the memory model my code was compiled with.
I assumed that right-shifting the requirement by the number of bits in a pointer would yield 0 if and only if it fits in the virtual address space (ignoring practical OS limitations).
Unfortunately, I got unexpected results when shifting a 64-bit number by 64 bits on some compilers.
Small demo:
#include <iostream>
#include <limits>
using namespace std;

int main() {
    const int ubits = sizeof(unsigned) * 8;             // number of bits, assuming 8 per byte
    const int ullbits = sizeof(unsigned long long) * 8;
    cout << ubits << " bits for an unsigned\n";
    cout << ullbits << " bits for an unsigned long long\n";
    unsigned utest = numeric_limits<unsigned>::max();   // some big numbers (all bits set)
    unsigned long long ulltest = numeric_limits<unsigned long long>::max();
    cout << "unsigned " << utest << " rshift by " << ubits << " = "
         << (utest >> ubits) << endl;
    cout << "unsigned long long " << ulltest << " rshift by " << ullbits << " = "
         << (ulltest >> ullbits) << endl;
}
I expected both displayed rshift results to be 0.
This works as expected with gcc.
But with MSVC 13:
- in 32-bit debug: the 32-bit right shift of the unsigned has NO EFFECT (it displays the original number), but the 64-bit shift of the unsigned long long is 0 as expected.
- in 64-bit debug: the right shift has NO EFFECT in both cases.
- in 32- and 64-bit release: the right shift yields 0 as expected in both cases.
I'd like to know whether this is a compiler bug, or whether the behaviour is undefined.