My head is starting to hurt... I've been looking at this way too long.
I'm trying to mask the most significant nibble of an int, regardless of the int's bit length and the endianness of the machine. Let's say x = 8425 = 0010 0000 1110 1001 = 0x20E9. I know that to get the least significant nibble, 9, I just need to do something like x & 0xF to get back 9. But how about the most significant nibble, 2?
I apologize if my logic from here on out falls apart; my brain is completely fried, but here I go:
My book tells me that the bit length w of the data type int can be computed as w = sizeof(int) << 3. If I knew that the machine were big-endian, I could do 0xF << (w - 4) to get 1111 in the most significant nibble and 0000 everywhere else, i.e. 1111 0000 0000 0000.
If I knew that the machine were little-endian, I could do 0xF >> (w - 8) to get 0000 0000 0000 1111. Fortunately, this works even though we are told to assume that right shifts are done arithmetically, because the sign bit of 0xF is 0, so only zeros get shifted in. But this is not a proper solution: we are not allowed to test for endianness and then proceed from there, so what do I do?