I've been asked to create a function that takes an unsigned 32-bit int and unpacks three signed 10-bit ints from it. The context for this task was an unpacking function that takes a 32-bit unsigned int as input along with an int array of length 3 called xyz, and fills the array with the x, y and z co-ordinates unpacked from the original int, each co-ordinate being a 10-bit signed int.
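In case it helps, this is the layout I've assumed from the spec (the field order is my own reading of it, so it may not be exactly what the real packer does), along with a small pack helper I wrote purely for building test values:

/* My assumed layout (it matches the shifts in my unpacking code below):
 * x in bits 29..20, y in bits 19..10, z in bits 9..0, each field stored
 * as a 10-bit two's complement value. */
unsigned int pack(int x, int y, int z) {
    return ((unsigned int)(x & 0x3FF) << 20) |
           ((unsigned int)(y & 0x3FF) << 10) |
            (unsigned int)(z & 0x3FF);
}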
I wrote the following function, which seems to work fine for positive values but not for negative ones. I've tried printing the hex values of the output co-ordinates to get a better sense of the underlying binary, and everything looks fine to me, but the interpreted number comes out wrong. As far as I understand, for signed ints the compiler uses two's complement to interpret the binary.
void coordinates(unsigned int p, int xyz[3]) {
    unsigned int z = ((p << 22) >> 22) & 0x800003FF;
    xyz[0] = (~0 << 32) | (p >> 20);
    xyz[1] = (~0 << 32) | (p >> 10) & 0x3FF;
    xyz[2] = (~0 << 32) | z;
}
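And this is roughly how I'm testing it, using the pack helper and the coordinates function above (1, 2 and -3 are just example values):

#include <stdio.h>

int main(void) {
    /* x = 1, y = 2, z = -3; -3 in a 10-bit two's complement field is 0x3FD,
     * so this is the same as (1u << 20) | (2u << 10) | 0x3FDu. */
    unsigned int p = pack(1, 2, -3);
    int xyz[3];

    coordinates(p, xyz);

    /* I expect this to print 1 2 -3, but the negative co-ordinate is the
     * one that comes back wrong for me. */
    printf("%d %d %d\n", xyz[0], xyz[1], xyz[2]);
    printf("%x %x %x\n", (unsigned int)xyz[0], (unsigned int)xyz[1],
           (unsigned int)xyz[2]);
    return 0;
}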
Let me know if you have any further questions and I'll do my best to answer them.