I get values from hardware registers that are stored as 16-bit unsigned integers but actually represent signed quantities. Knowing that bit 15 (the most significant bit) is the sign bit, a colleague wrote the following snippet to convert them to signed (2's complement) values:
/* Mask for the 15 data bits (bit 15 is the sign) */
#define DATAMASK 0x7FFF
/* Sign is bit 15 (counting from zero), above the 15 data bits */
#define SIGNMASK 0x8000
#define SIGNBIT  15

int16_t calc2sComplement(uint16_t data)
{
    int16_t temp, sign;
    int16_t signData;

    sign = (int16_t)((data & SIGNMASK) >> SIGNBIT);
    if (sign)
    {
        /* Negative: 2's complement negation is ~x + 1, so the
           magnitude is the inverted data bits plus one. */
        temp = (~data) & DATAMASK;
        signData = (int16_t)(-(temp + 1));
    }
    else
    {
        temp = (data & DATAMASK);
        signData = temp;
    }
    return(signData);
}
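To illustrate the intent, this is how I exercise it on a few made-up sample values (these are for illustration only, not real register contents):

#include <stdint.h>
#include <stdio.h>

int main()
{
    /* Made-up bit patterns: +1, -1, the most negative value, zero */
    uint16_t samples[] = { 0x0001, 0xFFFF, 0x8000, 0x0000 };
    for (unsigned i = 0; i < sizeof(samples) / sizeof(samples[0]); ++i)
    {
        printf("0x%04X -> %d\n", (unsigned)samples[i],
               (int)calc2sComplement(samples[i]));
    }
    return 0;
}

Reading the patterns as 2's complement, the expected results are 1, -1, -32768 and 0.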
As far as I know, unsigned and signed integer types of the same width differ only in their type and in how the most significant bit is interpreted, so a cast such as the following should work as well:
int16_t calc2sComplement(uint16_t data)
{
    return(static_cast<int16_t>(data));
}
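If the cast turns out to be a problem (in C++03, converting an unsigned value that does not fit into a signed type is implementation-defined), a minimal portable sketch I considered looks like this; decodeS16 is my own name, and it assumes the raw bits really are 2's complement:

#include <stdint.h>

/* Decode a 16-bit 2's complement bit pattern without relying on the
   implementation-defined unsigned-to-signed conversion. */
int16_t decodeS16(uint16_t raw)
{
    if (raw & 0x8000u)
    {
        /* Negative: value = (low 15 bits) - 32768, computed in a
           wider type so the arithmetic never overflows. */
        return (int16_t)((int)(raw & 0x7FFFu) - 32768);
    }
    return (int16_t)raw;
}

On a 2's complement machine a compiler can usually optimize this down to the same plain move as the cast.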
When values need to be pushed back to the hardware, the reverse operation is straightforward, unlike with the manual calculation. The advantage of the former solution is that it is toolchain-independent: since the toolchain can change sooner or later (currently gcc 4.4.7, hence C++03), I would prefer a solution that does not depend on it, so there won't be any regression when the code is recompiled years later. The advantage of the latter is that it is more readable, closer to the standard, and avoids unnecessary operations.
What would be best in my case to make sure the behaviour stays the same if the code is compiled again after a toolchain change (the standard types are even redefined somewhere in the toolchain, and I do not really have control over that)? If you would keep the first solution, how would you improve it and/or code the reverse conversion (keep in mind that data can be a pointer to a buffer of data)? A sketch of the reverse conversion I have in mind follows.
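For context, this is roughly the buffer-oriented reverse conversion I have in mind (the function name and signature are made up for the sketch). The signed-to-unsigned direction is well defined by the standard (reduction modulo 2^16), so it is just a cast per element:

#include <stddef.h>
#include <stdint.h>

/* Encode a buffer of signed values back into the raw 16-bit register
   format, assuming the hardware expects 2's complement bit patterns. */
void encodeBuffer(const int16_t* in, uint16_t* out, size_t count)
{
    for (size_t i = 0; i < count; ++i)
    {
        /* Well defined: the value is reduced modulo 2^16, which
           reproduces the 2's complement bit pattern. */
        out[i] = (uint16_t)(in[i]);
    }
}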