I'm having trouble getting a floating-point output. I'm reading an accelerometer on 3 axes. Each axis outputs a high byte and a low byte that get combined into a single 16-bit signed integer. The magnitude of the three axis values is then computed as mag = sqrt(x*x + y*y + z*z).
At least, that's how it should work, but I can't even manage to multiply two large numbers together to produce the correct value in my program, and I see no reason for it as the variables should all be plenty large to hold any results. Here's the code:
double xAccel = 0;    // combined 16-bit X acceleration reading
double accelSum = 0;
xAccel = xAccelRead();
accelSum = 10000 * 10000;
char aMag[64];
sprintf(aMag, "Accel Mag: %.2f", accelSum);
clearDisplay();
writeText(aMag, 0, 0, WHITE, BLACK, 1);
OLED_buffer();
The output of xAccelRead() is a 16-bit int. Normally, accelSum would be set to the magnitude equation given above, but for now even static numbers aren't working. If I use 100 * 100, it works, but 10000 * 10000 doesn't. The result should be 100,000,000, but the output I get is:
Accel Mag: -7936.00
I can't understand why. I've tried making the variables int32, int64, and now double, and the problem is the same with all of them. I've set the correct linker options in Atmel Studio to enable floating-point sprintf support, so that's not the problem. I'm guessing there is an overflow somewhere, but I can't figure out where, since every variable involved has a type large enough to hold values in the hundreds of billions, which is far more than I need anyway.
Say I set accelSum = 1000 * 1000. That's a mere million. Plenty small for an int32 to hold. But my output is:
Accel Mag: 16960.00
Even with 200 * 200, the output is -25536.00.
This has to be something stupidly simple. If anybody can help me out, I'd really appreciate it!