I am writing a program that takes a floating-point number from input and outputs the IEEE 754 (single-precision) hex representation of that number.
My approach:
- Split the number into its whole and fractional parts.
- Convert the whole and fractional parts to binary. a) Repeatedly divide the whole part by 2; if number mod 2 == 1, append 1, else append 0; then reverse the result. b) Repeatedly multiply the fractional part by 2; if the result is greater than or equal to 1, emit a 1 and subtract 1, otherwise emit a 0.
- If the number is less than 0, the sign bit is 1; otherwise it is 0.
- Compute the exponent (127 +/- x) and convert it to binary like a whole number.
- Build the mantissa.
- Concatenate sign + exponent + mantissa and convert the result to hex.
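In Python-like form, the steps above look roughly like this (a sketch of my method, not my actual submission; it assumes |x| >= 1 and simply truncates the mantissa to 23 bits, which I suspect is relevant to my question):

```python
def whole_to_binary(n):
    # Repeated division by 2; remainders give the bits least-significant first,
    # so the string is reversed at the end ("turn whole number around").
    bits = ""
    while n > 0:
        bits += "1" if n % 2 == 1 else "0"
        n //= 2
    return bits[::-1] or "0"

def fraction_to_binary(f, max_bits=32):
    # Repeated multiplication by 2; the integer part of each product is the next bit.
    bits = ""
    for _ in range(max_bits):
        f *= 2
        if f >= 1:
            bits += "1"
            f -= 1
        else:
            bits += "0"
    return bits

def float_to_hex(x):
    sign = "1" if x < 0 else "0"
    x = abs(x)
    whole = whole_to_binary(int(x))
    frac = fraction_to_binary(x - int(x))
    # Normalize as 1.xxxx * 2^e, so e = number of whole bits - 1 (assumes x >= 1).
    e = len(whole) - 1
    exponent = format(127 + e, "08b")
    # Drop the implicit leading 1 and TRUNCATE to 23 bits -- no rounding.
    mantissa = (whole[1:] + frac)[:23]
    return format(int(sign + exponent + mantissa, 2), "08x")

print(float_to_hex(-123123.2323))  # prints "c7f0799d" -- my truncated result
```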
My program passes every test case posted on the SPOJ forum; I had to hunt for failing inputs manually myself.
For the number -123123.2323, my program produced:
(hex) c7 f0 79 9d
Binary whole = 11110000011110011, binary fraction = 0011101101111000000000...
Mantissa=11100000111100110011101
Meanwhile https://www.h-schmidt.net/FloatConverter/IEEE754.html gives me:
Mantissa=11100000111100110011110
How does the conversion work, and why does the last bit differ in this case? Using https://www.rapidtables.com/convert/number/decimal-to-binary.html?x=0.2323, converting 0.2323 to binary gives me 0.0011101101111. I used these bits to complete the mantissa after the whole-number bits (with the leading 1 dropped), up to 23 bits. What am I doing wrong?
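For comparison, I checked what Python itself produces for the same number via the `struct` module, which packs through the platform's IEEE 754 single-precision conversion (this rounds to nearest rather than truncating):

```python
import struct

# Pack -123123.2323 as a big-endian IEEE 754 single-precision float.
# The double-to-float conversion rounds to nearest, so the last mantissa
# bits come out ...0011110 instead of my truncated ...0011101.
packed = struct.pack(">f", -123123.2323)
print(packed.hex())  # prints "c7f0799e" -- note 9e, not my 9d
```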