I'm stuck on a homework assignment; I need to convert a binary float to a decimal fraction. I feel like I understand the process, but I'm not getting the right answer. Here's my thought process.
I have the binary float: 0 000 101
- The bias for a 3-bit exponent field is 2^(3-1) - 1 = 3
- The mantissa becomes 1.101 (base 2)
- The value of the exponent bits, 0, minus the bias, 3, is -3, so the binary point of the mantissa moves left 3 places: 0.001101
- In base 10, that is 2^-3 + 2^-4 + 2^-6, which equals 0.203125, or 13/64.
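To double-check my hand arithmetic, I sketched my steps in Python (note the assumption baked in: I'm treating the number as normalized, with an implicit leading 1 on the mantissa):

```python
from fractions import Fraction

bits = "0000101"               # sign | 3 exponent bits | 3 mantissa bits
sign = int(bits[0])            # 0
exp_bits = int(bits[1:4], 2)   # 0
man_bits = bits[4:7]           # "101"

bias = 2**(3 - 1) - 1          # 3

# My assumption: always normalized, so an implicit leading 1
mantissa = Fraction(1)
for i, b in enumerate(man_bits, start=1):
    mantissa += int(b) * Fraction(1, 2**i)   # 1.101 (base 2) = 13/8

value = (-1)**sign * mantissa * Fraction(2)**(exp_bits - bias)
print(value)   # 13/64, matching my hand calculation
```

So the code agrees with my by-hand result, which makes me think the mistake is in my interpretation of the format rather than the arithmetic.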
However, 13/64 is not the correct answer; the auto-grader doesn't accept it. If my answer is wrong, I don't understand why, and I'm hoping someone can point me in the right direction.
By pure luck I guessed 5/32 as the answer and got it correct; I have no idea why that's the case.