1) As far as I know, the computer converts decimal numbers to binary and performs the operations on the binary representation. For example, when we add decimal numbers like "12" and "37" in the computer's calculator, they are first converted to binary and then added in binary. Is that correct?
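Here is a minimal C sketch of what I mean by "converted to binary and added in binary" (my own illustration, not the calculator's actual code):

```c
#include <stdio.h>

int main(void) {
    /* 12 is stored as the bit pattern 00001100 and 37 as 00100101;
       the CPU adds those bit patterns directly and gets 00110001 = 49. */
    int a = 12;
    int b = 37;
    int sum = a + b;
    printf("%d + %d = %d\n", a, b, sum);  /* 12 + 37 = 49 */
    return 0;
}
```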
2) If my first question is correct, how is the binary expression below interpreted by the CPU in single precision? (How is the result shown to us? How does the computer convert this floating-point representation back to decimal, and how can we convert it back to decimal by hand?)
0 | 01111110 | 01100110011001100110100
I mean, how do we know the result is 0.70000005 in single precision for this expression: Floating Point Arithmetic
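To make the question concrete, here is how I understand the decoding could be done in C (a sketch of my own, reinterpreting the 32 bits with memcpy; the variable names are mine):

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    /* The bit pattern from my question:
       sign = 0, exponent = 01111110 (126), fraction = 01100110011001100110100 */
    uint32_t sign     = 0u;
    uint32_t exponent = 0x7Eu;      /* 0b01111110 = 126 */
    uint32_t fraction = 0x333334u;  /* 0b01100110011001100110100 */

    uint32_t bits = (sign << 31) | (exponent << 23) | fraction;

    float f;
    memcpy(&f, &bits, sizeof f);    /* reinterpret the 32 bits as a float */

    /* By hand: value = (1 + fraction / 2^23) * 2^(126 - 127)
                      = 1.400000095367431640625 * 0.5
                      = 0.7000000476837158203125 */
    printf("bits  = 0x%08X\n", bits);
    printf("value = %.8f\n", f);    /* 0.70000005 */
    printf("value = %.17g\n", f);   /* 0.70000004768371582 */
    return 0;
}
```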
3) As far as I know, the computer performs the same operations as in the video I shared when we add 0.1 and 0.6 in decimal in the computer's calculator. However, the calculator hides the trailing 0000005 part (which comes from the binary representation) from us, as in the video's result, and shows us the decimal number "0.7" as the result. How does it hide or delete that part? And why didn't it show us a number like 0.71 by rounding the
0 | 01111110 | 01100110011001100110100 floating point number?
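Here is a small C sketch of what I suspect the display step does (my own guess, not the calculator's actual code): the stored value is 0.70000004768..., and printing it with only a few significant digits rounds it back to 0.7; it could never become 0.71, because the error is far smaller than 0.01.

```c
#include <stdio.h>

int main(void) {
    float sum = 0.1f + 0.6f;   /* stored as 0.70000004768371582... */

    /* A calculator-style display formats with few significant digits,
       so the tiny error rounds away: */
    printf("%g\n",   sum);     /* 0.7  (6 significant digits, trailing zeros dropped) */
    printf("%.2f\n", sum);     /* 0.70, not 0.71: the dropped part is only ~0.0000000477 */
    printf("%.8f\n", sum);     /* 0.70000005, enough digits to expose the error */
    return 0;
}
```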