I've been writing a decimal-to-single-precision IEEE 754 floating-point converter, and I've found a few discrepancies with numbers like 0.1 and 0.2.
Let's take 0.1. The first step is converting it into a plain binary fraction (by repeatedly multiplying the fractional part by 2 and taking the integer part). This gives the well-known recurring binary pattern (00110011...).
The final binary representation of 0.1 (to 26 bits) is 0.00 0110 0110 0110 0110 0110 0110.
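For reference, here is a minimal sketch of that multiply-by-2 loop (in Python; `frac_to_bits` is just a name I picked for illustration, and I use an exact `Fraction` so the input isn't an already-rounded float):

```python
from fractions import Fraction

def frac_to_bits(x, n_bits=26):
    """Repeatedly multiply the fractional part by 2 and peel off
    the integer part; collect the bits as a string."""
    bits = []
    for _ in range(n_bits):
        x *= 2
        bit = int(x)          # integer part is the next binary digit
        bits.append(str(bit))
        x -= bit              # keep only the fractional part
    return "".join(bits)

# Exact rational 1/10 stands in for the decimal 0.1.
print(frac_to_bits(Fraction(1, 10)))   # 00011001100110011001100110
```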
To fit it into a 32-bit floating-point number, the next step is to normalize it by shifting the binary point 4 places to the right, dropping the implicit leading 1, and truncating the fraction to 23 bits. That leaves me with 10011001100110011001100, but most programming languages give me 10011001100110011001101 (with the last bit 1 instead of 0). What am I doing wrong here?
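For reference, this is one way to see the bit pattern a language actually stores for 0.1 as a single-precision float (a Python sketch using the standard struct module; the expected output is shown as a comment):

```python
import struct

# Round-trip 0.1 through an IEEE 754 single-precision float and
# pull out the sign, exponent, and mantissa fields.
bits = struct.unpack(">I", struct.pack(">f", 0.1))[0]
sign     = bits >> 31
exponent = (bits >> 23) & 0xFF   # biased exponent (bias 127)
mantissa = bits & 0x7FFFFF       # 23 stored fraction bits

print(f"sign={sign} exponent={exponent} mantissa={mantissa:023b}")
# sign=0 exponent=123 mantissa=10011001100110011001101
```

The exponent field of 123 (bias 127) matches the 2^-4 from the four-place shift; it's the last mantissa bit that differs from my hand conversion.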