How many bits does a fixed-point number need to be at least as precise as a floating-point number? If I wanted to carry out calculations in fixed-point arithmetic instead of floating-point, how many bits would I need for the results to be no less precise?
A single-precision (32-bit) float can represent normal numbers as small as 2^-126 and as large as just under 2^128. Does that mean the fixed-point number has to be in at least a 128.128 format (128 bits for the integer part, 128 bits for the fractional part)?
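For reference, here is a small C sketch of the single-precision range I mean, using the standard `<float.h>` limits; the bit counts in the comments are only my own back-of-the-envelope estimates, not something I have verified:

```c
/* Sketch: print the single-precision range and the fixed-point widths
   it would naively seem to imply (my own estimates, see comments). */
#include <stdio.h>
#include <float.h>

int main(void)
{
    printf("FLT_MIN      = %e  (2^%d, smallest normal)\n", FLT_MIN, FLT_MIN_EXP - 1);   /* 2^-126 */
    printf("FLT_TRUE_MIN = %e  (2^-149, smallest subnormal)\n", FLT_TRUE_MIN);
    printf("FLT_MAX      = %e  (just under 2^%d)\n", FLT_MAX, FLT_MAX_EXP);             /* ~2^128 */

    /* Naive width estimate: enough integer bits to hold FLT_MAX exactly,
       enough fraction bits to resolve the smallest (sub)normal step. */
    printf("integer bits  >= %d\n", FLT_MAX_EXP);                               /* 128 */
    printf("fraction bits >= %d\n", -(FLT_MIN_EXP - 1) + (FLT_MANT_DIG - 1));   /* 149 */
    return 0;
}
```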
I understand that a single-precision float only carries about 7 significant decimal digits for any one value; I'm asking about covering all possible values.
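To illustrate what I mean by the per-value precision (as opposed to the full range), this is the kind of effect I'm setting aside:

```c
/* Above 2^24, single precision can no longer represent every integer,
   which is the "~7 decimal digits at a time" limit I'm referring to. */
#include <stdio.h>

int main(void)
{
    float a = 16777216.0f;   /* 2^24 */
    float b = 16777217.0f;   /* 2^24 + 1 rounds back to 2^24 */
    printf("%.1f\n%.1f\n", a, b);   /* prints 16777216.0 twice */
    printf("a == b: %d\n", a == b); /* 1 */
    return 0;
}
```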
And what about double precision (64-bit floats)? Does it really take a 1024.1024 format to be equally precise?
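The same kind of sketch for double precision, again with my own naive width estimates in the comments:

```c
/* Sketch: double-precision range and the fixed-point widths it would
   naively seem to imply (my own estimates). */
#include <stdio.h>
#include <float.h>

int main(void)
{
    printf("DBL_MIN      = %e  (2^%d, smallest normal)\n", DBL_MIN, DBL_MIN_EXP - 1);   /* 2^-1022 */
    printf("DBL_MAX      = %e  (just under 2^%d)\n", DBL_MAX, DBL_MAX_EXP);             /* ~2^1024 */
    printf("integer bits  >= %d\n", DBL_MAX_EXP);                               /* 1024 */
    printf("fraction bits >= %d\n", -(DBL_MIN_EXP - 1) + (DBL_MANT_DIG - 1));   /* 1074 */
    return 0;
}
```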