I have a general question about floating-point arithmetic.
Recently I became quite interested in how computations are actually carried out in programs, so I started working through exercises. I would like you to explain the one that especially confuses me:
Compute machine epsilon (not only as a decimal value, but also as the number of bits of the binary exponent). Does machine epsilon depend on the number of bits of the mantissa or on the number of bits of the exponent?
Here is my calculation:
def exponent():
    expon = 0
    for number in range(1000):
        # remember the largest n for which 1.0 + 2**(-n) is still distinguishable from 1.0
        if 1.0 + 2.0 ** (-number) > 1.0:
            expon = number
    return expon

print(exponent())            # the largest such n
print(2.0 ** (-exponent()))  # prints epsilon as a decimal value
Output:
52
2.220446049250313e-16
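
For comparison, I also printed what Python itself reports about its float type (just a sanity check; sys.float_info is from the standard library):

import sys

# Values Python exposes for its float type (an IEEE 754 double on my machine)
print(sys.float_info.epsilon)   # 2.220446049250313e-16, the same as 2.0**-52
print(sys.float_info.mant_dig)  # 53 significand bits (52 explicitly stored + 1 implicit)
print(sys.float_info.max_exp)   # 1024, which corresponds to an 11-bit exponent field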
Is this correct? I have trouble interpreting the part of the exercise about the number of bits of the binary exponent. Do I have to determine whether it is 8 or 11 bits? How can I do that? Is it a correct assumption that epsilon depends on the number of bits of the mantissa, since those determine the precision of a float?
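
While thinking about that last question, I also tried to look at the raw bits of a double directly (a rough sketch using the standard struct module; the slicing assumes the usual IEEE 754 double layout of 1 sign bit, 11 exponent bits and 52 mantissa bits):

import struct

# Pack 1.0 as a big-endian IEEE 754 double and print its 64 bits
bits = ''.join(f'{byte:08b}' for byte in struct.pack('>d', 1.0))
print(bits[0], bits[1:12], bits[12:])   # sign bit, exponent field, mantissa field
print(len(bits[1:12]), len(bits[12:]))  # 11 and 52

Is this the right way to relate the 52 from my loop to the mantissa, and the 11 bits to the exponent?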