I am trying to get a better understanding of floating-point arithmetic. I know machine epsilon (e) is defined as the difference between 1 and the next largest representable number (i.e. the next floating-point number after 1 is 1 + e).
However, what do I get in floating point when I multiply (1 + e) * (1 + e)? Mathematically the result is 1 + 2*e + e^2, but since e < 1, we have e^2 < e, so the e^2 term is too small to be represented exactly in the result. What does the product round to in floating point?
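To make the question concrete, here is a small Python sketch of the experiment I have in mind (assuming IEEE 754 double precision, where `sys.float_info.epsilon` is the e I mean):

```python
import sys

e = sys.float_info.epsilon  # machine epsilon for doubles, 2**-52

product = (1 + e) * (1 + e)  # exact value would be 1 + 2*e + e**2

# Print both values in hex so any rounding is visible bit-for-bit.
print(product.hex())
print((1 + 2 * e).hex())
print(product == 1 + 2 * e)
```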