Recently, I have been thoroughly confused by how floating point is represented. I know from the wiki (https://en.wikipedia.org/wiki/Single-precision_floating-point_format) that 23 bits are used to store the fraction, which means the precision only goes up to about 7 decimal digits, i.e. around 0.0000001. However, the wiki page says the smallest positive normal value it can represent is: 0 00000001 00000000000000000000000 = 2^−126 ≈ 1.175494351 × 10^−38
This seems really weird. If the precision of a float only goes to 7 decimal digits, how can the smallest value be even smaller than that precision? Doesn't that mean any number can be stored to up to 38 decimal digits? In that case, if a number is, say, 1.1234567891011121314, could the floating point representation extend all the way to the last digit? I am really confused. I tried the small experiment below to make the question concrete.
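Here is a quick sketch of what I mean (I am using Python with numpy here purely for illustration, since numpy exposes an explicit float32 type; the question itself is not about any particular language):

```python
import numpy as np

# Store a number with 20 decimal digits in a 32-bit float.
x = np.float32(1.1234567891011121314)
print(f"{x:.20f}")  # only about the first 7 significant digits survive the rounding

# Yet a value near 1e-38 is still representable in the same 32 bits:
tiny = np.float32(1.175494351e-38)
print(tiny)  # prints roughly 1.1754944e-38

# numpy reports the same smallest positive normal float32:
print(np.finfo(np.float32).tiny)  # 1.1754944e-38
```

So both facts seem true at once: only about 7 significant digits of my input survive, yet magnitudes down to about 10^−38 are representable. That is exactly the gap I do not understand.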