Question: What is the highest value that can be accurately represented to one decimal place by an IEEE-754 32-bit floating-point number?
Background: I've found this question, which asks: "Which is the first integer that an IEEE 754 float is incapable of representing exactly?"
...and that all makes sense, but I'm not sure how to translate the method given there to my question.
My application: I'm writing a totaliser function that accumulates weights to one decimal place, storing the running total in a 32-bit float. At some point, if it is never reset, this totaliser will begin to lose accuracy. I want to determine that point so I can either alert the user that the totaliser is no longer accurate, or reset it automatically.
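To make the question concrete, here is a minimal C sketch of how I imagine probing it empirically. The test condition is my own assumption: a total can no longer be trusted to one decimal place once the worst-case rounding error, which is half the gap between adjacent floats at that magnitude, exceeds 0.05.

```c
#include <math.h>
#include <stdio.h>

/* For each power of two, report the gap (ULP) between adjacent 32-bit
 * floats at that magnitude. A stored total can be off by up to half
 * that gap, so once gap/2 exceeds 0.05 the total can no longer be
 * recovered to one decimal place. */
int main(void) {
    for (int e = 16; e <= 22; e++) {
        float x = ldexpf(1.0f, e);               /* x = 2^e */
        float gap = nextafterf(x, INFINITY) - x; /* ULP at this magnitude */
        printf("2^%d = %9.1f  gap = %.6f  -> %s\n", e, x, gap,
               gap * 0.5f <= 0.05f ? "0.1 resolvable" : "0.1 lost");
    }
    return 0;
}
```

Since the gap doubles at every power of two, I'd expect the cutoff to land at some 2^n, but I'd like the exact value and the reasoning behind it.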