I'm looking for a (hopefully simple) way to find the least significant digit in a number (float or integer).
For example:
101 --> 1
101.2 --> 0.2
1.1003 --> 0.0003
Cases like 10, 100, 1000, etc. are slightly less relevant, though I'd be interested regardless.
I don't particularly care about the format, so if it makes more sense to express the result in a more practical manner, feel free.
For example:
101.2 --> 0.1 or 10^-1 (or even just -1 for the relevant power/exponent) is fine.
For additional context, what I am trying to do:
Let's say I have the number 31.6; I want to be able to turn that into 31.5 (decrease the least significant digit by 1). Let's say I have the number 16; I want to turn that into 15 (same idea). If I have the number 100, I still want to turn that into 99, but I can probably figure out the case handling there (without specifying 100.0, I think the 1 is the least significant digit, and decreasing it by 1 gives you 0, which is not what I want). This in turn relates to manually manipulating class breaks in data ranges (going off topic though).
Showing my homework: This looks promising (the leastSigDigit part), but it returns the digit without any (mathematical) reference to the power of the digit (and I'm hoping to avoid lots of string conversions too).
This was what came up when I searched Stack Overflow, but it doesn't seem helpful (if I'm using the wrong terminology/search phrases, my apologies in advance).
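For what it's worth, here is a sketch of the direction I'm considering, using Python's `decimal` module (the helper names are just mine, and it does cheat with one `str()` round-trip, which I was hoping to avoid): `Decimal.as_tuple().exponent` appears to give the power of ten of the least significant digit directly.

```python
from decimal import Decimal

def lsd_exponent(x):
    """Power of ten of the least significant digit, e.g. 101.2 -> -1."""
    # Going through str() uses the number's decimal digits as written,
    # sidestepping binary-float noise like 101.2 -> 101.19999...
    return Decimal(str(x)).as_tuple().exponent

def least_sig_digit(x):
    """Value of the least significant digit, e.g. 1.1003 -> Decimal('0.0003')."""
    sign, digits, exp = Decimal(str(x)).as_tuple()
    return digits[-1] * Decimal(10) ** exp

def decrement_lsd(x):
    """Decrease the least significant digit by one unit, e.g. 31.6 -> 31.5."""
    d = Decimal(str(x))
    return d - Decimal(10) ** d.as_tuple().exponent
```

Conveniently, `decrement_lsd(100)` gives 99 here, because `Decimal('100')` has exponent 0 (the trailing zeros count as digits), so the troublesome case above may just work. I'm not sure this is the cleanest way, hence the question.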