int length = (int) floor( log10( (float) number ) ) + 1;
My question is essentially a math question: WHY does taking the log10() of a number, flooring the result, casting it to an int, and adding 1 correctly compute the number of decimal digits in number?
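For concreteness, here is a minimal sketch that runs the formula on a few sample values and cross-checks it against the length of the printed decimal string (the sample values, the double cast, and the snprintf comparison are my own illustration, not part of the original code):

```c
#include <math.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Sample values chosen only for illustration. */
    int samples[] = { 7, 42, 999, 1000, 1234 };
    size_t count = sizeof samples / sizeof samples[0];

    for (size_t i = 0; i < count; i++) {
        int n = samples[i];

        /* The formula in question (double here just to avoid float rounding). */
        int length = (int) floor( log10( (double) n ) ) + 1;

        /* Cross-check against the length of the decimal representation. */
        char buf[32];
        snprintf(buf, sizeof buf, "%d", n);

        printf("n = %4d  log10(n) = %.4f  formula: %d  strlen: %zu\n",
               n, log10( (double) n ), length, strlen(buf));
    }
    return 0;  /* compile with -lm on most systems */
}
```

For example, 999 gives log10(999) ≈ 2.9996, which floors to 2, so the formula yields 3 digits; 1000 gives exactly 3, yielding 4 digits.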
I really want to know the deep mathematical explanation please!
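In symbols, if I'm reading the formula right, the claim I want explained is:

$$\lfloor \log_{10} n \rfloor + 1 = d \qquad \text{for every integer } n \ge 1 \text{ with exactly } d \text{ decimal digits.}$$

I suspect it has something to do with the fact that a d-digit number n satisfies $10^{d-1} \le n < 10^d$, but I'd like to see the full argument.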