For this kind of calculation to be exact, one must either calculate all the divisions and logarithms exactly -- or one can work backwards. Since

-round(log2(x)) == round(log2(1/x))

one of the divisions can be turned around so that the argument satisfies (1/x) >= 1. And because round(y) == floor(y + 0.5), adding one half to the logarithm is the same as multiplying the argument by sqrt(2):

round(log2(x)) == floor(log2(x * sqrt(2))) == binary_log((int)(x * sqrt(2)))
One minor detail here is whether (double)sqrt(2) rounds down or up. If it rounds up, then there might exist one or more values where x * sqrt2 == 2^n + epsilon (after rounding), whereas if it rounded down, we would get 2^n - epsilon. One would give the integer value n, the other n-1. Which is correct?
Naturally that one is correct on whose side of the theoretical mid point 2^n / sqrt(2) the exact value of x falls -- n-1 is correct when

x < 2^n / sqrt(2)

-- square both sides (both are positive)

x^2 < 2^(2*n) / 2

-- multiply by 2

x^2 * 2 < 2^(2*n)
In order for this comparison to be exact, x^2 or pow(x,2) must be exact as well on the boundary -- and it matters what range the original values come from. A similar analysis can and should be done when expanding x = a/b, so that the inexactness of the division can be mitigated at the cost of possible overflow in the multiplication...
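In integer terms the comparison needs no floating point at all: x^2 * 2 is only ever compared against powers of two, so round(log2(a/b)) can be computed exactly with shifts. A sketch under the assumption that a and b fit in 16 bits, so every intermediate fits in 64 bits (round_log2_ratio is a hypothetical name, not from the original code):

```c
/* round(log2(a / b)) for positive integers, integer arithmetic only.
   The answer is n exactly when 2^(2n) <= 2 * (a/b)^2 < 2^(2n+2),
   i.e. b^2 << 2n <= 2*a^2 < b^2 << (2n+2).
   Assumes a, b < 2^16 so that 2*a^2, b^2 and the shifts fit in 64 bits. */
static int round_log2_ratio(unsigned long a, unsigned long b) {
    unsigned long long num = 2ULL * a * a;             /* numerator of 2 * x^2   */
    unsigned long long den = (unsigned long long)b * b; /* denominator            */
    int n = 0;
    while (num < den) { num <<= 2; n--; }      /* x below 2^n: lower the candidate */
    while (num >= 4 * den) { den <<= 2; n++; } /* x at least 2^(n+1/2): raise it   */
    return n;
}
```

For instance round_log2_ratio(1, 30) gives -5 and round_log2_ratio(43445, 30) gives 11, matching the boundaries tabulated in the EDIT below.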
Then again, I wonder how all the other similar applications handle the corner cases, which may not even exist -- those could be brute-force searched, assuming that average and total are small enough integers.
EDIT
Because average is an integer, it makes sense to tabulate the exact integer values that sit on the boundaries of -round(log2(average / total)).
From octave: d = -round(log2((1:1000000)/30.0)); [1 find(d(2:end) ~= d(1:end-1)) + 1]
1 2 3 6 11 22 43 85 170 340 679 1358 2716
5431 10862 21723 43445 86890 173779 347558 695115
All the averages in [1, 2)         -> 5
All the averages in [2, 3)         -> 4
All the averages in [3, 6)         -> 3
...
All the averages in [43445, 86890) -> -11
int a = find_lower_bound(average, table); // linear or binary search
return 5 - a;
No floating point arithmetic needed.
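Putting the edit together -- a sketch with the boundary table baked in for total == 30 (the 30.0 from the octave run), where find_lower_bound is one possible implementation of the helper used above, here a plain binary search:

```c
#include <stddef.h>

/* Boundaries from the octave run: table[i] is the smallest average whose
   -round(log2(average / 30.0)) equals 5 - i. */
static const long table[] = {
    1, 2, 3, 6, 11, 22, 43, 85, 170, 340, 679, 1358, 2716,
    5431, 10862, 21723, 43445, 86890, 173779, 347558, 695115
};

/* Index of the last entry <= average (binary search; average must be >= table[0]). */
static int find_lower_bound(long average, const long *t, size_t len) {
    size_t lo = 0, hi = len - 1;
    while (lo < hi) {
        size_t mid = (lo + hi + 1) / 2;   /* upper middle, so lo always advances */
        if (t[mid] <= average) lo = mid;
        else hi = mid - 1;
    }
    return (int)lo;
}

static int neg_round_log2_avg(long average) {
    int a = find_lower_bound(average, table, sizeof table / sizeof table[0]);
    return 5 - a;
}
```

A different total only changes the table contents and the constant 5, not the lookup itself.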