Background: When doing a binary search on an array, we initialise an upper bound h and a lower bound l, then test the midpoint (the arithmetic mean) m := (h+l)/2. We then move either the upper or the lower bound and iterate to convergence.
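For concreteness, here's a minimal sketch of that arithmetic-mean bisection applied to a continuous variable (the function name and the 1e-8 tolerance are my choices, matching the setup further down):

function arithmetic_search(target, high, low; tol=1e-8)
    current = (high + low) / 2       # arithmetic mean of the bounds
    iterations = 0
    while abs(current - target) >= tol
        if current < target
            low = current            # target lies in the upper half
        else
            high = current           # target lies in the lower half
        end
        current = (high + low) / 2
        iterations += 1
    end
    return iterations
end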
Question: We can use a similar search strategy on the (unbounded) real numbers (well, their floating-point approximation), terminating the search once we're within a convergence tolerance. If the search range lies in the positive reals (0 < l < h), we could instead take the geometric mean m := (hl)^0.5. My question is: when is the geometric mean faster, i.e. when does it take fewer iterations to converge?
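One way to see the connection: the geometric mean of the bounds is just the arithmetic mean taken in log space, so geometric bisection is ordinary bisection on log(l) and log(h). A quick check (equality holds up to floating-point rounding):

l, h = 1.0, 1e10
# geometric mean of (l, h) equals the arithmetic mean in log space
println(sqrt(l * h) ≈ exp((log(l) + log(h)) / 2))   # prints true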
Motivation: This cropped up when I tried a binary search for a continuous variable where the initial bounds were very wide and it took many iterations to converge. I could use an exponential search before a binary search, but I got curious about this idea.
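For reference, the exponential-search idea would look roughly like this hypothetical helper (assuming the target lies above low): double the upper bound until it brackets the target, then bisect on the much narrower interval.

function bracket(target, low=1.0)
    high = 2.0 * low
    while high < target
        high *= 2.0          # grow the bound geometrically until it exceeds the target
    end
    return high / 2, high    # a bracketing interval for the subsequent bisection
end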
What I tried: To get a feel for this, I picked a million random (floating-point) targets between 2 and an initial h. I kept the initial l = 1 fixed, and the search terminated once the estimate was within a tolerance of 10^-8 of the target. I varied h between 10^1 and 10^50. The arithmetic mean needed fewer iterations in about 60-70% of cases.
But the geometric mean is skewed: it always lies at or below the arithmetic mean. So when I restricted the targets to be less than the geometric mean of the initial bounds, sqrt(lh) (still keeping l = 1), the geometric mean was almost always faster (>99%) for large h > 10^10. So it seems that both h and the ratio target/h could be involved in the number of iterations (a rough version of this experiment is sketched after the code below).
Code Example: Here's some simple Julia code to demonstrate:
function geometric_search(target, high, low; tol=1e-8)
    current = sqrt(high * low)       # geometric mean of the bounds
    iterations = 0
    while abs(current - target) >= tol
        if current < target
            low = current            # target lies above the current estimate
        else
            high = current           # target lies below the current estimate
        end
        current = sqrt(high * low)
        iterations += 1
    end
    return iterations
end
target = 3.0
low = 1.0
high = 1e10
println(geometric_search(target, high, low))
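And a rough harness along the lines of the experiment above, reusing geometric_search and the arithmetic_search sketch from the Background. The sample size is illustrative (not the million used in my actual runs), and I keep high at 1e6 here so the absolute 1e-8 tolerance stays meaningful relative to the float spacing at that magnitude:

function compare(n; low=1.0, high=1e6)
    wins = 0
    for _ in 1:n
        target = 2 + rand() * (high - 2)   # uniform target in (2, high)
        # count the runs where the geometric mean needs strictly fewer iterations
        wins += geometric_search(target, high, low) <
                arithmetic_search(target, high, low)
    end
    return 100 * wins / n
end
println("geometric mean wins in ", compare(10_000), "% of trials")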