
I need to approximate an unknown value: a bound that separates divergent values from those that converge.

I'm trying to do so like this:

# don't worry about the value of i; it's one of many bounds checks
bound = 1.0
for j in range(niters):
    if does_converge(i, bound):
        bound += bound / 2.0
    else:
        bound /= 2.0

I've been googling for better approximation algorithms, but they all seem to assume I know something about the function, which I don't. All I have is a black box that tells me whether or not a value diverges.

Any ideas would be appreciated!

edit: I can't say for certain, but I would be fine assuming the function is continuous, and the boundary of convergence is most likely between 0 and 1.

  • Without any assumptions this can't be solved. [Intro](https://en.wikipedia.org/wiki/Regularization_(mathematics)). – sascha Sep 17 '17 at 21:53
  • I'm fine with some guesses. The function is probably continuous and "well-behaved". – opticaliqlusion Sep 17 '17 at 21:54
  • Your question is not clear to me. Where does this code fail? What would you expect it to do? What is it doing now? How does the input look? In other words: help us help you ;) – Nir Alfasi Sep 17 '17 at 21:55
  • Set an objective function and use optimization; start with scipy: https://docs.scipy.org/doc/scipy/reference/optimize.html – pyCthon Sep 17 '17 at 21:56
  • @sascha maybe you're right but I don't see anything in the question that indicates any relationship to regularization. – Nir Alfasi Sep 17 '17 at 21:56
  • @alfasin Every assumption about the function is some form of regularization. Without assumptions, there is nothing better than trying random values from (-inf, inf). I just mentioned it to make clear that the assumptions he mentioned are usually important, especially in gradient-free / black-box optimization. – sascha Sep 17 '17 at 21:57
  • @pyCthon That's broad advice, as most algorithms in scipy are unusable with potentially noisy black-box / derivative-free functions. You should be more specific. – sascha Sep 17 '17 at 21:59
  • I updated the post to reflect some 'safe assumptions' – opticaliqlusion Sep 17 '17 at 22:03
  • Noisy? Clear boundary? How long does the evaluation take? Is there anything else than a yes/no answer? – sascha Sep 17 '17 at 22:04
  • There is a clear boundary, and only yes/no. The evaluation is difficult because I don't know how close I am; I can only try to go above, then below, always trying to tighten the gap. – opticaliqlusion Sep 17 '17 at 22:17

1 Answer


With the given information, there is nothing better available than some form of binary-search.

Edit: See edit/remark at the end of this answer for a better solution (although without a rigorous theoretic explanation)!

This can be implemented using scipy's minimize_scalar. It is important to use method='golden'!

Method golden uses the golden-section search technique. It uses an analog of the bisection method to decrease the bracketed interval.

The problem is the absence of any real-valued answer: a yes/no response alone does not allow forming any kind of gradient information or surrogate model.

I'm assuming:

  • we are looking for the smallest value at which the black-box returns 1
  • the black-box is deterministic

Idea: build some wrapper-function, which has a minimum at the smallest value for which 1 is returned.

As x should be in [0, 1] and we are trying to minimize x, we can formulate the wrapper function as x + 1 - black_box(x). Every wrapper value for an answer of 0 is then >= every wrapper value for an answer of 1 (probably some safeguard is needed at the bound, e.g. x + (1 - eps) - black_box(x) with eps very small; eps might need to be chosen with xtol in mind).
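A minimal sketch of that safeguarded variant (the eps value here is only an assumption and should be tuned against xtol; black_box is the deterministic oracle defined in the full code below):

EPS = 1e-9  # assumed safeguard value; choose with xtol in mind

def wrapper_safe(x):
    # implements x + (1 - eps) - black_box(x) from the text above
    return x + (1.0 - EPS) - black_box(x)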

Code:

from scipy import optimize

SECRET_VAL = 0.7  # the unknown boundary hidden inside the black box

def black_box(x):
    # deterministic yes/no oracle: 1.0 above the boundary, 0.0 at or below it
    if x > SECRET_VAL:
        return 1.
    else:
        return 0.

def wrapper(x):
    # minimal at the smallest x for which black_box returns 1
    return x + 1 - black_box(x)

res = optimize.minimize_scalar(wrapper, bracket=(0,1), method='golden')

print(res)

Output:

     fun: 0.7000000042155881
    nfev: 44
     nit: 39
 success: True
       x: 0.7000000042155881

Or with SECRET_VAL = 0.04:

     fun: 0.04000000033008555
    nfev: 50
     nit: 45
 success: True
       x: 0.040000000330085564

Or if you know what kind of accuracy you need (original secret 0.7):

res = optimize.minimize_scalar(wrapper, bracket=(0,1), method='golden',
                            options={'xtol': 1e-2})

Output:

     fun: 0.7000733152965655
    nfev: 16                 !!!!!
     nit: 11
 success: True
       x: 0.7000733152965655

Remark:

It might be better to write a customized binary-search-based solution here (not 100% sure), but one then needs to be careful, given that assumptions like unimodality may be missing.
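For reference, a minimal sketch of such a customized binary search on the raw yes/no oracle (assuming, as above, a deterministic black_box that returns 0 below a single boundary and 1 above it; find_boundary is a hypothetical helper, not part of the original answer):

def find_boundary(oracle, lo=0.0, hi=1.0, xtol=1e-2):
    # invariant: oracle(lo) == 0 and oracle(hi) == 1
    while hi - lo > xtol:
        mid = (lo + hi) / 2.0
        if oracle(mid):
            hi = mid  # boundary is at or below mid
        else:
            lo = mid  # boundary is above mid
    return (lo + hi) / 2.0

print(find_boundary(black_box))  # ~0.7 with SECRET_VAL = 0.7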

Edit: Okay... I finally managed to transform this minimization problem into a root-finding problem, which can be solved more efficiently!

Warning: it's clear that this wrapper never returns the value 0.0 (there is no exact root to find)!

But the bisection method is about a zero crossing within the interval ([wiki](https://en.wikipedia.org/wiki/Bisection_method)).

So here it finds two points a, b where the sign of the function changes, and interprets this as a root (given some tolerance!).

This reasoning is not as rigorous as that for the former method (not much analysis is given here; the argument is easier to make for the pure minimization approach, given scipy's documentation).

def wrapper_bisect(x):
    # maps the oracle's {0, 1} output to {+1, -1}: a sign change at the boundary
    return 1 - 2*black_box(x)

res = optimize.bisect(wrapper_bisect, 0, 1, xtol=1e-2, full_output=True)
print(res)

Output:

(0.6953125,       converged: True
           flag: 'converged'
 function_calls: 9
     iterations: 7
           root: 0.6953125)

Given the assumptions above (and only these), this should be the theoretically optimal algorithm (we reduced the number of function-evaluations from 16 to 9; the optimization-objective is worse, but within the bounds)!
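As a quick sanity check on those numbers (a back-of-the-envelope estimate, not part of the original answer): bisection needs about ceil(log2((b - a) / xtol)) iterations to shrink the bracket below xtol:

import math

print(math.ceil(math.log2((1 - 0) / 1e-2)))  # 7, matching the iteration count above
print(math.ceil(math.log2((1 - 0) / 1e-4)))  # 14, matching the last test below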

One last test:

secret: 0.9813; xtol: 1e-4:

Golden:

    fun: 0.9813254238281632
    nfev: 25
     nit: 20
 success: True
       x: 0.9813254238281632

Bisection:

(0.98126220703125,       converged: True
           flag: 'converged'
 function_calls: 16
     iterations: 14
           root: 0.98126220703125)
  • This looks great - I'm having trouble installing Scipy on Python 3.6 Windows 10 x64. I'll let you know when / if I can test it out without porting my project to linux. – opticaliqlusion Sep 17 '17 at 23:22
  • There are tons of scipy + windows questions here, and I recommend the [anaconda-distribution](https://www.anaconda.com/download/) (which is one of the few painless methods). – sascha Sep 17 '17 at 23:25
  • I'm toying around with permutations of this, but I always seem to get: RuntimeError: Too many iterations. – opticaliqlusion Sep 18 '17 at 00:11
  • That's a bad sign. Maybe one of our assumptions is wrong! Sadly I can't test this. Increase the number of iterations or decrease xtol (and use the better version from my edit). – sascha Sep 18 '17 at 00:12
  • Yep, this works nicely. Had to hammer out a new black-box function, but it all works. Thanks! – opticaliqlusion Sep 18 '17 at 01:10