
I'm looking to minimize a function with potentially random outputs. Normally I would use something from the `scipy.optimize` library, but I'm not sure whether it will still work when the outputs are not deterministic.

Here's a minimal example of the problem I'm working with:

    import random

    def myfunction(a):
        # Additive Gaussian noise: repeated calls with the same `a`
        # return different values.
        noise = random.gauss(0, 1)
        return abs(a + noise)

Any thoughts on how to algorithmically minimize its expected (or average) value?

A numerical approximation would be fine, as long as it can get "relatively" close to the actual value.

We already reduce the noise by averaging over many runs, but the function is computationally expensive and we don't want to do more averaging if we can help it.
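For illustration, here is a minimal sketch of that averaging approach: wrap the noisy function so a standard deterministic optimizer sees a smoother objective. The sample count `n_samples`, the starting point, and the choice of Nelder-Mead are assumptions for the sketch, not part of the original setup:

    import random
    from scipy import optimize

    def myfunction(a):
        noise = random.gauss(0, 1)
        return abs(a + noise)

    def averaged(x, n_samples=50):
        # Monte Carlo estimate of the expected value at one point;
        # the noise variance shrinks roughly as 1/n_samples.
        a = float(x[0])  # scipy passes a length-1 array
        return sum(myfunction(a) for _ in range(n_samples)) / n_samples

    # Nelder-Mead uses no gradients, so it tolerates the residual
    # noise better than derivative-based methods would.
    result = optimize.minimize(averaged, x0=[2.0], method="Nelder-Mead")
    print(result.x, result.fun)

Even with averaging, Nelder-Mead can stall on the residual noise, so looser convergence tolerances may help.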

  • I'm not sure if it makes sense to minimize a function whose output is random - how would you even **define** the minimum of such a function? If the amount of noise is dependent on the value of the independent variable, even a flat function such as `f(x)=0` would potentially take a minimal value in different places, depending on the distribution of the noise. Why can't you simply minimize the averaged sampling you already got? – loopbackbee May 15 '14 at 16:55
  • Also, can't you perform regression on the function, and minimize non-numerically? – loopbackbee May 15 '14 at 16:56 (a sketch of this regression idea appears after the comments)
  • Do you need a correct value of the minimum or approximate? In the latter case you could do simulated annealing, for example. – Sleepyhead May 15 '14 at 17:46
  • The minimum f(x) is realization dependent if f(x) is random. Perhaps you want the minimum expected value of f(x)? – pjs May 15 '14 at 18:39
  • Yeah. I want the minimum expected value. I'll update the main post. Ideally I'd like to minimize numerically; I just need a pretty good estimate. – Lyjat May 15 '14 at 19:20
  • @goncalopp I'm pretty sure the noise in question does not depend on the value of the function, or at least not majorly so. Averaging over a couple runs isn't quite good enough. It reduces the amount of variance, but doesn't completely get rid of it. I was hoping to use a hybrid approach of averaging + something else? to get a value. – Lyjat May 15 '14 at 19:24
  • Check out [this paper](http://www.informs-sim.org/wsc11papers/359.pdf). The authors proposed a very innovative approach to stochastic root finding which converges amazingly fast. I'm not certain this applies to your problem, but think it might. Be forewarned, there's some serious math, but the algorithm implementation seems straightforward. – pjs May 15 '14 at 20:30
  • Many minimization algorithms require that a function be at least somewhat predictable (some combination of continuous, differentiable, etc.), and will thus have a difficult time minimizing a random-valued (i.e. non-continuous, non-differentiable) function. Even more so if `f(x) != f(x)` for the same value of `x` on different calls... – twalberg May 20 '14 at 15:58
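Picking up loopbackbee's regression suggestion, here is a hedged sketch: sample the noisy function on a grid, fit a smooth surrogate by least squares, and minimize the surrogate analytically. The grid bounds and the quadratic model are assumptions about the function's shape, purely for illustration:

    import random
    import numpy as np

    def myfunction(a):
        noise = random.gauss(0, 1)
        return abs(a + noise)

    # Sample the noisy function over a grid of candidate inputs.
    xs = np.linspace(-5, 5, 200)
    ys = np.array([myfunction(x) for x in xs])

    # Least-squares fit of a low-degree polynomial; the zero-mean
    # noise largely cancels out in the fit.
    c2, c1, c0 = np.polyfit(xs, ys, deg=2)

    # A convex quadratic c2*x**2 + c1*x + c0 is minimized at -c1/(2*c2).
    x_min = -c1 / (2 * c2)
    print(x_min)

This trades optimizer calls for a fixed sampling budget, which may or may not be cheaper than averaging inside the optimizer loop.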

1 Answer


It turns out that for our application, the `scipy.optimize` anneal algorithm provided a good enough estimate of the local minimum.
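Note that `anneal` was deprecated and removed in later SciPy releases; `scipy.optimize.basinhopping` is the usual replacement for this kind of global, gradient-free search. A sketch on the same averaged objective as above (the starting point, step size, and iteration count are illustrative assumptions):

    import random
    from scipy.optimize import basinhopping

    def myfunction(a):
        noise = random.gauss(0, 1)
        return abs(a + noise)

    def averaged(x, n_samples=50):
        a = float(x[0])
        return sum(myfunction(a) for _ in range(n_samples)) / n_samples

    # basinhopping restarts a local minimizer from randomly perturbed
    # points, which helps it escape noise-induced false local minima.
    result = basinhopping(averaged, x0=[2.0], niter=100, stepsize=1.0,
                          minimizer_kwargs={"method": "Nelder-Mead"})
    print(result.x, result.fun)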

For more complex problems, pjs points out that [Waeber, Frazier and Henderson (2011)](http://www.informs-sim.org/wsc11papers/359.pdf) provides a better solution.

Lyjat