6

Does anybody know of a robust routine/algorithm (preferably in scipy/python) to locate "all" the local minima of a scalar real function of N variables in a given ("rectangular") region of N-dimensional vector space?

The constrained and unconstrained minimization algorithms in scipy all return only a single minimum (global or local).

Saullo G. P. Castro
    A function can have an infinite number of local minima in a bounded interval. For example, `f(x) = sin(1/x)` for `0 < x < pi`. – unutbu Nov 21 '12 at 22:46
  • obviously... my bad. Sorry for asking. – Mathias Vanwolleghem Nov 22 '12 at 07:40
  • This is a good question. I just think it would be challenging for an algorithm to generate them all -- even for smooth functions. – unutbu Nov 22 '12 at 10:54
  • I guess that one way to do it is to count the number of zeroes of the gradient and then subsequently cut the region into smaller pieces until each piece contains either a single zero of the gradient or none. I know that for scalar functions there is a contour-integral theorem that allows counting the difference between zeros and poles inside the contour, but I forgot its name. – Mathias Vanwolleghem Nov 22 '12 at 12:05
  • The [Argument principle](http://en.wikipedia.org/wiki/Argument_principle). – unutbu Nov 22 '12 at 12:33
  • Interval arithmetic: http://openopt.org/interalg (but does not work for completely arbitrary functions) – pv. Nov 22 '12 at 18:42
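
The subdivision idea in the comments can be sketched in one dimension (the sample function, interval, and grid resolution below are made-up choices for illustration): sample the derivative on a grid, and wherever it changes sign from negative to positive, bracket and refine a local minimum with `scipy.optimize.brentq`.

```python
import numpy as np
from scipy.optimize import brentq

def f(x):
    return np.sin(3 * x) + 0.1 * x**2  # illustrative function with several minima

def df(x):
    return 3 * np.cos(3 * x) + 0.2 * x  # its derivative

# Sample the derivative on a grid fine enough that no cell holds two critical points
xs = np.linspace(-5, 5, 1000)
minima = []
for a, b in zip(xs[:-1], xs[1:]):
    da, db = df(a), df(b)
    if da < 0 < db:  # derivative crosses - to +: this cell brackets a minimum
        minima.append(brentq(df, a, b))

print(minima)
```

This only works when the grid is fine enough that each cell contains at most one critical point, which is exactly the refinement question the comment raises.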

1 Answer


Scipy's `basinhopping` has a `callback` argument that can be used to save every minimum it finds.

For instance:

from scipy.optimize import basinhopping

all_minima = []

def save_minima(x, f, accepted):
    # Called after each local minimization with the candidate minimum x,
    # its function value f, and whether the hop was accepted.
    all_minima.append(x.copy())

basinhopping(func, x0, callback=save_minima)

Obviously, this is not guaranteed to return all local minima. But it does return all the minima it finds.
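
A complete runnable sketch of this approach (the two-dimensional objective, iteration count, and deduplication tolerance below are made-up choices; any callable works):

```python
import numpy as np
from scipy.optimize import basinhopping

# Illustrative objective with several local minima
def f(x):
    return np.cos(14.5 * x[0] - 0.3) + (x[1] + 0.2) * x[1] + (x[0] + 0.2) * x[0]

all_minima = []

def save_minima(x, fval, accepted):
    # basinhopping calls this after every local minimization;
    # keep only points not already seen (within a tolerance)
    if not any(np.allclose(x, m, atol=1e-4) for m in all_minima):
        all_minima.append(np.copy(x))

x0 = np.array([1.0, 1.0])
basinhopping(f, x0, niter=200, callback=save_minima, seed=0)

for m in all_minima:
    print(m, f(m))
```

The deduplication tolerance is a judgment call: too tight and the same basin is reported several times, too loose and nearby distinct minima are merged.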

Julius