
Consider the following problem:

import numpy
import scipy.optimize


def f(x):
    return (x[0] == 1)*(x[1] + 2)**2 - (x[0] == 0)*(x[1] + 1)**2


kwargs = {
        'method': 'trust-constr',
        'jac': False,
        'bounds': [(0, 1), (0, 1)],
    }


m = scipy.optimize.minimize(f, numpy.array([1, 0]), **kwargs).x
print(m)
# [0.91136811 0.19026955]  <- wrong result

I would like to optimize this function on the space

  • x[0] \in {0,1}
  • x[1] \in [0,1]

Is there any way to specify that x[0] should not be a real value (i.e., any value on the interval [0, 1]), but instead only either 0 or 1?

My current approach would be to perform one optimization per possible value of x[0]. The problem is that this quickly explodes combinatorially when there are multiple categorical variables.
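The per-category approach mentioned above can be sketched as follows: fix x[0] to each category in turn, optimize only over the continuous x[1], and keep the best result. (This is an illustration of the workaround, not a recommendation; as noted, it scales exponentially with the number of categorical variables.)

```python
import numpy as np
import scipy.optimize


def f(x):
    return (x[0] == 1) * (x[1] + 2) ** 2 - (x[0] == 0) * (x[1] + 1) ** 2


# Fix x[0] to each category and optimize only over the continuous x[1].
results = []
for x0 in (0, 1):
    res = scipy.optimize.minimize(
        lambda x1, x0=x0: f([x0, x1[0]]),  # bind x0 via default arg
        np.array([0.5]),
        method="trust-constr",
        bounds=[(0.0, 1.0)],
    )
    results.append((res.fun, x0, res.x[0]))

# Pick the category whose inner optimization achieved the lowest objective.
best = min(results)
print(best)  # (objective, x[0], x[1]) for the best category
```

For this objective, fixing x[0] = 0 reduces f to -(x[1] + 1)**2, whose minimum over [0, 1] is at x[1] = 1, so the combined optimum is x = (0, 1).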

Jorge Leitao

  • Maybe you can have a look at the `hyperopt` package, but please check whether it meets your needs regarding precision and performance. I think it was created mainly for hyperparameter optimization in machine learning, where precision probably isn't the biggest issue. In hyperopt you can define some values to be discrete (or even categorical) while others are continuous. – jottbe Aug 28 '19 at 07:17
  • The general approach would be mixed-integer nonlinear programming (e.g. Couenne). But whether this is feasible depends on many details. – sascha Sep 01 '19 at 08:55

1 Answer


Is there any way to specify that x[0] should not be a real value (i.e., any value on the interval [0, 1]), but instead only either 0 or 1?

wrapdisc is a thin-wrapper package that lets you optimize over categorical, integer, and float variables with various scipy.optimize optimizers. It encodes each such variable into floats. There is a usage example in its readme. With it, you don't have to adapt your objective function or perform one optimization per category combination.
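The underlying encoding idea can be sketched without the package (this is a simplified illustration of float encoding, not wrapdisc's actual API): relax the categorical slot to a float in [0, 1] and round it back to the nearest category inside the objective. Since the rounding introduces a step discontinuity, a gradient-free global optimizer such as `scipy.optimize.differential_evolution` handles it better than `trust-constr`:

```python
import scipy.optimize


def f(x):
    return (x[0] == 1) * (x[1] + 2) ** 2 - (x[0] == 0) * (x[1] + 1) ** 2


def encoded_f(z):
    # Round the relaxed first coordinate back to the nearest category in {0, 1}.
    return f([int(round(z[0])), z[1]])


# Gradient-free global optimization over the relaxed, all-float search space.
res = scipy.optimize.differential_evolution(
    encoded_f, bounds=[(0, 1), (0, 1)], seed=0
)
print(int(round(res.x[0])), res.x[1], res.fun)
```

For this problem the optimizer settles on the category x[0] = 0 with x[1] at its upper bound, matching the per-category search.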

Asclepius