
I have a function func_x() that I am trying to minimize using scipy.optimize.minimize_scalar().

func_x() also calls another function, func_y(), whose result func_x() uses as part of the final scalar value. I want the optimization to also enforce a constraint on the value of func_y(), such as a minimum or maximum for its result. In future cases there may be other helper functions as well, but the common trait is that, given a scalar input x, they also return a scalar value for func_x() to use.

from scipy.optimize import minimize_scalar
def func_y(x):
    return x**2 - 1/x  # x**2, not x^2 (bitwise XOR); see the comments
def func_x(x):
    return (x - 2) * func_y(x) * (x + 2)**2

res = minimize_scalar(func_x, bounds=(-10, 10), method='bounded')
res.x

Is there any way to enforce a constraint like func_y(x) > 1 within scipy.optimize.minimize_scalar()?

I checked the documentation; as far as I can tell, the bounds parameter only sets the optimization floor/ceiling for the scalar input x.

Based on user ekrall's suggestion, I also looked into scipy.optimize.minimize() with its constraints parameter:

from scipy.optimize import minimize

def constraint1(x):
    return func_y(x)-1

con1 = {'type': 'ineq', 'fun': constraint1}


which should enforce func_y(x) >= 1, since 'ineq' constraints in scipy.optimize.minimize() require the constraint function to return a non-negative value.
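For reference, here is a minimal sketch of how that could be wired up, reusing func_x(), func_y() and con1 from above (the starting point x0 = 5 and the SLSQP method are my own assumptions, not something specified in the question):

res = minimize(func_x, x0=5, method='SLSQP',
               bounds=[(-10, 10)],   # plays the role of minimize_scalar's bounds
               constraints=[con1])   # enforces func_y(x) >= 1
print(res.x, res.fun, res.success)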

hamhung
  • It sounds like you are wanting to solve a constrained optimization having multiple constraints. scipy.optimize.minimize() allows you to pass in constraints. – ekrall Jan 14 '22 at 03:24
  • I looked into it a little more and the other SO examples I found seem to concur. I'm just not sure whether the constraints would work for other types of helper/custom functions or if I was trying to use something like a linear regression model to generate the output of func_y() given x. – hamhung Jan 14 '22 at 03:53
  • Are you sure you mean `x^2` and not `x**2` in `func_y`? – azelcer Jan 15 '22 at 23:07
  • @azelcer you are correct on that error – hamhung Jan 21 '22 at 05:33

1 Answer


I would also advise you to use minimize. You just have to be aware of its limitations:

Constraints definition (only for COBYLA, SLSQP and trust-constr).

And also

Note that COBYLA only supports inequality constraints.

From this we conclude that either SLSQP or trust-constr must be chosen.

With trust-constr the results are fine:

from scipy.optimize import minimize

# func_y(x) <= 1, written as an 'ineq' dict constraint (fun(x) >= 0)
res = minimize(func_x, 5, method='trust-constr', bounds=[[-10, 10]],
               constraints=[{'type': 'ineq', 'fun': lambda x: 1 - func_y(x)}])
print(res.x, res.fun, func_y(res.x), res.success)
# func_y(x) >= 1
res = minimize(func_x, 5, method='trust-constr', bounds=[[-10, 10]],
               constraints=[{'type': 'ineq', 'fun': lambda x: func_y(x) - 1}])
print(res.x, res.fun, func_y(res.x), res.success)
# bounds only, for comparison
res = minimize(func_x, 5, method='trust-constr', bounds=[[-10, 10]])
print(res.x, res.fun, func_y(res.x), res.success)

gives

[1.32467216] [-7.4635986] [0.99985257]
[1.59008719] [-10.0354401] [1.89948096]
[1.59008719] [-10.0354401] [1.89948093]

However, this type of constraint does not work properly with SLSQP.

Another way to represent constraints is with NonlinearConstraint or LinearConstraint objects; in that case SLSQP works fine:

from scipy.optimize import NonlinearConstraint

# keep func_y(x) between 1 and 1.5
res = minimize(func_x, 5, method='trust-constr', bounds=[[-10, 10]],
               constraints=[NonlinearConstraint(func_y, lb=1, ub=1.5)])
print(res.x, res.fun, func_y(res.x))
res = minimize(func_x, 5, method='SLSQP', bounds=[[-10, 10]],
               constraints=[NonlinearConstraint(func_y, lb=1, ub=1.5)])
print(res.x, res.fun, func_y(res.x))

gives

[1.47559988] [-9.50009675] [1.49970451]
[1.47568652] [-9.50087235] [1.5]

An important detail is that the constraint func_y(x) > 1 splits the domain into two parts: the objective function is better on the left part, but starting from x0 = 5 the method will probably only explore the part on the right.

[figure: plot illustrating how the constraint func_y(x) > 1 splits the domain into two regions]
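As a rough illustration of that last point (the initial guesses below are my own choice, not part of the original answer), running the same constrained problem from a starting point on each side of the split shows the solver settling on whichever side it started from. This reuses func_x() and func_y() from the question:

import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

con = NonlinearConstraint(func_y, lb=1, ub=np.inf)  # func_y(x) >= 1, no upper bound

# start on the right-hand part of the domain
res_right = minimize(func_x, 5, method='trust-constr', bounds=[[-10, 10]], constraints=[con])
# start on the left-hand part of the domain
res_left = minimize(func_x, -5, method='trust-constr', bounds=[[-10, 10]], constraints=[con])

print(res_right.x, res_right.fun)
print(res_left.x, res_left.fun)

The two runs typically end at different local solutions, so if the left-hand part is the one of interest, the initial guess (or several initial guesses) has to be placed there.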

Bob