
I am working with a fairly complex objective function that I am minimizing by varying 4 parameters. A while ago I decided to use the Python framework Mystic, which seamlessly allows me to use penalties for complex inequalities (which I need).

However, Mystic has a less-than-obvious way to assign hard constraints (by which I mean linear inequalities between parameters, not penalty-style inequalities and not bound constraints), and an even less obvious way of handling them.

All my 4 parameters have finite lower and upper bounds. I would like to add a linear inequality as a hard constraint like this:

def constraint(x):  # needs to be <= 0
    return x[0] - 3.0*x[2]

But if I try to use Mystic in this way:

from mystic.solvers import fmin_powell
xopt = fmin_powell(OF, x0=x0, bounds=bounds, constraints=constraint)

Then Mystic insists on calling the objective function to resolve the constraints first, and only then proceeds with the actual optimization. Since the value of the objective function has no effect on the constraint function as defined above, I am not sure why this happens: the constraint function simply tells Mystic that a region of the parameter search space should be off limits.

I have scoured pretty much all the examples in the Mystic folder and stumbled across an alternative way to define a hard constraint: use a penalty function and then call a function, "as_constraint", to "convert" it to a constraint. Unfortunately, all those examples go pretty much this way:

from mystic.solvers import fmin_powell
from mystic.constraints import as_constraint
from mystic.penalty import quadratic_inequality

def penalty_function(x): # <= 0.0
    return x[0] - 3.0*x[2]

@quadratic_inequality(penalty_function)
def penalty(x):
    return 0.0

solver = as_constraint(penalty)

result = fmin_powell(OF, x0=x0, bounds=bounds, penalty=penalty)

There is this magic line:

solver = as_constraint(penalty)

I can't see what it is doing: the solver variable is never used again.

So, for the question: is there any way to define linear inequalities in Mystic that do not involve an expensive pre-solve of the constraints but simply tell Mystic to exclude certain regions of the search space?

Thank you in advance for any suggestion.

Andrea.

Infinity77
  • `solver` isn't used in the optimization... it's just showing how to convert penalties to constraints. If you wanted to use it, you'd add `constraints=solver` to `fmin_powell`'s kwds. – Mike McKerns Jan 05 '19 at 20:18

1 Answer


What mystic does is map the space it searches, so you are optimizing over a "kernel-transformed" space (to use machine-learning jargon). You can think of the constraints as applying an operator, if you know what that means: y = f(x) under some constraints x' = c(x) becomes y = f(c(x)). This is why the optimizer evaluates the constraints before evaluating the objective.
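As a concept sketch of that composition (plain Python, not mystic's actual machinery; `f` and `c` are toy stand-ins):

```python
# Concept sketch (not mystic internals): a constraint acts as an operator c
# that maps any candidate x into the feasible region, so the solver
# effectively minimizes f(c(x)) instead of f(x).

def f(x):
    """Toy objective: sum of squares."""
    return sum(xi**2 for xi in x)

def c(x):
    """Toy constraint operator enforcing x[0] <= 3*x[2] by projection."""
    x = list(x)
    if x[0] > 3*x[2]:
        x[0] = 3*x[2]  # project onto the boundary of the feasible region
    return x

def constrained_f(x):
    # The solver only ever sees the composed function f(c(x)).
    return f(c(x))

print(constrained_f([10, 0, 1]))  # c maps [10, 0, 1] -> [3, 0, 1], so 9 + 0 + 1 = 10
```

Feasible points pass through `c` unchanged; infeasible ones are mapped onto the feasible region before the objective is ever called.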

So you can build a constraint like this:

>>> import mystic.symbolic as ms
>>> equation = 'x1 - 3*a*x2 <= 0'
>>> eqn = ms.simplify(equation, locals=dict(a=1), all=True)
>>> print(eqn)
x1 <= 3*x2
>>> c = ms.generate_constraint(ms.generate_solvers(eqn, nvars=3))
>>> c([1,2,3])
[1, 2, 3]
>>> c([0,100,-100])
[0, -300.0, -100]

Or if you have more than one:

>>> equation = '''
... x1 > x2 * x0     
... x0 + x1 < 10
... x1 + x2 > 5
... '''
>>> eqn = ms.simplify(equation, all=True)
>>> print(eqn)
x1 > -x2 + 5
x0 < -x1 + 10
x1 > x0*x2
>>> import mystic.constraints as mc
>>> c = ms.generate_constraint(ms.generate_solvers(eqn), join=mc.and_)
>>> c([1,2,3])
[1, 3.000000000000004, 3]
Mike McKerns
  • Hi Mike, thank you for your answer. If I understand correctly, defining a constraint as you did above will not require running a nonlinear solver inside the main nonlinear solver (to satisfy the constraints). That was my main issue, as evaluating the objective function is an expensive operation, while evaluating the constraints as defined above is basically instantaneous. Did I understand correctly? – Infinity77 Jan 06 '19 at 09:36
  • Well... close, but not completely. The `as_constraint` method is the most general, but the drawback is that it's very slow, as it always runs an optimization at each function evaluation. The method I've provided in my answer is generally much faster, however it can still require an inner optimization. It tries a newly selected parameter vector first, then failing that, tweaks one or more parameters using a fast optimizer and some randomness, and does so iteratively until a valid point is found. If you want speed, use a penalty. If you want robustness, use constraints -- or mix and match. – Mike McKerns Jan 06 '19 at 16:15
  • Thanks Mike, I understand now. Once again I’ll stick with penalties then; I naively thought that the “constraints” would simply wipe out part of the search space. As things stand, it seems that no matter what I do a hard constraint may require an inner optimization, which is almost unthinkable considering the complexity and time requirements of my objective function. – Infinity77 Jan 06 '19 at 18:26
  • A constraint does wipe out the search space -- if you have an analytic function to do so, it's **really** fast. However, if you don't, the best thing that can be done is to have `mystic` maintain the constraints with a numerical mapping of the search space. – Mike McKerns Jan 06 '19 at 21:19