
I am currently using Chemical Reaction Network Theory (CRNT) to study bistability in insulin signaling. Using CRNT, I have reached a point where I need to solve a constrained global optimization problem to obtain saddle nodes. I have looked at multiple algorithms, but since my problem is nonlinear, non-convex, and possibly multi-modal, only a handful of methods seem appropriate, and Differential Evolution (DE) looks like the most suitable one to begin with. Since I don't have expertise in optimization, I was looking for a Python library that could act as a black box around my objective function and constraints. After a quick search, I found that mystic provides a DE function that looks fairly simple to use. However, when I run it, I obtain parameter values that are outside the bounds I prescribed.

I have already tested the DE function on a very simple problem and obtained good results. I have also tried large values for npop, gtol, and maxiter. A value of around 5000 for npop gives values close to the range I want, but some values are still outside it (possibly a very large npop would give the results I desire). Nothing seems to fix this issue of parameter values falling outside the bounds I specified. Below is the exact code that I am running.

import mystic
from mystic.penalty import quadratic_equality
from mystic.solvers import diffev2
from mystic.constraints import as_constraint

def objective(x):
    k15,k51,k16,k82,k28,k27,k93,k39,k34,a1,a2,c4 = x
    c1 = -a2*(k34 + k39)/(c4*k34*k93)
    c2 = c4*k34*k93*(a1 - a2)*(k15 + k16)/(a2*k16*k51*(k34 + k39))
    c3 = -a1*(k27 + k28)/(c4*k27*k82)
    c5 = -(a1 - a2)/k16
    c6 = -a1/k27
    c7 = -a2/k34
    return 1e10*((1.0*c1**2*c4*k34*k51*k82*k93 + 1.0*c1**2*k27*k34*k51*k93 + 1.0*c1**2*k28*k34*k51*k93 - 2.0*c1*c2*c4*k16*k51*k82*k93 +
     1.0*c1*c2*c4*k34*k51*k82*k93 - 2.0*c1*c2*k16*k27*k51*k93 - 2.0*c1*c2*k16*k28*k51*k93 + 1.0*c1*c2*k27*k34*k51*k93 + 1.0*c1*c2*k28*k34*k51*k93 -
     2.0*c1*c3*c4*k27*k51*k82*k93 - 1.0*c1*c3*c4*k34*k51*k82*k93 - 1.0*c1*c3*k27*k34*k51*k82 - 1.0*c1*c3*k27*k39*k51*k82 - 1.0*c1*c4**2*k34*k51*k82*k93 +
     1.0*c1*c4*k15*k34*k82*k93 + 1.0*c1*c4*k16*k34*k82*k93 - 1.0*c1*c4*k27*k34*k51*k93 - 1.0*c1*c4*k28*k34*k51*k93 + 1.0*c1*k15*k27*k34*k93 + 1.0*c1*k15*k28*k34*k93 +
     1.0*c1*k16*k27*k34*k93 + 1.0*c1*k16*k28*k34*k93 - 1.0*c2*c3*k16*k34*k51*k82 - 1.0*c2*c3*k16*k39*k51*k82 - 1.0*c2*c3*k27*k34*k51*k82 - 1.0*c2*c3*k27*k39*k51*k82 -
     1.0*c2*c4*k16*k34*k51*k82 - 1.0*c2*c4*k16*k39*k51*k82 - 1.0*c2*k16*k27*k34*k51 - 1.0*c2*k16*k27*k39*k51 - 1.0*c2*k16*k28*k34*k51 - 1.0*c2*k16*k28*k39*k51 -
     2.0*c3*c4*k15*k27*k82*k93 - 1.0*c3*c4*k15*k34*k82*k93 - 2.0*c3*c4*k16*k27*k82*k93 - 1.0*c3*c4*k16*k34*k82*k93 - 1.0*c3*k15*k27*k34*k82 - 1.0*c3*k15*k27*k39*k82 -
     1.0*c3*k16*k27*k34*k82 - 1.0*c3*k16*k27*k39*k82 - 1.0*c4**2*k15*k34*k82*k93 - 1.0*c4**2*k16*k34*k82*k93 - 1.0*c4*k15*k27*k34*k93 - 1.0*c4*k15*k28*k34*k93 -
     1.0*c4*k16*k27*k34*k93 - 1.0*c4*k16*k28*k34*k93)**2/(k16**2*k27**2*k34**2*k51**2*k82**2*k93**2))

#c1
def penalty1(x):
    k15,k51,k16,k82,k28,k27,k93,k39,k34,a1,a2,c4 = x
    return -a2*(k34 + k39)/(c4*k34*k93)
#c2
def penalty2(x):
    k15,k51,k16,k82,k28,k27,k93,k39,k34,a1,a2,c4 = x
    return c4*k34*k93*(a1 - a2)*(k15 + k16)/(a2*k16*k51*(k34 + k39))

#c3
def penalty3(x):
    k15,k51,k16,k82,k28,k27,k93,k39,k34,a1,a2,c4 = x
    return -a1*(k27 + k28)/(c4*k27*k82)

#c5
def penalty4(x):
    k15,k51,k16,k82,k28,k27,k93,k39,k34,a1,a2,c4 = x
    return -(a1 - a2)/k16

#c6
def penalty5(x):
    k15,k51,k16,k82,k28,k27,k93,k39,k34,a1,a2,c4 = x
    return -a1/k27

#c7
def penalty6(x):
    k15,k51,k16,k82,k28,k27,k93,k39,k34,a1,a2,c4 = x
    return -a2/k34

@quadratic_equality(penalty1, k=1e12)
@quadratic_equality(penalty2, k=1e12)
@quadratic_equality(penalty3, k=1e12)
@quadratic_equality(penalty4, k=1e12)
@quadratic_equality(penalty5, k=1e12)
@quadratic_equality(penalty6, k=1e12)
def penalty(x):
    return 0.0

solver = as_constraint(penalty)

b1 = (1e-1,1e2)
b2 = (1e-1,1e2)
b3 = (1e-1,1e2)
b4 = (1e-1,1e2)
b5 = (1e-1,1e2)
b6 = (1e-1,1e2)
b7 = (1e-1,1e2)
b8 = (1e-1,1e2)
b9 = (1e-1,1e2)
b10 = (-1e-1,0)
b11 = (-1e-1,0)
b12 = (1e-1,1e2)


bounds = [b1,b2,b3,b4,b5,b6,b7,b8,b9,b10,b11,b12]

#npop should be at least 10*dim, where dim is the number of parameters
result = diffev2(objective, x0=bounds, bounds=bounds, penalty=penalty, npop=120, disp=False, full_output=True, gtol=100, maxiter=6000)

print("")
print("The minimized objective function value:")
print(result[1])
print("")
print("Parameter values:")
print(result[0])

Running this, I obtain the following output.

The minimized objective function value: 1.216082506137729

Parameter values: [1.07383892e-01 9.99893116e+01 8.88912946e+01 9.99859090e+01 1.09022526e-01 9.99587677e+01 9.70349805e+01 1.23842240e+01 4.72484236e+00 -1.01491728e-08 -1.01491720e-08 1.00002390e-01]

As you can see, the returned parameter vector contains the values -1.01491728e-08 and -1.01491720e-08 for parameters that should be in the range (-0.1, 0).

Have I implemented or interpreted something incorrectly in mystic, or do I need to explore a different algorithm for my optimization problem? If a different algorithm would be better, would you suggest Scatter Search (SS)? Also, does mystic provide functions that can perform SS, or would I need to implement it myself?

Any help would be appreciated.

  • Also note that I'm unable to reproduce your results, and I'm not seeing solutions outside the bounds. – Mike McKerns Jan 26 '19 at 14:03
  • Thank you for the quick response Mike McKerns. I made the very embarrassing mistake of thinking -1.01491728e-08 was smaller than -0.1. You are correct, everything seems to be working correctly with respect to the bounds. As for the constraints, doesn't solver = as_constraint(penalty) make the penalty a constraint for the DE algorithm, if not, what is this doing? I will check out the mystic.ensemble solvers, thank you for the suggestion. – B. Reyes Jan 28 '19 at 17:22
  • `as_constraint` does do that... but it's very inefficient, and there are much more efficient ways to do the same thing. – Mike McKerns Jan 29 '19 at 01:52
  • I see, thank you for your help. – B. Reyes Jan 29 '19 at 16:07

1 Answer


I'm the mystic author. The first thing to note is that you are using penalty functions. Penalties don't restrict the search space for an optimizer; they add a penalty to the objective that "encourages" the optimizer not to settle on solutions in the region where the penalty is nonzero. If you want to restrict the solutions to only the regions of space where the constraints are valid, use constraints instead.
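The distinction can be sketched in plain Python (a toy illustration of the general idea, not mystic's internals): a penalty leaves the candidate where it is and inflates the objective, while a constraint maps each candidate into the feasible set before the objective is ever evaluated.

```python
def objective(x):
    return x * x

# Penalty: the optimizer may still visit x < 1; it just pays for it.
def penalized(x, k=1e6):
    violation = max(0.0, 1.0 - x)   # feasible region is x >= 1
    return objective(x) + k * violation**2

# Constraint: every candidate is projected into the feasible region first.
def constrained(x):
    x = max(x, 1.0)                 # project onto x >= 1
    return objective(x)

print(penalized(0.5))    # 250000.25: the 0.25 objective plus 1e6 * 0.25 penalty
print(constrained(0.5))  # 1.0, evaluated at the projected point x = 1.0
```

With a penalty, an infeasible point can still be returned as the "best" solution if the penalty weight is too small relative to the objective; with a constraint, an infeasible point is never evaluated at all.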

However, this doesn't address why your bounds weren't respected; they should be. There are reasons why they might not be, such as the optimizer never finding any good solutions during the course of the optimization. It's well known that DE can perform poorly for large numbers of parameters with tight bounds. I'd attach a monitor, so you can see how the parameters change over time. If you then see the parameters go from "valid" to outside the bounds, you've found a bug, and please report it on GitHub.

Even in the latter case, you could enforce the constraints even more strongly by adding a constraint object that uses a sequence of inequalities to ensure the parameters stay within your selected range... however, I suspect something is going on that can be discerned by attaching a VerboseMonitor.
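As a rough sketch of what such a constraint does (plain NumPy, with a hypothetical helper name, not mystic's actual constraints API): it takes a candidate vector and returns one that satisfies the inequalities, here by clipping each parameter to its box.

```python
import numpy as np

def make_box_constraint(bounds):
    """Return a function that maps any candidate vector into the box."""
    lo, hi = np.array(bounds, dtype=float).T
    def constraint(x):
        # every component is forced back into [lo_i, hi_i]
        return np.clip(x, lo, hi)
    return constraint

bounds = [(1e-1, 1e2)] * 3 + [(-1e-1, 0.0)]
project = make_box_constraint(bounds)
print(project([200.0, 0.05, 1.0, -5.0]))  # each out-of-box value is clipped to its bound
```

A solver that applies such a map to every candidate can never return a point outside the box, regardless of how the penalties behave.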

mystic does provide something similar to a scatter search algorithm. Check out the mystic.ensemble solvers.
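The ensemble idea is similar in spirit to scatter search: launch many searches from points spread across the box and keep the best result. A toy multistart sketch in plain Python (hypothetical helpers `local_search` and `multistart`, not mystic's ensemble API):

```python
import random

def local_search(f, x, lo, hi, steps=500, step=0.3):
    # crude stochastic refinement of a single start point
    best = f(x)
    for _ in range(steps):
        cand = [min(hi, max(lo, xi + random.uniform(-step, step))) for xi in x]
        fc = f(cand)
        if fc < best:
            best, x = fc, cand
    return best, x

def multistart(f, lo, hi, dim, nstarts=20, seed=0):
    random.seed(seed)
    results = []
    for _ in range(nstarts):
        # scatter start points over the box, refine each, keep the best
        x0 = [random.uniform(lo, hi) for _ in range(dim)]
        results.append(local_search(f, x0, lo, hi))
    return min(results)  # (best value, best point)

fbest, xbest = multistart(lambda x: sum(xi**2 for xi in x), -5.0, 5.0, 2)
print(fbest)  # close to 0 for this convex toy problem
```

mystic's ensemble solvers do this far more efficiently, using real solvers from lattice- or pseudo-randomly distributed start points.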

UPDATE: showing results I'm consistently getting

>>> mon = mystic.monitors.VerboseMonitor()
>>> result = diffev2(objective, x0=bounds, bounds=bounds, penalty=penalty, npop=40, disp=False, full_output=True, gtol=100, maxiter=6000, itermon=mon)
Generation 0 has ChiSquare: 2070417340.327223
Generation 10 has ChiSquare: 2070417340.327223
Generation 20 has ChiSquare: 1504378818.925922
Generation 30 has ChiSquare: 49226009.999661
Generation 40 has ChiSquare: 49226009.999661
Generation 50 has ChiSquare: 49226009.999661
Generation 60 has ChiSquare: 49226009.999661
Generation 70 has ChiSquare: 26549433.119812
Generation 80 has ChiSquare: 15513978.070527
Generation 90 has ChiSquare: 2637293.216637
Generation 100 has ChiSquare: 2466027.220625
Generation 110 has ChiSquare: 857082.416445
Generation 120 has ChiSquare: 409397.900243
Generation 130 has ChiSquare: 61023.902060
Generation 140 has ChiSquare: 61023.902060
Generation 150 has ChiSquare: 34911.899051
Generation 160 has ChiSquare: 22564.321601
Generation 170 has ChiSquare: 3078.667678
Generation 180 has ChiSquare: 3078.667678
Generation 190 has ChiSquare: 1233.029461
Generation 200 has ChiSquare: 1233.029461
Generation 210 has ChiSquare: 161.823695
Generation 220 has ChiSquare: 43.540529
Generation 230 has ChiSquare: 42.662921
Generation 240 has ChiSquare: 16.988486
Generation 250 has ChiSquare: 16.988486
Generation 260 has ChiSquare: 16.988486
Generation 270 has ChiSquare: 8.237803
Generation 280 has ChiSquare: 8.237803
Generation 290 has ChiSquare: 5.994087
Generation 300 has ChiSquare: 5.597002
Generation 310 has ChiSquare: 5.597002
Generation 320 has ChiSquare: 4.998805
Generation 330 has ChiSquare: 4.722383
Generation 340 has ChiSquare: 4.544368
Generation 350 has ChiSquare: 4.544368
Generation 360 has ChiSquare: 4.332436
Generation 370 has ChiSquare: 3.711041
Generation 380 has ChiSquare: 1.618530
Generation 390 has ChiSquare: 1.342488
Generation 400 has ChiSquare: 1.279087
Generation 410 has ChiSquare: 1.266669
Generation 420 has ChiSquare: 1.233121
Generation 430 has ChiSquare: 1.225398
Generation 440 has ChiSquare: 1.225398
Generation 450 has ChiSquare: 1.224930
Generation 460 has ChiSquare: 1.220611
Generation 470 has ChiSquare: 1.220611
Generation 480 has ChiSquare: 1.219702
Generation 490 has ChiSquare: 1.217958
Generation 500 has ChiSquare: 1.217265
Generation 510 has ChiSquare: 1.217095
Generation 520 has ChiSquare: 1.216879
Generation 530 has ChiSquare: 1.216421
Generation 540 has ChiSquare: 1.216096
Generation 550 has ChiSquare: 1.215315
Generation 560 has ChiSquare: 1.215274
Generation 570 has ChiSquare: 1.215125
STOP("ChangeOverGeneration with {'tolerance': 0.005, 'generations': 100}")
>>> lb,ub = zip(*bounds)
>>> result[0] < ub
array([ True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True])
>>> result[0] > lb
array([ True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True])
>>> 
Mike McKerns