I'm trying to optimize the function `objective`. `full_constr_data` consists of 6 types of goals; each goal is divided by years, and each year is represented by project-based data. So I'm weighting `full_constr_data` project-wise by the argument `x` of `objective`: e.g. `full_constr_data[0][2][3] * x[3]` means that goal #0, year #2, project #3 is weighted by `x[3]`. The results are stored in the variable `full_constr_data_weighted`.
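To make the indexing concrete, here is a minimal sketch of the weighting step with tiny made-up data (2 goals, 2 years, 3 projects; the shapes and values are illustrative, not the real dataset):

```python
# Toy stand-in for full_constr_data: goal -> year -> per-project values
# (here each "project" entry is a scalar; in the real data it is a distribution)
full_constr_data = [
    [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]],   # goal #0, years #0 and #1
    [[7.0, 8.0, 9.0], [1.5, 2.5, 3.5]],   # goal #1, years #0 and #1
]
x = [0.5, 1.0, 0.0]  # one weight per project

# Weight every project entry by its x[m], keeping the goal/year structure
weighted = [[[proj * x[m] for m, proj in enumerate(year)] for year in goal]
            for goal in full_constr_data]

print(weighted[0][0])  # goal #0, year #0 -> [0.5, 2.0, 0.0]
```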
The next step is a project-wise sum of `full_constr_data_weighted`. For example, summing all projects for goal #0, year #2:

`full_constr_data_weighted[0][2][0] + full_constr_data_weighted[0][2][1] + ... + full_constr_data_weighted[0][2][n]`

where `n` is the total number of projects. The result is stored in the variable `full_sum`.
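Since each project entry for a given (goal, year) is a same-length sample vector, this sum collapses into a single `np.sum` over the project axis; a minimal sketch with made-up numbers:

```python
import numpy as np

# Three projects for one (goal, year) cell, each a distribution of 4 samples
cell = [np.array([1.0, 2.0, 3.0, 4.0]),
        np.array([10.0, 20.0, 30.0, 40.0]),
        np.array([100.0, 200.0, 300.0, 400.0])]

# Element-wise sum across projects: the repeated
# temp2 = temp2 + ... loop collapsed into one call
full_sum_cell = np.sum(cell, axis=0)
print(full_sum_cell)  # [111. 222. 333. 444.]
```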
After that I calculate probabilities. I take quantiles from the variable `constr_mod` and, based on each value, calculate the probability of exceeding that quantile. `constr_mod` and `full_sum` have exactly the same structure; however, for each goal # and year #, `constr_mod` contains a single value, while `full_sum` holds a vector of values (a distribution). The calculated probabilities are stored in the variable `my_prob`.
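The exceedance probability for one (goal, year) cell is just the fraction of samples above the corresponding quantile; a minimal sketch with made-up values:

```python
import numpy as np

full_sum_cell = np.array([5.0, 15.0, 25.0, 35.0])  # distribution for one goal/year
quantile = 20.0                                     # matching entry of constr_mod

# P(full_sum > quantile): the mean of a boolean mask is the fraction of True
prob = np.mean(full_sum_cell > quantile)
print(prob)  # 0.5
```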
Finally, I sum up all the probabilities in `my_prob`. This sum is what I have to optimize: make it as large as possible (note the minus sign in the `return` statement, since the solver minimizes).
The optimization problem has a single inequality constraint: the sum of the vector `obj` weighted by `x` should be larger than 1000. Interpret `obj` as the NPV of each project.
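The constraint is a linear dot product, so a feasibility check needs no 30-term expression; a minimal sketch with a made-up `obj` vector (the real values come from `my_data.spydata`):

```python
import numpy as np

obj = np.array([100.0] * 30)   # made-up NPVs, one per project
x = np.array([0.5] * 30)       # candidate weights in [0, 1]

lhs = float(np.dot(obj, x))    # sum of obj weighted by x
feasible = lhs >= 1000
print(lhs, feasible)  # 1500.0 True
```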
I use `diffev2` from the package `mystic`. The variables `full_constr_data`, `constr_mod`, `pen_mult`, and `obj` are stored in the file `my_data.spydata` (download via Google Drive).
Unfortunately, the optimization didn't converge:

```
Generation 11980 has ChiSquare: inf
Generation 11990 has ChiSquare: inf
Generation 12000 has ChiSquare: inf
STOP("EvaluationLimits with {'evaluations': 1200000, 'generations': 12000}")
```

Any suggestions on how to solve this non-convex problem?
```python
import numpy as np
from mystic.solvers import diffev2
from mystic.monitors import VerboseMonitor
import mystic.symbolic as ms

def objective(x):
    # 'full_constr_data' weighted project-wise by argument 'x'
    full_constr_data_weighted = []
    for i in range(len(full_constr_data)):
        temp = []
        for k in range(len(full_constr_data[i])):
            temp.append([full_constr_data[i][k][m] * x[m]
                         for m in range(len(full_constr_data[i][k]))])
        full_constr_data_weighted.append(temp)
    # Project-wise sum of weighted data
    full_sum = []
    for i in range(len(full_constr_data_weighted)):
        temp = []
        for j in range(len(full_constr_data_weighted[i])):
            temp2 = np.zeros(len(full_constr_data_weighted[i][j][0]))
            for k in range(len(full_constr_data_weighted[i][j])):
                temp2 = temp2 + full_constr_data_weighted[i][j][k]
            temp.append(temp2)
        full_sum.append(temp)
    # Probability of exceeding each quantile in 'constr_mod'
    my_prob = []
    for i in range(len(full_sum)):
        temp = []
        for j in range(len(full_sum[i])):
            temp.append(sum(full_sum[i][j] > constr_mod[i][j]) / len(full_sum[i][j]))
        my_prob.append(np.array(temp))
    # Probability data weighted by 'pen_mult'
    my_prob_weighted = list(np.array(my_prob) * np.array(pen_mult))
    # Sum of all weighted probability data (function to maximize)
    sum_prob = sum([sum(my_prob_weighted[i]) for i in range(len(my_prob_weighted))])
    return -sum_prob

# Inequality constraint: sum of 'obj' weighted by 'x' must exceed 1000
equation = ' + '.join('x{0}*{1}'.format(i, obj[i]) for i in range(30)) + ' >= 1000'
cf = ms.generate_constraint(ms.generate_solvers(ms.simplify(equation)))

bounds = [(0, 1)] * 30
mon = VerboseMonitor(10)
result = diffev2(objective, x0=bounds, bounds=bounds, constraints=cf,
                 npop=40, gtol=200, disp=False, full_output=True, itermon=mon)
```