I'm quite new to Python. I'm trying to put together a portfolio allocation optimiser that minimises portfolio risk for a given target portfolio return, by varying the allocation vector. For this I'm using scipy.optimize.minimize. At the moment it works fine if I only use one constraint at a time, for example:
import numpy
from scipy import optimize

def total_constr_fun(allocation):
    return numpy.sum(allocation) - 1

total_constr = {'type': 'eq', 'fun': total_constr_fun}
optimalallocation = optimize.minimize(risk_fctn, starting_allocation,
                                      method='SLSQP', bounds=bds,
                                      constraints=total_constr)
where the bounds are 0% to 100% for each element of allocation. risk_fctn is essentially a standard-deviation function that takes allocation as its only argument and returns a scalar.
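In case the shape of these functions matters, they're essentially the textbook quadratic-form versions. A rough sketch with made-up toy numbers (cov_matrix and mean_returns here are placeholders, not my real data):

```python
import numpy

# Toy stand-ins for the real inputs (3 assets, made-up numbers).
cov_matrix = numpy.array([[0.10, 0.02, 0.04],
                          [0.02, 0.08, 0.01],
                          [0.04, 0.01, 0.12]])
mean_returns = numpy.array([0.05, 0.07, 0.11])

def risk_fctn(allocation):
    # Portfolio standard deviation: sqrt(w @ C @ w) -- a scalar.
    return numpy.sqrt(allocation @ cov_matrix @ allocation)

def return_fctn(allocation):
    # Expected portfolio return: w @ mu -- also a scalar.
    return allocation @ mean_returns
```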
However if I try using another constraint as well (which also works by itself), say:
target_value = inputs['target_value']  # a scalar defined earlier in the script

def target_constraint(allocation):
    return return_fctn(allocation) - target_value

cons = [{'type': 'eq', 'fun': target_constraint},
        {'type': 'eq', 'fun': total_constr_fun}]

optimize.minimize(risk_fctn, starting_allocation, method='SLSQP',
                  bounds=bds, constraints=cons)
then I get the error 'Singular matrix C in LSQ subproblem'. The same happens if I use a tuple instead of a list for cons.
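For completeness, here is a stripped-down, self-contained version of the two-constraint call (all numbers are made-up stand-ins for my real data, so this toy instance may not trigger the same error):

```python
import numpy
from scipy import optimize

# Made-up toy data standing in for the real inputs.
cov_matrix = numpy.array([[0.10, 0.02, 0.04],
                          [0.02, 0.08, 0.01],
                          [0.04, 0.01, 0.12]])
mean_returns = numpy.array([0.05, 0.07, 0.11])
target_value = 0.08                      # illustrative target return
starting_allocation = numpy.ones(3) / 3  # start from an equal split
bds = [(0.0, 1.0)] * 3                   # each weight between 0% and 100%

def risk_fctn(allocation):
    return numpy.sqrt(allocation @ cov_matrix @ allocation)

def return_fctn(allocation):
    return allocation @ mean_returns

cons = [{'type': 'eq', 'fun': lambda a: return_fctn(a) - target_value},
        {'type': 'eq', 'fun': lambda a: numpy.sum(a) - 1}]

result = optimize.minimize(risk_fctn, starting_allocation,
                           method='SLSQP', bounds=bds, constraints=cons)
```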
Strangely it works if I do this:
constraint = {'type': 'eq',
              'fun': lambda allocation: [numpy.sum(allocation) - 1,
                                         return_fctn(allocation) - target_value]}
optimize.minimize(risk_fctn, starting_allocation, method='SLSQP',
                  bounds=bds, constraints=constraint)
I would use this workaround (since it works), but I also need to add some inequality constraints, so clearly I can't fudge it that way. Any help or explanation would be very much appreciated!
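For concreteness, the kind of inequality constraint I have in mind looks like this (the 30% cap is just an illustrative number; as I understand it, SLSQP treats fun(x) >= 0 as feasible for 'ineq' constraints):

```python
# Hypothetical example: cap every asset at 30% of the portfolio.
max_weight = 0.30  # illustrative number, not from my real problem
n_assets = 3

# One 'ineq' dict per asset; i=i pins the loop index in each lambda.
ineq_cons = [{'type': 'ineq',
              'fun': lambda allocation, i=i: max_weight - allocation[i]}
             for i in range(n_assets)]
```

These dicts would just be appended to the cons list, which is why the single-dict lambda trick above doesn't scale.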