
I need some help with optimization functions in Python (SciPy). The problem is optimizing f(x) where x = [a, b, c, ..., n]. The constraints are that a, b, etc. must each be between 0 and 1, and sum(x) == 1. scipy.optimize.minimize seems best suited as it requires no derivative. How do I pass the constraints as arguments?

Enumerating candidate points with permutations is too slow. My present code is below:

import itertools as iter
import numpy as np

all = iter.permutations([0.0, .1, .2, .3, .4, .5, .6, .7, .8, .9, 1.0], 6)
all_legal = []
for i in all:
    # note: exact equality on floats is fragile; np.isclose(np.sum(i), 1) is safer
    if np.sum(i) == 1:
        all_legal.append(i)
print len(all_legal)
lmax=0
sharpeMax=0
for i in all_legal:
    if sharpeMax<getSharpe(i):
        sharpeMax=getSharpe(i)
        lmax=i
anand
  • Aside: [`iter`](http://docs.python.org/2/library/functions.html#iter) is the name of a built-in function, so it's not a good abbreviation for `itertools`. – DSM Sep 12 '13 at 14:57
  • It's not immediately clear what you want to optimize. Can you describe the problem in a bit more detail? – Pascal Bugnion Sep 12 '13 at 15:05
  • I heavily recommend reading the chapter about [Optimization](http://scipy-lectures.github.io/advanced/mathematical_optimization/) in the Scipy lectures. Well worth a read. – F.X. Sep 12 '13 at 15:13

3 Answers


You can do a constrained optimization with COBYLA or SLSQP, as it says in the docs. Note that COBYLA only handles inequality constraints, so for the equality constraint sum(x) == 1 you want SLSQP.

import numpy as np
from scipy.optimize import minimize

start_pos = np.ones(6) * (1 / 6.)  # or whatever starting point you like

# Equality constraint: one minus the sum of all variables must be zero
cons = ({'type': 'eq', 'fun': lambda x: 1 - sum(x)},)

# Bounds: each variable must stay in [0, 1]
bnds = tuple((0, 1) for x in start_pos)

Combine these in the call to minimize:

res = minimize(getSharpe, start_pos, method='SLSQP', bounds=bnds, constraints=cons)

Note that minimize minimizes; since you want to maximize the Sharpe ratio, pass a negated wrapper instead, e.g. lambda x: -getSharpe(x).
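For reference, here is a minimal self-contained sketch of the whole call. Since getSharpe isn't shown in the question, the stand-in objective below is purely illustrative, not part of the original answer:

import numpy as np
from scipy.optimize import minimize

# Stand-in objective: a toy function minimized at equal weights.
# Replace with your real (negated) Sharpe-ratio function.
def objective(x):
    return np.sum((x - 1. / 6.) ** 2)

start_pos = np.ones(6) / 6.
cons = ({'type': 'eq', 'fun': lambda x: 1 - np.sum(x)},)
bnds = tuple((0, 1) for _ in start_pos)

res = minimize(objective, start_pos, method='SLSQP', bounds=bnds, constraints=cons)
print(res.x)        # optimal weights, summing to 1
print(res.success)  # True when SLSQP converged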
Daniel

Check the scipy.optimize.minimize docstring:

scipy.optimize.minimize(fun, x0, args=(), method='BFGS', jac=None, hess=None, hessp=None, \
              bounds=None, constraints=(), tol=None, callback=None, options=None)

What matters most in your case are the bounds. To constrain each parameter to [0,1] (or (0,1)?), you need to define a bound for each variable, such as:

bounds = ((0, 1), (0, 1), ...)  # one (min, max) pair per variable

Now, the other part: sum(x) == 1. There may be more elegant ways to do it, but consider this: instead of minimizing f(x), you minimize h = lambda x: f(x) + g(x), a new function that adds a penalty term g(x) which reaches its minimum when sum(x) == 1, such as g = lambda x: (sum(x) - 1)**2.

With a large enough weight on g, the minimum of h(x) is reached approximately where f(x) is small and the constraint sum(x) == 1 holds. This is a penalty method, loosely in the spirit of the Lagrange multiplier method: http://en.wikipedia.org/wiki/Lagrange_multiplier
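A minimal sketch of this penalty approach; the weight mu and the toy objective f are assumptions for illustration, and in practice you would tune mu and substitute your own objective:

import numpy as np
from scipy.optimize import minimize

mu = 1e4  # penalty weight: larger values enforce sum(x) == 1 more strictly

def f(x):
    # toy objective standing in for the real one
    return np.sum(x ** 2)

# penalized objective: minimizing h pushes sum(x) toward 1
h = lambda x: f(x) + mu * (np.sum(x) - 1) ** 2

res = minimize(h, np.ones(6) / 6., bounds=[(0, 1)] * 6)
print(res.x, np.sum(res.x))  # the weights should sum to approximately 1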

CT Zhu

Another way of weighting variables where the sum of the weights is constrained to equal 1 is to use minimize with no constraints: initialize with near-zero values and apply a softmax in the scoring function, since softmax output is non-negative and always sums to 1.

import numpy as np
from scipy.special import softmax
from scipy.optimize import minimize

n_weights = 6  # number of weights being optimized

initial_weights = np.random.normal(scale=0.01, size=n_weights)

def getSharpe(x):
    weights = softmax(x)  # non-negative, sums to 1
    # ... Sharpe ratio calculation using `weights` goes here ...
    return -score  # negated, since minimize minimizes

# Any optimization method can now be used - we are not limited to SLSQP
res = minimize(getSharpe, initial_weights, method='L-BFGS-B')

Since any optimizer can be used, parallel algorithms can give a speed-up, e.g. optimparallel. Note that the final weights are softmax(res.x), not res.x itself.
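A sketch of the parallel variant, building on the snippet above and assuming the optimparallel package is installed and exposes minimize_parallel with a scipy-like interface (check the package docs; this is not part of the original answer):

# pip install optimparallel
from optimparallel import minimize_parallel

# Same objective as above; the function evaluations for the finite-difference
# gradient run in parallel workers (L-BFGS-B only).
res = minimize_parallel(getSharpe, initial_weights)
print(softmax(res.x))  # recover the final weights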

Anjum Sayed
  • I'm intrigued! So I've gone and asked [What problems does softmax() solve and when should I think of using it - in simple terms](https://scicomp.stackexchange.com/q/42148/17869) in SciComp SE. – uhoh Nov 12 '22 at 23:34