
I would like to put an upper limit on the sum of abs(w) in a scipy optimization problem. In a linear program this can be done with dummy variables, e.g. y >= w, y >= -w, sum(y) <= K (see the linprog sketch after the code below), but I cannot figure out how to formulate it in the scipy.optimize framework.

A code example is below. It runs, but the total portfolio gross is not fixed. This is a long/short portfolio optimization where the w's sum to zero, and I want abs(w) to sum to 1.0. Is there a way to add this second constraint in scipy's framework?

import numpy as np
import scipy.optimize as sco

def optimize(alphas, cov, maxRisk):
    # portfolio variance w' C w
    def _calcRisk(w):
        return np.dot(np.dot(w.T, cov), w)
    # expected alpha, negated because we minimize
    def _calcAlpha(w):
        return -np.dot(alphas, w)
    constraints = (
            {'type': 'eq', 'fun': lambda w: np.sum(w)},                          # market neutral: sum(w) = 0
            {'type': 'ineq', 'fun': lambda w: maxRisk*maxRisk - _calcRisk(w)} )  # variance cap
    n = len(alphas)
    bounds = tuple((-1, 1) for x in range(n))
    initw = n * [0.00001 / n]
    result = sco.minimize(_calcAlpha, initw, method='SLSQP',
                          bounds=bounds, constraints=constraints)
    return result
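
For reference, here is the dummy-variable formulation from the first paragraph written as a pure linear program with scipy.optimize.linprog. This is a minimal sketch of the LP trick only (the helper name lpSketch is mine); it necessarily drops the quadratic risk constraint, which a linear program cannot express:

import numpy as np
from scipy.optimize import linprog

# variables z = [w, y]: maximize alphas @ w subject to
# sum(w) = 0, y >= w, y >= -w, sum(y) <= K
def lpSketch(alphas, K=1.0):
    n = len(alphas)
    c = np.concatenate([-np.asarray(alphas), np.zeros(n)])  # minimize -alpha @ w
    I = np.eye(n)
    A_ub = np.vstack([
        np.hstack([ I, -I]),                 #  w - y <= 0  (y >=  w)
        np.hstack([-I, -I]),                 # -w - y <= 0  (y >= -w)
        np.append(np.zeros(n), np.ones(n)),  #  sum(y) <= K
    ])
    b_ub = np.append(np.zeros(2 * n), K)
    A_eq = np.append(np.ones(n), np.zeros(n)).reshape(1, -1)  # sum(w) = 0
    return linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[0.0],
                   bounds=[(-1, 1)] * n + [(0, 1)] * n)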
Henry
  • Is short selling allowed in your setting? If not, consider changing `bounds = tuple((-1, 1) for x in range(n))` to `bounds = tuple((0, 1) for x in range(n))`; otherwise you allow negative weights in certain assets. My answer holds in either case, though. – 7shoe Aug 13 '22 at 05:07
  • You should be careful here: The constraint abs(w) = 1 is not differentiable at w = 0, which could lead to odd results as soon as one element of w gets close to zero during the optimization. – joni Aug 13 '22 at 08:33

2 Answers


A simple algebraic trick will do. Since equality constraints tacitly mean that the constraint function's result is to be zero, you just shift the function's output by 1.0: np.sum(w) - 1.0 = 0.0 is equivalent to np.sum(w) = 1.0. See the documentation on scipy.optimize.minimize. In turn, just change the line

{'type': 'eq', 'fun': lambda w:  np.sum(w)},

to

{'type': 'eq', 'fun': lambda w:  np.sum(w) - 1.0}
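
With that change, the full constraints tuple becomes:

constraints = (
        {'type': 'eq', 'fun': lambda w:  np.sum(w) - 1.0},
        {'type': 'ineq', 'fun': lambda w: maxRisk*maxRisk - _calcRisk(w)} )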
7shoe

Thanks to the folks who responded. The answer is to make the free-variable vector bigger and slice from it to get the variables as needed (obvious in hindsight :-). The following works (use at your own risk, of course):

import numpy as np
import scipy.optimize as sco

# Bind the current values of i and n so the returned lambda is "final"
# and does not change when i (or n) changes later
def makeFinalLambda(i, n, op):
    if op == '+':
        return lambda w: w[n+i] + w[i]
    else:
        return lambda w: w[n+i] - w[i]

def optimize(alphas, cov, maxRisk):
    n = len(alphas)
    # the free vector x has 2n entries: x[:n] are the weights w,
    # x[n:] are the dummy variables y with y >= |w|
    def _calcRisk(x):
        w = x[:n]
        return np.dot(np.dot(w.T, cov), w)
    def _calcAlpha(x):
        w = x[:n]
        return -np.dot(alphas, w)

    constraints = []
    # create the constraints that make x[n+i] an upper bound on |x[i]|
    for i in range(n):
        # note: appending plain lambdas here doesn't work; Python closures are
        # late-binding, so every lambda would see the final value of i
        # constraints.append({'type': 'ineq', 'fun': lambda w:  w[n+i] - w[i] })
        # constraints.append({'type': 'ineq', 'fun': lambda w:  w[n+i] + w[i] })
        constraints.append({'type': 'ineq', 'fun': makeFinalLambda(i, n, '-') })
        constraints.append({'type': 'ineq', 'fun': makeFinalLambda(i, n, '+') })
    # add neutrality, gross value, and risk constraints
    constraints = constraints + \
        [{'type': 'eq', 'fun': lambda w:  np.sum(w[:n]) },        # sum(w) = 0
         {'type': 'eq', 'fun': lambda w:  np.sum(w[n:]) - 1.0 },  # sum(|w|) = 1
         {'type': 'ineq', 'fun': lambda w: maxRisk*maxRisk - _calcRisk(w)}]

    bounds = tuple((-1, 1) for x in range(n))
    bounds = bounds + tuple((0, 1) for x in range(n))
    # try to choose a nice, feasible starting vector
    initw = n * [0.001 / n]
    initw = initw + [abs(w) + 0.001 for w in initw]
    result = sco.minimize(_calcAlpha, initw, method='SLSQP',
                          bounds=bounds, constraints=constraints)
    return result
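
The commented-out lines fail because Python closures are late-binding: every lambda would see the value of i at call time, i.e. its final loop value. A minimal standalone demonstration of the pitfall and the fix:

funcs = [lambda: i for i in range(3)]
print([f() for f in funcs])                # [2, 2, 2] -- all capture the same i

fixed = [lambda i=i: i for i in range(3)]  # default argument binds i per iteration
print([f() for f in fixed])                # [0, 1, 2]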

The loop above creates two inequality constraints per weight variable to define the absolute-value variables. It's nicer to do this as a vector (per-element) constraint, as follows:

def optimize(alphas, cov, maxRisk):
    n = len(alphas)
    def _calcRisk(x):
        w = x[:n]
        return np.dot(np.dot(w.T, cov), w)
    def _calcAlpha(x):
        w = x[:n]
        return -np.dot(alphas, w)
    # vector constraints: 0 <= x[n+i] - x[i] and 0 <= x[n+i] + x[i],
    # i.e. x[n+i] >= |x[i]| for every i
    absfunpos = lambda x: [x[n+i] - x[i] for i in range(n)]
    absfunneg = lambda x: [x[n+i] + x[i] for i in range(n)]
    constraints = (
            sco.NonlinearConstraint(absfunpos, [0.0]*n, [2.0]*n),
            sco.NonlinearConstraint(absfunneg, [0.0]*n, [2.0]*n),
            {'type': 'eq', 'fun': lambda w:  np.sum(w[:n]) },        # sum(w) = 0
            {'type': 'eq', 'fun': lambda w:  np.sum(w[n:]) - 1.0 },  # sum(|w|) = 1
            {'type': 'ineq', 'fun': lambda w: maxRisk*maxRisk - _calcRisk(w) } )
    bounds = tuple((-1, 1) for x in range(n))
    bounds = bounds + tuple((0, 3) for x in range(n))
    initw = n * [0.01 / n]
    initw = initw + [abs(w) for w in initw]
    result = sco.minimize(_calcAlpha, initw, method='SLSQP',
                          bounds=bounds, constraints=constraints)
    return result
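
A quick sanity check with made-up data (not from the original post): random alphas and a positive semidefinite covariance, then verify that the solution is market neutral with unit gross exposure:

import numpy as np

rng = np.random.default_rng(0)
n = 5
alphas = rng.normal(size=n)
A = rng.normal(size=(n, n))
cov = A @ A.T / n                    # positive semidefinite covariance
res = optimize(alphas, cov, maxRisk=0.5)
w = res.x[:n]
print(res.success)                   # expect True
print(np.sum(w), np.sum(np.abs(w)))  # expect approximately 0.0 and 1.0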
Henry