
I am using cvxpy to do a simple portfolio optimization.

I implemented the following dummy code

from cvxpy import *
import numpy as np

np.random.seed(1)
n = 10

Sigma = np.random.randn(n, n) 
Sigma = Sigma.T.dot(Sigma)

orig_weight = [0.15,0.25,0.15,0.05,0.20,0,0.1,0,0.1,0]
w = Variable(n)

mu = np.abs(np.random.randn(n, 1))
ret = mu.T*w

lambda_ = Parameter(sign='positive')
lambda_.value = 5

risk = quad_form(w, Sigma)

constraints = [sum_entries(w) == 1, w >= 0, sum_entries(abs(w-orig_weight)) <= 0.750]

prob = Problem(Maximize(ret - lambda_ * risk), constraints)

prob.solve()

print('Solver Status : ', prob.status)

print('Weights opt :', w.value)

I am constraining the portfolio to be fully invested and long only, and to have a turnover of <= 75%. However, I would like to use turnover as a "soft" constraint, in the sense that the solver uses as little turnover as possible but as much as necessary; currently the solver almost fully maxes out the turnover budget.

I basically want something that behaves like the constraint below, but expressed in a way that is convex and doesn't violate the DCP rules

sum_entries(abs(w-orig_weight)) >= 0.05

I would assume this would set a minimum turnover threshold (5% here), with the solver then using as much turnover as necessary to find a feasible solution.
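A quick sanity check on why that inequality can't go straight into cvxpy: the set it describes isn't convex, since two portfolios can each satisfy the lower bound while their average doesn't (made-up two-asset numbers):

```python
import numpy as np

# Made-up two-asset example: w_a and w_b both satisfy the
# turnover lower bound sum(|w - w0|) >= 0.05, but their
# midpoint does not -- the feasible set is not convex,
# so the constraint cannot be written in DCP form.
w0 = np.array([0.5, 0.5])
w_a = np.array([0.55, 0.45])
w_b = np.array([0.45, 0.55])

turnover = lambda w: np.abs(w - w0).sum()

print(turnover(w_a))                 # ~0.1, satisfies >= 0.05
print(turnover(w_b))                 # ~0.1, satisfies >= 0.05
print(turnover(0.5 * (w_a + w_b)))  # 0.0, violates >= 0.05
```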

I tried rewriting my objective function to

prob = Problem(Maximize(lambda_ * ret - risk - penalty * max_elemwise(sum_entries(abs(w - orig_weight)) - 0.9, 0)), constraints)

where penalty is e.g. 2 and my constraint object still looks like

constraints = [sum_entries(w) == 1, w >= 0, sum_entries(abs(w-orig_weight)) <= 0.9]

I have never used soft-constraints and any explanation would be highly appreciated.
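For context on what I'm attempting above: the standard soft-constraint construction replaces a hard limit g(w) <= c with a penalty term gamma * max(g(w) - c, 0) subtracted from the objective, which is zero while the limit holds and grows linearly with any violation. A plain numpy sketch of that hinge penalty, with made-up weights:

```python
import numpy as np

def hinge_penalty(w, w_orig, limit, gamma):
    """Soft-constraint penalty: 0 while turnover <= limit,
    then grows linearly with the amount of violation."""
    turnover = np.abs(np.asarray(w) - np.asarray(w_orig)).sum()
    return gamma * max(turnover - limit, 0.0)

w_orig = np.array([0.15, 0.25, 0.15, 0.05, 0.20, 0.0, 0.1, 0.0, 0.1, 0.0])

# Within the 75% turnover budget: no penalty at all.
w_small = w_orig.copy()
w_small[0], w_small[1] = 0.25, 0.15                 # turnover = 0.2
print(hinge_penalty(w_small, w_orig, 0.75, 2.0))    # 0.0

# Over budget (turnover = 2.0): penalized in proportion to the excess.
w_big = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.5, 0.0, 0.0])
print(hinge_penalty(w_big, w_orig, 0.75, 2.0))      # ~2.5 = 2.0 * (2.0 - 0.75)
```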

EDIT: Intermediate solution

from cvxpy import *
import numpy as np

np.random.seed(1)
n = 10

Sigma = np.random.randn(n, n) 
Sigma = Sigma.T.dot(Sigma)
w = Variable(n)

mu = np.abs(np.random.randn(n, 1))
ret = mu.T*w

risk = quad_form(w, Sigma)

orig_weight = [0.15,0.2,0.2,0.2,0.2,0.05,0.0,0.0,0.0,0.0]

min_weight = [0.35,0.0,0.0,0.0,0.0,0,0.0,0,0.0,0.0]
max_weight = [0.35,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0]

lambda_ret = Parameter(sign='positive')
lambda_ret.value = 5

lambda_risk = Parameter(sign='positive')
lambda_risk.value = 1

penalty = Parameter(sign='positive')
penalty.value = 100

penalized = True

if penalized:
    print('-------------- RELAXED ------------------')
    constraints = [sum_entries(w) == 1, w >= 0, w >= min_weight, w <= max_weight]
    prob = Problem(Maximize(lambda_ret * ret - lambda_risk * risk - penalty * sum_entries(abs(w - orig_weight))), constraints)
else:
    print('--------------   HARD  ------------------')
    constraints = [sum_entries(w) == 1, w >= 0, w >= min_weight, w <= max_weight, sum_entries(abs(w - orig_weight)) <= 0.40]
    prob = Problem(Maximize(lambda_ret * ret - lambda_risk * risk), constraints)

prob.solve()

print('Solver Status : ', prob.status)
print('Weights opt :', w.value)

all_in = []
for i in range(n):
    all_in.append(np.abs(w.value[i, 0] - orig_weight[i]))

print('Turnover : ', sum(all_in))

The above code forces a specific increase in the weight of item[0], here +20% (from 0.15 to 0.35). To keep the sum_entries(w) == 1 constraint satisfied, that increase has to be offset by a -20% decrease elsewhere, so I know a minimum of 40% turnover is needed. If one runs the code with penalized = False, the <= 0.40 bound has to be hardcoded; anything smaller than that will fail. The penalized = True case finds the minimum required turnover of 40% and solves the optimization. What I haven't figured out yet is how to set a minimum turnover threshold in the relaxed case, i.e. do at least 45% (or more if required).
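The 0.40 figure can be checked by hand: the +0.20 forced into item[0] has to be offset by -0.20 spread over the other names, so total turnover is at least 0.40. A quick numpy check with one illustrative rebalance (the offsetting trims are made up):

```python
import numpy as np

orig_weight = np.array([0.15, 0.2, 0.2, 0.2, 0.2, 0.05, 0.0, 0.0, 0.0, 0.0])

# One possible rebalance honoring min_weight[0] = max_weight[0] = 0.35:
# item[0] is forced up by +0.20, offset here by trimming item[1] and
# item[2] by 0.10 each (these particular trims are just an illustration).
new_weight = np.array([0.35, 0.1, 0.1, 0.2, 0.2, 0.05, 0.0, 0.0, 0.0, 0.0])

print(new_weight.sum())                        # still fully invested, ~1.0
print(np.abs(new_weight - orig_weight).sum())  # minimum turnover, ~0.40
```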

I found some explanation of this problem in chapter 4.6, page 37 of the paper linked below.

Boyd Paper

ThatQuantDude
