
I am trying to perform a portfolio optimization that returns the weights which maximize my utility function. I can do this part just fine, including the constraints that the weights sum to one and that the portfolio volatility matches a target risk, and I have bounded each individual weight to [0 <= weight <= 1]. The code looks as follows:

import numpy as np
import scipy.optimize as opt
from math import sqrt
from pandas import Series

def rebalance(PortValue, port_rets, risk_tgt):
    # convert continuously compounded returns to simple returns
    Rt = np.exp(port_rets) - 1
    covar = Rt.cov()

    def fitness(W):
        # portfolio simple returns, converted back to log returns
        port_Rt = np.dot(Rt, W)
        port_rt = np.log(1 + port_Rt)
        # 5% quantile -> conditional VaR, scaled to a 20-day horizon
        q05 = Series(port_rt).quantile(.05)
        cVaR = (port_rt[port_rt < q05] * sqrt(20)).mean() * PortValue
        mean_cVaR = (PortValue * (port_rt.mean() * 20)) / cVaR
        return -1 * mean_cVaR

    def solve_weights(W):
        b_ = [(0.0, 1.0) for i in Rt.columns]
        c_ = ({'type': 'eq', 'fun': lambda W: sum(W) - 1},
              {'type': 'eq', 'fun': lambda W: sqrt(np.dot(W, np.dot(covar, W)) * 252) - risk_tgt})
        optimized = opt.minimize(fitness, W, method='SLSQP',
                                 constraints=c_, bounds=b_)

        if not optimized.success:
            raise BaseException(optimized.message)
        return optimized.x  # return optimized weights

    init_weights = Rt.ix[1].copy()
    init_weights.ix[:] = np.ones(len(Rt.columns)) / len(Rt.columns)

    return solve_weights(init_weights)
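For context, I call it roughly like this (the risk target is just an illustrative number; `port_rets` is my DataFrame of continuously compounded daily returns, one column per ETF):

# illustrative call -- 100k portfolio, 12% annualized risk target (made-up value)
weights = rebalance(PortValue=100000, port_rets=port_rets, risk_tgt=0.12)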

Now I can delve into the problem. I have my weights stored in a MultiIndex pandas Series, such that each asset is an ETF belonging to an asset class. An equally weighted portfolio, printed out, looks like this:

Out[263]:
equity       CZA     0.045455
             IWM     0.045455
             SPY     0.045455
intl_equity  EWA     0.045455
             EWO     0.045455
             IEV     0.045455
bond         IEF     0.045455
             SHY     0.045455
             TLT     0.045455
intl_bond    BWX     0.045455
             BWZ     0.045455
             IGOV    0.045455
commodity    DBA     0.045455
             DBB     0.045455
             DBE     0.045455
pe           ARCC    0.045455
             BX      0.045455
             PSP     0.045455
hf           DXJ     0.045455
             SRV     0.045455
cash         BIL     0.045455
             GSY     0.045455
Name: 2009-05-15 00:00:00, dtype: float64
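(For anyone who wants to reproduce the structure: the index is a two-level MultiIndex of asset class and ticker. A minimal reconstruction of the equally weighted Series above, labels only, would be something like this:)

import pandas as pd

# minimal reconstruction of the weights Series shown above (labels only)
pairs = [('equity', 'CZA'), ('equity', 'IWM'), ('equity', 'SPY'),
         ('intl_equity', 'EWA'), ('intl_equity', 'EWO'), ('intl_equity', 'IEV'),
         ('bond', 'IEF'), ('bond', 'SHY'), ('bond', 'TLT'),
         ('intl_bond', 'BWX'), ('intl_bond', 'BWZ'), ('intl_bond', 'IGOV'),
         ('commodity', 'DBA'), ('commodity', 'DBB'), ('commodity', 'DBE'),
         ('pe', 'ARCC'), ('pe', 'BX'), ('pe', 'PSP'),
         ('hf', 'DXJ'), ('hf', 'SRV'),
         ('cash', 'BIL'), ('cash', 'GSY')]
idx = pd.MultiIndex.from_tuples(pairs, names=['asset_class', 'ticker'])
init_weights = pd.Series(1.0 / len(pairs), index=idx)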

How can I include an additional bounds requirement such that, when I group this data by asset class, the sum of the weights falls within the allocation ranges I have predetermined for each asset class?

So concretely, I want to include an additional boundary such that

init_weights.groupby(level=0, axis=0).sum()
Out[264]:
equity         0.136364
intl_equity    0.136364
bond           0.136364
intl_bond      0.136364
commodity      0.136364
pe             0.136364
hf             0.090909
cash           0.090909
dtype: float64

is within these bounds

[(.08,.51), (.05,.21), (.05,.41), (.05,.41), (.2,.66), (0,.16), (0,.76), (0,.11)]
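In other words, I imagine I need a pair of inequality constraints per asset class, roughly like the untested sketch below (assuming the class order above lines up with the bounds list), but I don't know whether this is the right way to tie the MultiIndex groups to the flat weight vector `W` that scipy passes in:

import numpy as np

# rough sketch (untested): two SLSQP inequality constraints per asset class,
# bounding the summed weight of that class's column positions
class_bounds = [(.08, .51), (.05, .21), (.05, .41), (.05, .41),
                (.2, .66), (0, .16), (0, .76), (0, .11)]
class_names = ['equity', 'intl_equity', 'bond', 'intl_bond',
               'commodity', 'pe', 'hf', 'cash']
labels = init_weights.index.get_level_values(0)
positions = [np.where(labels == name)[0] for name in class_names]

group_cons = []
for pos, (lo, hi) in zip(positions, class_bounds):
    # default arguments bind the loop variables to each lambda
    group_cons.append({'type': 'ineq',
                       'fun': lambda W, pos=pos, lo=lo: W[pos].sum() - lo})
    group_cons.append({'type': 'ineq',
                       'fun': lambda W, pos=pos, hi=hi: hi - W[pos].sum()})
# group_cons would then be appended to the existing c_ tuple passed to opt.minimize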

[UPDATE] I figured I would show my progress with a clumsy pseudo-solution that I am not too happy with, namely because it doesn't solve the weights using the entire data set but rather asset class by asset class. The other issue is that it returns the portfolio return series rather than the weights. But I am sure someone more apt than myself could offer some insight with regard to the groupby function.

So with a mild tweak to my initial code, I have:

PortValue = 100000
model = DataFrame(np.array([.08, .12, .05, .05, .65, 0, 0, .05]),
                  index=port_idx, columns=['strategic'])
model['tactical'] = [(.08, .51), (.05, .21), (.05, .41), (.05, .41),
                     (.2, .66), (0, .16), (0, .76), (0, .11)]


def fitness(W, Rt):
    port_Rt = np.dot(Rt, W)
    port_rt = np.log(1 + port_Rt)
    q05 = Series(port_rt).quantile(.05)
    cVaR = (port_rt[port_rt < q05] * sqrt(20)).mean() * PortValue
    mean_cVaR = (PortValue * (port_rt.mean() * 20)) / cVaR
    return -1 * mean_cVaR

def solve_weights(Rt, b_=None):
    import scipy.optimize as opt
    if b_ is None:
        b_ = [(0.0, 1.0) for i in Rt.columns]
    W = np.ones(len(Rt.columns)) / len(Rt.columns)
    c_ = ({'type': 'eq', 'fun': lambda W: sum(W) - 1})
    optimized = opt.minimize(fitness, W, args=(Rt,), method='SLSQP',
                             constraints=c_, bounds=b_)

    if not optimized.success:
        raise ValueError(optimized.message)
    return optimized.x  # return optimized weights

The following one-liner will return the somewhat-optimized series:

port = np.dot(port_rets.groupby(level=0, axis=1).agg(lambda x: np.dot(x, solve_weights(x))),
              solve_weights(port_rets.groupby(level=0, axis=1).agg(lambda x: np.dot(x, solve_weights(x))),
                            list(model['tactical'].values)))

Series(port, name='portfolio').cumsum().plot()


[Update 2]

The following changes will return the constrained weights, though they are still not optimal: the problem is broken down and optimized on the constituent asset classes, so when the target-risk constraint is applied, only a collapsed version of the initial covariance matrix is available.

def solve_weights(Rt, b_=None):

    W = np.ones(len(Rt.columns)) / len(Rt.columns)
    if b_ is None:
        b_ = [(0.01, 1.0) for i in Rt.columns]
        c_ = ({'type': 'eq', 'fun': lambda W: sum(W) - 1})
    else:
        covar = Rt.cov()
        c_ = ({'type': 'eq', 'fun': lambda W: sum(W) - 1},
              {'type': 'eq', 'fun': lambda W: sqrt(np.dot(W, np.dot(covar, W)) * 252) - risk_tgt})

    optimized = opt.minimize(fitness, W, args=(Rt,), method='SLSQP',
                             constraints=c_, bounds=b_)

    if not optimized.success:
        raise ValueError(optimized.message)

    return optimized.x  # return optimized weights

# (this part runs inside rebalance(), where Rt, port_rets and risk_tgt are defined)
class_cont = Rt.ix[0].copy()
class_cont.ix[:] = np.around(np.hstack(Rt.groupby(axis=1, level=0).apply(solve_weights).values), 3)
scalars = class_cont.groupby(level=0).sum()
scalars.ix[:] = np.around(solve_weights((class_cont * port_rets).groupby(level=0, axis=1).sum(),
                                        list(model['tactical'].values)), 3)

return class_cont.groupby(level=0).transform(lambda x: x * scalars[x.name])
– Brandon Ogle
  • Not sure I understand what's the question. Do you want to check if the value is in the provided bounds and return a boolean True/False array, or you want to represent your values in categories represented by those bounds? – Viktor Kerkez Aug 13 '13 at 20:25
  • I am trying to use scipy.opt to give me the weights back that conform to individually being bound to (0,1), then also have the grouped class weights to be bound by the limits I showed above. – Brandon Ogle Aug 13 '13 at 20:32
  • For the first part of the question: aren't the weights already bounded? You provided the `bounds` parameter correctly to the `minimize` method, so it should return a list of bounded weights? – Viktor Kerkez Aug 13 '13 at 20:48
  • yes all the weights are bounded between (0,1). Now I want to add the second boundary – Brandon Ogle Aug 13 '13 at 21:03
  • for purposes of illustration, suppose I were to do this with constraints. I would have, in pseudocode: c_ = ({'type':'eq', 'fun': lambda W: sum(W) - 1}, {'type':'eq', 'fun': lambda W: .08 <= sum(equity weights) <= .51}, {'type':'eq', 'fun': lambda W: .05 <= sum(intl_equity weights) <= .21}, etc..... – Brandon Ogle Aug 13 '13 at 21:25
  • This isn't germane to your question (really at all), but it's generally not a good idea to raise `BaseException` or even `Exception`, better to either subclass it with a more informative name or use a builtin. In your case you could probably use `ValueError`. – Phillip Cloud Aug 13 '13 at 21:51

2 Answers


Not totally sure I understand, but I think you can add the following as another constraint:

def w_opt(W):
    def filterer(x):
        v = x.range.values
        tp = v[0]
        lower, upper = tp
        return lower <= x[column_name].sum() <= upper
    return not W.groupby(level=0, axis=0).filter(filterer).empty

c_ = {'type': 'eq', 'fun': w_opt}  # add this to your other constraints

where `x.range` is the interval (tuple) repeated K[i] times, K[i] being the number of rows in the ith level's group; `column_name` in your case happens to be a date.

This says: constrain the weights such that the sum of the weights in the ith group lies within the associated interval.
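Concretely, reusing the constraint tuple from your `solve_weights`, it would become something like:

c_ = ({'type': 'eq', 'fun': lambda W: sum(W) - 1},
      {'type': 'eq', 'fun': lambda W: sqrt(np.dot(W, np.dot(covar, W)) * 252) - risk_tgt},
      {'type': 'eq', 'fun': w_opt})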

To map each of the level names to an interval, do this:

intervals = [(.08, .51), (.05, .21), (.05, .41), (.05, .41), (.2, .66), (0, .16), (0, .76), (0, .11)]
names = ['equity', 'intl_equity', 'bond', 'intl_bond', 'commodity', 'pe', 'hf', 'cash']

mapper = Series(intervals, index=names)
fully_mapped = mapper[init_weights.index.get_level_values(0)]
original_dataset['range'] = fully_mapped.values
– Phillip Cloud
  • I am getting the error AttributeError: 'SeriesGroupBy' object has no attribute 'filter' I am on pandas 0.11 – Brandon Ogle Aug 13 '13 at 21:41
  • The `filter` method on `GroupBy` objects was introduced by commit 2a2cfb83582ece7690c94457cc5a6f0835049f5c, to use it you need to upgrade to pandas 0.12. I'll see if I can work out an equivalent solution without using `filter` – Phillip Cloud Aug 13 '13 at 21:43
  • But if I understand your code correctly it tests to see if the grouped level is within my bounds, which is exactly what I want. I am not familiar with 'filter' however and I am not sure where I am to pass in my bounds? As you aren't passing it in here `W.groupby(level=0, axis=0).filter(filterer).empty`? – Brandon Ogle Aug 13 '13 at 21:44
  • Add the dictionary, `c_` that I defined above to your `tuple` of `dict`s that you define above in `solve_weights`. – Phillip Cloud Aug 13 '13 at 21:46
  • Oh I see the issue, I misinterpreted your last comment. What I did was assume that you have a column of `tuple`s called `range` that contains the interval for that particular group. Alternatively, and probably better is to have two columns, say, `lower` and `upper` that are the lower and upper endpoints of your interval, respectively. – Phillip Cloud Aug 13 '13 at 21:49
  • Sorry, I should have been a little less vague. After adding the new equality function to `c_`, where do I pass in these `[(.08,.51), (.05,.21), (.05,.41), (.05,.41), (.2,.66), (0,.16), (0,.76), (0,.11)]`. Or does your function just get that from the namespace? I am going to read up on the filter method and look into updating – Brandon Ogle Aug 13 '13 at 21:52
  • Map each of your level names to an interval and add that to your original dataset as a column named `range`. Does that make sense? I'll add something to my answer to illustrate this. – Phillip Cloud Aug 13 '13 at 22:00
  • I could resolve this *much* faster if you posted a small subset of your data, is that possible? – Phillip Cloud Aug 13 '13 at 22:33
  • if you give me an email I will send you the .pynb – Brandon Ogle Aug 13 '13 at 22:38
  • I have made some progress in the right direction, take a look at the update in my original post – Brandon Ogle Aug 14 '13 at 18:39

After much time this seems to be the only solution that fits...

def solve_weights(Rt, b_=None):

    W = np.ones(len(Rt.columns)) / len(Rt.columns)
    if b_ is None:
        # per-asset optimization: only require the weights to sum to one
        b_ = [(0.01, 1.0) for i in Rt.columns]
        c_ = ({'type': 'eq', 'fun': lambda W: sum(W) - 1})
    else:
        # class-level optimization: add the annualized target-risk constraint
        covar = Rt.cov()
        c_ = ({'type': 'eq', 'fun': lambda W: sum(W) - 1},
              {'type': 'eq', 'fun': lambda W: sqrt(np.dot(W, np.dot(covar, W)) * 252) - risk_tgt})

    optimized = opt.minimize(fitness, W, args=(Rt,), method='SLSQP',
                             constraints=c_, bounds=b_)

    if not optimized.success:
        raise ValueError(optimized.message)

    return optimized.x  # return optimized weights

# solve weights within each asset class, then scale each class by the
# class-level weights solved against the tactical bounds
class_cont = Rt.ix[0].copy()
class_cont.ix[:] = np.around(np.hstack(Rt.groupby(axis=1, level=0).apply(solve_weights).values), 3)
scalars = class_cont.groupby(level=0).sum()
scalars.ix[:] = np.around(solve_weights((class_cont * port_rets).groupby(level=0, axis=1).sum(),
                                        list(model['tactical'].values)), 3)

class_cont.groupby(level=0).transform(lambda x: x * scalars[x.name])
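As a rough sanity check (untested sketch, variable names mine), the grouped sums of the resulting weights can be compared against the tactical ranges stored in `model`:

# check that each asset class's total weight lands inside its tactical range
final_weights = class_cont.groupby(level=0).transform(lambda x: x * scalars[x.name])
grouped = final_weights.groupby(level=0).sum()
within = dict((name, model['tactical'][name][0] <= grouped[name] <= model['tactical'][name][1])
              for name in grouped.index)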
– Brandon Ogle