The SLSQP method in scipy.optimize accepts its constraints as a list of dicts. However, the program I'm writing doesn't have a fixed number of constraints that can simply be hard-coded. The lists below give the indices at which each asset type appears in the "master" list of securities.
us_stock_indices = [0, 4, 12, 15, 19, 23]
intl_stock_indices = [1, 2, 18, 20, 21, 24]
us_bond_indices = [3, 5, 7, 8, 11, 16, 17]
intl_bond_indices = [14, 22]
alternative_indices = [6, 13]
hedge_fund_indices = [9, 10]
The constraints for this problem mainly keep the total percentage of each asset type within a certain window (for example, I only want hedge fund securities to make up 0 to 10 percent of the optimized portfolio):
cons = [{'type': 'ineq', 'fun': lambda x: target_risk_level - .08},
        {'type': 'eq', 'fun': lambda x: np.sum(x) - 1},
        # US stocks: 17% to 37%
        {'type': 'ineq', 'fun': lambda x: np.sum([x[i] for i in us_stock_indices]) - .17},
        {'type': 'ineq', 'fun': lambda x: .37 - np.sum([x[i] for i in us_stock_indices])},
        # International stocks: 8% to 28%
        {'type': 'ineq', 'fun': lambda x: np.sum([x[i] for i in intl_stock_indices]) - .08},
        {'type': 'ineq', 'fun': lambda x: .28 - np.sum([x[i] for i in intl_stock_indices])},
        # US bonds: 35% to 60%
        {'type': 'ineq', 'fun': lambda x: np.sum([x[i] for i in us_bond_indices]) - .35},
        {'type': 'ineq', 'fun': lambda x: .6 - np.sum([x[i] for i in us_bond_indices])},
        # International bonds: 2% to 20%
        {'type': 'ineq', 'fun': lambda x: np.sum([x[i] for i in intl_bond_indices]) - .02},
        {'type': 'ineq', 'fun': lambda x: .2 - np.sum([x[i] for i in intl_bond_indices])},
        # Alternatives: 0% to 10%
        {'type': 'ineq', 'fun': lambda x: np.sum([x[i] for i in alternative_indices]) - 0},
        {'type': 'ineq', 'fun': lambda x: .1 - np.sum([x[i] for i in alternative_indices])},
        # Hedge funds: 0% to 10%
        {'type': 'ineq', 'fun': lambda x: np.sum([x[i] for i in hedge_fund_indices]) - 0},
        {'type': 'ineq', 'fun': lambda x: .1 - np.sum([x[i] for i in hedge_fund_indices])}]
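Since the number of asset classes may vary, the twelve window constraints above could also be generated from a mapping instead of being written out by hand. A minimal sketch (asset_windows and window_cons are names I've made up; the idx=idx / lo=lo / hi=hi default arguments pin each lambda to its own iteration's values):

```python
import numpy as np

# Hypothetical mapping: asset class -> (indices, min weight, max weight),
# mirroring the hand-written constraints above.
asset_windows = {
    'us_stock':    ([0, 4, 12, 15, 19, 23], .17, .37),
    'intl_stock':  ([1, 2, 18, 20, 21, 24], .08, .28),
    'us_bond':     ([3, 5, 7, 8, 11, 16, 17], .35, .60),
    'intl_bond':   ([14, 22], .02, .20),
    'alternative': ([6, 13], .00, .10),
    'hedge_fund':  ([9, 10], .00, .10),
}

window_cons = []
for idx, lo, hi in asset_windows.values():
    # Default arguments freeze idx/lo/hi for this iteration; without them,
    # every lambda would read the values from the *last* iteration.
    window_cons.append({'type': 'ineq',
                        'fun': lambda x, idx=idx, lo=lo: np.sum(x[idx]) - lo})
    window_cons.append({'type': 'ineq',
                        'fun': lambda x, idx=idx, hi=hi: hi - np.sum(x[idx])})
```

These dicts could then be appended onto cons alongside the budget and risk constraints.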
The code below divides the different account types (Trust, IRA, and Roth) by their weighting of the total portfolio. For example, the final weights of the securities in the Trust account (index 0 of total_array) need to sum to 0.41646367.
account_weight_array = [0.41646367, 0.42259312, 0.16094321]
total_array = [['SCHB', 'INTF', 'EMGF', 'SCHR', 'LRGF', 'TFI', 'PGX'], ['CORP', 'SCHZ', 'QSPIX', 'AQMIX', 'LALDX', 'FNDX', 'BKLN', 'PCY', 'SPLV', 'SCHP', 'ZROZ'], ['FNDF', 'PSLDX', 'FNDC', 'SCHF', 'EMLC', 'FNDX', 'FNDE']]
for i in np.arange(3):
    cons.append({'type': 'eq', 'fun': lambda x: np.sum([x[r] for r in np.arange(len(total_array[i]))]) - account_weight_array[i]})
The optimization terminates with the message "Singular matrix C in LSQ subproblem." However, the following code, which in theory should be exactly equivalent, does work:
cons.append({'type': 'eq', 'fun': lambda x: np.sum([x[r] for r in np.arange(len(total_array[0]))]) - account_weight_array[0]})
cons.append({'type': 'eq', 'fun': lambda x: np.sum([x[r] for r in np.arange(len(total_array[1]))]) - account_weight_array[1]})
cons.append({'type': 'eq', 'fun': lambda x: np.sum([x[r] for r in np.arange(len(total_array[2]))]) - account_weight_array[2]})
I don't understand why building the constraints in a for- or while-loop fails while adding them manually on separate lines works; in both cases I'm just appending new elements to a list. Stranger still, when I have the code build the list of constraints ("cons") and print it, I can see that the new objects have been added successfully, yet for some reason the optimization won't recognize them.
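One detail that may be relevant (an assumption on my part, not something I've confirmed is the cause): Python lambdas close over the loop variable itself, not its value at the time the lambda was created, so callables built in a loop can behave differently from ones written out manually. A standalone illustration:

```python
# Lambdas built in a loop all share the same variable i, so after the loop
# finishes, every one of them reads i's final value.
funcs = [lambda x: i for i in range(3)]
print([f(None) for f in funcs])   # -> [2, 2, 2], not [0, 1, 2]

# Binding the current value as a default argument freezes it per iteration.
fixed = [lambda x, i=i: i for i in range(3)]
print([f(None) for f in fixed])   # -> [0, 1, 2]
```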