I am getting started with scipy's optimization functions.

I tried to build my code by adapting the solution from Find optimal vector that minimizes function.

I have an array that contains series in columns. I need to multiply each column by a weight so that the sum of the last row of these columns, multiplied by the weights, equals a given number (constraint).

The sum of the series multiplied by the weights gives a new series, from which I extract the max draw-down (mdd), and I want to minimize this mdd.
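In other words, calling the data matrix mat and the weight vector w, the problem is:

    minimize    mdd(mat @ w)                   # max draw-down of the weighted-sum series
    subject to  mat[-1, :] @ w == fixed_value  # constraint nb 1
                np.sum(w) == 1                 # constraint nb 2
                0 <= w[i] <= 1 for each i      # bounds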

I wrote my code as best I could (2 months of Python and 3 hours of scipy) and cannot get past the error raised by the function used to solve the problem.

Here is my code and any help would be much appreciated:

import numpy as np
from scipy.optimize import fmin_slsqp

# based on: https://stackoverflow.com/questions/41145643/find-optimal-vector-that-minimizes-function
# the number of columns (and so of weights) can vary; it should be generic, regardless of the number of columns

def mdd(serie):  # finding the max draw-down of a series (put aside so as not to create add'l problems)
    trough = np.nanargmax(np.fmax.accumulate(serie) - serie)  # end of the draw-down period
    peak = np.nanargmax(serie[:trough])  # start of the period
    return serie[peak] - serie[trough]  # max draw-down

# defining the input data
# mat is an array of 5 columns containing series of independent data
mat = np.array([[1, 0, 0, 1, 1], [2, 0, 5, 3, 4], [3, 2, 4, 3, 7], [4, 1, 3, 3.1, -6], [5, 0, 2, 5, -7], [6, -1, 4, 1, -8]]).astype('float32')
w = np.ndarray(shape=(5,)).astype('float32')  # uninitialized 1D vector, only used below for its length
w0 = np.array([1/5, 1/5, 1/5, 1/5, 1/5]).astype('float32')  # initial weights (all equal as a starting point)
fixed_value = 4.32  # as a result of constraint nb 1
# testing the operations that are going to be used in the minimization
series = np.sum(mat * w0, axis=1)

# objective:
# minimize the mdd of the series by modifying the weights (w)
def test(w, mat):
    series = np.sum(mat * w, axis=1)
    return mdd(series)

# constraints:
def cons1(last, w, fixed_value):  # fixed_value = 4.32
    # the sum of the weights multiplied by the last value of each column must be equal to this fixed_value
    return np.sum(mat[-1, :] * w) - fixed_value

def cons2(w):  # the sum of the weights must be equal to 1
    return np.sum(w) - 1

# solution:
# looking for the optimal set of weights (w) that minimizes the mdd with the two constraints and bounds being respected
# all w values must be between 0 and 1
result = fmin_slsqp(test, w0, f_eqcons=[cons1, cons2], bounds=[(0.0, 1.0)]*len(w), args=(mat, fixed_value, w0), full_output=True)
weights, fW, its, imode, smode = result
print(weights)
– tibibou
1 Answer

You weren't that far off the mark. The biggest problem lies in the mdd function: in case there is no draw-down at all, your function produces an empty slice as an intermediate result, and argmax fails on an empty array.

def mdd(serie):  # finding the max draw-down of a series
    i = np.argmax(np.maximum.accumulate(serie) - serie)  # end (trough) of the period
    if i == 0:
        # serie[:0] would be empty: there is no draw-down at all
        return 0
    j = np.argmax(serie[:i])  # start (peak) of the period
    return serie[j] - serie[i]  # max draw-down
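A quick sanity check with two made-up series (not from the question) shows the early return working:

    print(mdd(np.array([1., 2., 3.])))      # monotonically increasing, no draw-down: prints 0
    print(mdd(np.array([1., 3., 2., 5.])))  # peak 3.0 then trough 2.0: prints 1.0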

In addition, you must make sure that the parameter list is the same for all functions involved (the cost function and the constraints), because fmin_slsqp passes the same args tuple to all of them.

# objective:
# minimize the mdd of the series by modifying the weights (w)
def test(w, mat, fixed_value):  # fixed_value is unused here, but the signature must match
    series = mat @ w
    return mdd(series)

# constraints:
def cons1(w, mat, fixed_value):  # fixed_value = 4.32
    # the sum of the weigths multiplied by the last value of each column must be equal to this fixed_value
    return mat[-1, :] @ w - fixed_value

def cons2(w, mat, fixed_value):  # the sum of the weights must be equal to 1
    return np.sum(w) - 1

# solution:
# looking for the optimal set of weights (w) values that minimize the mdd with the two contraints and bounds being respected
# all w values must be between 0 and 1
result = fmin_slsqp(test, w0, eqcons=[cons1, cons2], bounds=[(0.0, 1.0)]*len(w), args=(mat,fixed_value), full_output=True)
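With full_output=True, fmin_slsqp returns the solution together with diagnostics; a quick check of the constraints at the optimum could look like this (a sketch reusing the names above):

    weights, fW, its, imode, smode = result
    print(weights, smode)        # optimal weights and the solver's exit message
    print(mat[-1, :] @ weights)  # should be close to fixed_value (4.32)
    print(np.sum(weights))       # should be close to 1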

One more remark: You can make the matrix-vector multiplications much leaner with the @-operator.
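For example, these two lines compute exactly the same weighted sum of the columns:

    series_a = np.sum(mat * w0, axis=1)  # element-wise multiply, then sum over columns
    series_b = mat @ w0                  # the same thing as a matrix-vector product
    assert np.allclose(series_a, series_b)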

– profmori4rty
    It might be worth mentioning that numpy functions like `np.argmax` that operate on the optimization variables are not continuously differentiable and thus the same holds for the objective function. This contradicts the mathematical assumptions of the SLSQP algorithm and can lead to really odd results in practice. – joni Aug 16 '22 at 07:42
  • @joni Very true. In such cases, gradient-free optimization methods are typically the best choice. Here I would probably try genetic algorithms (see the sketch after these comments). – profmori4rty Aug 16 '22 at 08:04
  • First of all, thank you, as it works. I learned about the @ operator and the fact that all arguments have to be passed to each function (it was not obvious in the documentation). In real life there is always an mdd (but checking is valuable). – tibibou Aug 16 '22 at 12:27
  • Comment: I am not sure I fully understood how the SLSQP algorithm could lead to odd results. If you can recommend another one that meets the objectives, I'd like to use it (with the final line of code: result = ...) – tibibou Aug 16 '22 at 12:38
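Following up on the comments above, here is a minimal gradient-free sketch with scipy's differential_evolution (not part of the original answer; it assumes the mdd, mat and fixed_value defined earlier). Note that strict equality constraints are hard for population-based methods, so you may need to relax them to a small tolerance:

    import numpy as np
    from scipy.optimize import differential_evolution, LinearConstraint

    # both constraints are linear in w, so they fit in one LinearConstraint
    # (equalities are expressed by setting lower bound == upper bound)
    A = np.vstack([mat[-1, :], np.ones(mat.shape[1])])
    lincon = LinearConstraint(A, [fixed_value, 1.0], [fixed_value, 1.0])

    result = differential_evolution(
        lambda w: mdd(mat @ w),              # only function values are used, no gradients
        bounds=[(0.0, 1.0)] * mat.shape[1],
        constraints=(lincon,),
        seed=42,                             # for reproducibility
    )
    print(result.x, result.fun)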