I am attempting to perform a multivariate optimization using scipy.optimize.minimize with a constraint, but the constraint is not on each individual variable; rather, it is on the sum of all the variables.

Here is the quadratic objective:

$$f(A) = \sum_{(x, y) \in S} (x - y)^T A (x - y)$$

where A is a symmetric m by m matrix (m is the dimensionality of the points x and y).

The derivative of this function is very nice: because the objective is linear in A, A vanishes completely and the gradient is a constant I can precompute. This is the gradient:

$$\nabla_A f = \sum_{(x, y) \in S} (x - y)(x - y)^T$$
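
In code, that precomputation is something like this (a sketch; I'm assuming S is an iterable of (x, y) pairs of 1-D NumPy arrays of length m):

import numpy as np

# The gradient is constant in A: the sum of the outer products
# (x - y)(x - y)^T over all pairs in S.
dAi = np.sum([np.outer(x - y, x - y) for x, y in S], axis=0)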

Here's the Python code I'm using to perform the optimization:

retval = scipy.optimize.minimize(f, A.flatten(),
                                 args=(S, dAi.flatten(), A.shape[0]),
                                 jac=True, method='SLSQP')

where A is the matrix (flattened), S is the set containing pairs of points x and y, and dAi is the precomputed gradient matrix (also flattened). The objective function f looks like this:

def f(A, S, dfA, k):
    # minimize passes A in flattened, so restore its k-by-k shape.
    A = A.reshape((k, k))
    # With jac=True, return (objective, gradient); dfA is the precomputed constant gradient.
    return np.sum([np.dot(x - y, A).dot(x - y) for x, y in S]), dfA

However, this implementation spins off into infinity and never completes, which makes sense in hindsight: the objective is linear in A, so without the constraint it is unbounded below. The trouble is that I haven't found anywhere to specify the summation constraint: the documentation I've seen covers bounds and inequality constraints on each individual variable, not on an aggregate of them (the sketch below shows the kind of constraint I mean).
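
To make the constraint concrete, what I'm after is something like the following sketch, assuming SLSQP can take a constraint function over the whole flattened vector at all (c is a hypothetical target value for the sum):

# Hypothetical: constrain the sum of all entries of the flattened A to equal c.
cons = ({'type': 'eq', 'fun': lambda a: np.sum(a) - c},)

retval = scipy.optimize.minimize(f, A.flatten(),
                                 args=(S, dAi.flatten(), A.shape[0]),
                                 jac=True, method='SLSQP',
                                 constraints=cons)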

Is there a way to do this that I'm missing? This question seemed close but never got a solution. This question involves multivariate optimization but was just an issue of an incorrect derivation, and this question seems analogous to my problem but involves Pandas, which I'm not using.
