
Using Python 3.6, I am trying to minimize a function using scipy.optimize.minimize. My minimization problem has two constraints, and I can find a solution. So far, I have the following:

import numpy as np
from scipy.optimize import minimize

array = np.array([[3.0, 0.25, 0.75],
                  [0.1, 0.65, 2.50],
                  [0.80, 2.5, 1.20],
                  [0.0, 0.25, 0.15],
                  [1.2, 2.40, 3.60]])

matrix = np.array([[1.0, 1.5, -2.],
                   [0.5, 3.0, 2.5],
                   [1.0, 0.25, 0.75]])


def fct1(x):
    return -sum(x.dot(array.T))


def fct2(x):
    return x.dot(matrix).dot(x)

x0 = np.ones(3) / 3
tgt = 0.15  # target value for the fct2 equality constraint

cons = ({'type': 'eq', 'fun': lambda x: x.sum() - 1.0},
        {'type': 'eq', 'fun': lambda x: fct2(x) - tgt})

w = minimize(fct1, x0, method='SLSQP', constraints=cons)['x']
res1 = fct1(w)
res2 = fct2(w)

I am now trying to get my optimizer to run faster, as this is only a simplified problem; in the end, my arrays and matrices are way bigger. In a previous question, somebody suggested defining the Jacobian of the function being optimized, so I added the following:

def fct1_deriv(x):
    return -sum(np.ones_like(x).dot(array.T))

w = minimize(fct1, x0, method='SLSQP', jac=fct1_deriv, constraints=cons)['x']

The problem is that I get the following error message when I try to run it:

0-th dimension must be fixed to 4 but got 2
Traceback (most recent call last):
  File "C:\Anaconda2\envs\py36\lib\site-packages\IPython\core\interactiveshell.py", line 2881, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-111-d1b854178c13>", line 1, in <module>
    w = minimize(fct1, x0, method='SLSQP', jac=fct1_deriv, constraints=cons)['x']
  File "C:\Anaconda2\envs\py36\lib\site-packages\scipy\optimize\_minimize.py", line 458, in minimize
    constraints, callback=callback, **options)
  File "C:\Anaconda2\envs\py36\lib\site-packages\scipy\optimize\slsqp.py", line 410, in _minimize_slsqp
    slsqp(m, meq, x, xl, xu, fx, c, g, a, acc, majiter, mode, w, jw)
_slsqp.error: failed in converting 8th argument `g' of _slsqp.slsqp to C/Fortran array

Any ideas on what the problem could be? The link to my previous question is here: What is the fastest way to minimize a function in python?

Eric B

2 Answers


Your function to minimize takes a 3-vector as input, so your Jacobian should correspondingly be a 3-vector as well, each component being the partial derivative with respect to the corresponding input component. SciPy is complaining because it does not know what to do with the single value you are giving it.

In your case, I think this is what you want:

def fct1_deriv(x):
    # one partial derivative per component of x: the column sums of `array`
    return -np.sum(array, axis=0)

Also, if speed is a concern, you probably want to use np.sum, not the builtin sum, in fct1.
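
A quick way to sanity-check a gradient before handing it to SLSQP is scipy.optimize.check_grad, which compares the analytic gradient against a finite-difference estimate. A minimal sketch, assuming fct1, fct1_deriv, and x0 are defined as in the question:

from scipy.optimize import check_grad

# check_grad returns the norm of the difference between the analytic
# gradient and a finite-difference estimate of it at x0; a value near
# zero means the Jacobian is consistent with the function
print(check_grad(fct1, fct1_deriv, x0))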

Jaime
  • I tried to run your solution and it still gives me the same error message. However, I don't think my Jacobian should be a 3-vector. My Jacobian is the derivative of my function to optimize, which yields a single value, not a 3-vector. My Jacobian should therefore also yield a single value as a result. The fact that I am using a vector is only to make my function a bit cleaner. – Eric B May 05 '17 at 02:18
  • Is it possible that, since my function to optimize is linear, the Jacobian is useless because it is no longer a function of my input x? – Eric B May 05 '17 at 02:23

I think I finally found the answer and will post it here so people can use it or correct me:

In an optimization problem of the form y = x^2, the solution can be found by differentiating y with respect to x and setting the derivative equal to zero: 2x = 0.0, which solves to x = 0.0. My feeling is therefore that passing the Jacobian (first derivative) of the function being optimized helps the optimizer find a solution.
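
As a self-contained sketch of that idea (using the quadratic example above, not the original problem), supplying the analytic derivative through jac spares the optimizer from estimating the gradient with finite differences:

import numpy as np
from scipy.optimize import minimize

def f(x):
    return x[0] ** 2

def f_deriv(x):
    # the derivative of x^2 is 2x, returned as a 1-vector to match the shape of x
    return np.array([2.0 * x[0]])

res = minimize(f, np.array([3.0]), method='SLSQP', jac=f_deriv)
print(res.x)  # converges to approximately [0.]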

In the problem I am trying to optimize, my function is of the form y = x. This function can't be optimized (other than through its constraints) by differentiating y with respect to x, since that would lead to the equation 1.0 = 0.0, which has no solution. Giving the Jacobian of my function to the optimizer is therefore probably what is causing the problem.

Eric B