
I would like to try different strategies to fit my data to a given model, with bounds and an analytical Jacobian to improve the results. I can run optimize.least_squares successfully, e.g.:

sol = optimize.least_squares(cost_function, x0=x0, args=(data, sys_path, configfile), loss='soft_l1', f_scale=0.1, max_nfev=500,  jac=grad_V , bounds = param_bounds)

but I get errors when I try optimize.minimize with any of the available methods, using the same parameters and data (see below):

  • method='L-BFGS-B':
sol = optimize.minimize(cost_function, x0=x0, args=(data, sys_path, configfile), method='L-BFGS-B', jac=grad_V, bounds = param_bounds)

[..] line 594, in fitting_model
    sol = optimize.minimize(cost_function_vector, x0=x0, args=(data, sys_path, configfile), method='L-BFGS-B', jac=grad_V, bounds = param_bounds)
  File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/_minimize.py", line 600, in minimize
    callback=callback, **options)
  File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/lbfgsb.py", line 267, in _minimize_lbfgsb
    raise ValueError('length of x0 != length of bounds')
ValueError: length of x0 != length of bounds
  • If I transpose the bounds, e.g. np.transpose(param_bounds):
sol = optimize.minimize(cost_function, x0=x0, args=(data, sys_path, configfile), method='L-BFGS-B', jac=grad_V, bounds = np.transpose(param_bounds))

[...], line 594, in fitting_model
    sol = optimize.minimize(cost_function_vector, x0=x0, args=(data, sys_path, configfile), method='L-BFGS-B', jac=grad_V, bounds = np.transpose(param_bounds))
  File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/_minimize.py", line 600, in minimize
    callback=callback, **options)
  File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scipy/optimize/lbfgsb.py", line 328, in _minimize_lbfgsb
    isave, dsave, maxls)
ValueError: too many axes: 2 (effrank=2), expected rank=1

Again, it works with least_squares, so I guess the problem is in how I pass the arguments (why do they differ from least_squares??) rather than in the code itself. However, even after following the documentation and the examples I found, I cannot see where the error is.
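For reference, the two solvers really do use different bounds conventions: least_squares takes a 2-tuple (lb, ub) of arrays, while minimize expects one (min, max) pair per parameter, which explains the first ValueError for a (2, 5) array. A minimal sketch of converting between the two layouts (using the bounds values from the question):

```python
import numpy as np

# Bounds layout from the question: shape (2, 5),
# row 0 = lower bounds, row 1 = upper bounds.
param_bounds = np.array([[0.0e+00, 1.0e-06, -1.0e+04, -1.0e+04, 0.001],
                         [1.0e+05, 1.0e-02,  1.0e+04,  1.0e+04, 1.0]])

# least_squares: bounds=(lb, ub), two arrays of length n.
ls_bounds = (param_bounds[0], param_bounds[1])

# minimize: a sequence of n (min, max) pairs, one per parameter.
min_bounds = list(zip(param_bounds[0], param_bounds[1]))
```

In recent SciPy versions a scipy.optimize.Bounds(lb, ub) object is also accepted by minimize and sidesteps the layout question entirely.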

I would really appreciate any help on this. Thanks in advance.


I was going to include the code, sample data and so on but, to keep this from getting too long, I think the following information is enough. Let me know if I'm wrong and something else is needed.

  • There are 5 params: S0, d, theta, phi, f. These and other required parameters (e.g. the arrays bvals and bvecs) are read from configfile, and other required functions from sys_path (both passed in args).

  • The cost_function returns a vector F (data, Sj and therefore F are arrays of shape (65,)). Basically, it is something like:

def cost_function(params, data, sys_path, configfile):
    # params holds the 5 fitted values (s0, d, theta, phi, f); the
    # optimizer passes it first, followed by the extra args.
    [...]
    v = get_v(theta, phi)  # vector v depends on theta and phi
    Sj = s0 * ((1 - f) * np.exp(-bvals * d)
               + f * np.exp(-bvals * d * np.transpose(np.square(np.transpose(bvecs) * v))))
    F = Sj - data
    return F
  • x0 (the initialization for the 5 params) is provided as an array of shape (5,). E.g.: array([ 1.46732834e+02, 1.00000000e-03, 6.81814424e-01, -2.07406186e-01, 1.27062049e+01])

  • Lower and upper bounds are provided as param_bounds with shape (2, 5). E.g.:

array([[ 0.0e+00,  1.0e-06, -1.0e+04, -1.0e+04,  0.001],
       [ 1.0e+05,  1.0e-02,  1.0e+04,  1.0e+04,  1]])
  • grad_V is a user-defined Jacobian function that returns an array of shape (65, 5) and, like cost_function, takes args=(data, sys_path, configfile) as input arguments.
JP Manzano
  • You should include an example of the design vector and the bounds that cause the problem. Are you giving them in the format [x1, x2, ..., xn] and [(l1, u1), (l2, u2), ..., (ln, un)], where l is the lower bound and u is the upper bound of a design variable x? – onodip Jan 08 '20 at 22:19
  • Thanks @onodip. I have found a possible solution. I was returning a vector as the result of the `cost_function` (and likewise a vector for each differentiation in the Jacobian). However, if I take the sum of the vector (I thought this was done internally by `optimize`) and return a single value rather than a vector, it works perfectly with the different methods. If someone can confirm this is correct, I could update the question and mark it as solved. – JP Manzano Jan 09 '20 at 13:55
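The fix described in the last comment can be sketched as follows: minimize needs a scalar objective and a gradient of shape (5,), whereas least_squares consumes the residual vector F and the (65, 5) Jacobian directly. Summing the squared residuals and applying the chain rule (gradient = Jᵀ F) bridges the two. The linear residual below is a hypothetical stand-in for the question's cost_function and grad_V (A, b and the function names are not from the original):

```python
import numpy as np
from scipy import optimize

# Toy linear residual standing in for the question's cost_function:
# F(x) = A @ x - b, shape (65,), with a (65, 5) Jacobian like grad_V.
rng = np.random.default_rng(0)
A = rng.standard_normal((65, 5))
b = rng.standard_normal(65)

def residual(x):
    return A @ x - b                  # vector of residuals, shape (65,)

def residual_jac(x):
    return A                          # Jacobian, shape (65, 5)

# least_squares uses the vector/Jacobian pair directly:
sol_ls = optimize.least_squares(residual, x0=np.zeros(5), jac=residual_jac)

# minimize needs a scalar cost and a (5,)-shaped gradient:
def scalar_cost(x):
    F = residual(x)
    return 0.5 * F @ F                # 0.5 * sum(F**2), a scalar

def scalar_grad(x):
    return residual_jac(x).T @ residual(x)   # chain rule: J^T F, shape (5,)

sol_min = optimize.minimize(scalar_cost, x0=np.zeros(5),
                            jac=scalar_grad, method='L-BFGS-B')
```

Both solvers should then agree on the minimizer, and the transposed (5, 2) bounds layout from the second attempt would also be accepted by minimize once the objective and gradient have the right shapes.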

0 Answers