
I'm trying to minimize the MSE between two functions subject to bounds. curve_fit does this very well, but I want to stop the computation once the MSE between the two functions drops below 0.1. Here is a simple example:

import numpy as np
from scipy import optimize, integrate

# SIR model: y = (susceptible, infected, recovered)
def sir_model(y, x, beta, gamma):
    sus = -beta * y[0] * y[1] / N
    rec = gamma * y[1]
    inf = -(sus + rec)
    return sus, inf, rec

# integrate the ODE system and return the infected compartment
def fit_odeint(x, beta, gamma):
    return integrate.odeint(sir_model, (sus0, inf0, rec0), x, args=(beta, gamma))[:, 1]

population = 1000.0
xdata = np.arange(0, 335, dtype=float)
upper_bounds = np.array([1, 0.7])

N = population
inf0 = 10
sus0 = N - inf0
rec0 = 0.0

# curve to approximate
ydata = fit_odeint(xdata, beta=0.258, gamma=0.612)

popt, pcov = optimize.curve_fit(fit_odeint, xdata, ydata, bounds=(0, upper_bounds))

My real problem is harder than this example, so I want to stop curve_fit at a fixed tolerance (MSE = 0.1). I tried ftol, but it doesn't seem to work.


1 Answer


If I understand you correctly, you want to terminate the optimization as soon as the objective of the underlying least-squares problem is <= 0.1. Unfortunately, neither curve_fit nor least_squares supports callbacks or tolerances on the objective function value. However, scipy.optimize.minimize does, so let's use it.

For this purpose, we have to formulate your curve fitting problem as a minimization problem:

min ||ydata - fit_odeint(xdata, *coeffs)||**2

s.t. lb <= coeffs <= ub

Then, we solve the problem via minimize and use a callback that terminates the algorithm as soon as the objective function value is <= 0.1:

from scipy.optimize import minimize
from numpy.linalg import norm

# the objective function: squared norm of the residuals
def obj(coeffs):
    return norm(ydata - fit_odeint(xdata, *coeffs))**2

# bounds
bnds = [(0, 1), (0, 0.7)]

# initial point
x0 = np.zeros(2)

# xk is the current parameter vector and state is an OptimizeResult object
def my_callback(xk, state):
    if state.fun <= 0.1:
        return True

# call the solver (res.x contains your coefficients)
res = minimize(obj, x0=x0, bounds=bnds, method="trust-constr", callback=my_callback)

This gives me:

 barrier_parameter: 0.1
 barrier_tolerance: 0.1
          cg_niter: 3
      cg_stop_cond: 4
            constr: [array([0.08205584, 0.44233162])]
       constr_nfev: [0]
       constr_nhev: [0]
       constr_njev: [0]
    constr_penalty: 1635226.9491785716
  constr_violation: 0.0
    execution_time: 0.09222197532653809
               fun: 0.007733264340965375
              grad: array([-3.99185467,  4.04015523])
               jac: [<2x2 sparse matrix of type '<class 'numpy.float64'>'
    with 2 stored elements in Compressed Sparse Row format>]
   lagrangian_grad: array([-0.03385145,  0.19847042])
           message: '`callback` function requested termination.'
            method: 'tr_interior_point'
              nfev: 12
              nhev: 0
               nit: 4
             niter: 4
              njev: 4
        optimality: 0.1984704226037759
            status: 3
           success: False
         tr_radius: 7.0
                 v: [array([ 3.95800322, -3.8416848 ])]
                 x: array([0.08205584, 0.44233162])

Note that this callback signature only works for the 'trust-constr' method. For the other methods the signature is callback(xk), the return value is not used to terminate the solver, and you'd need to compute the objective function value on your own inside the callback, as sketched below.
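For example, with a method like 'L-BFGS-B', a common workaround is to raise an exception inside the callback and catch it around the minimize call; since the exception discards the OptimizeResult, the callback also has to store the current iterate itself. A minimal sketch under these assumptions (the Converged exception and the best_x list are illustrative helpers, not part of SciPy's API):

# early termination for methods whose callback signature is callback(xk)
class Converged(Exception):
    pass

best_x = []  # keep the last iterate, since the exception discards the result

def my_lbfgsb_callback(xk):
    best_x.append(xk.copy())
    if obj(xk) <= 0.1:  # evaluate the objective manually
        raise Converged

try:
    res = minimize(obj, x0=x0, bounds=bnds, method="L-BFGS-B",
                   callback=my_lbfgsb_callback)
    coeffs = res.x
except Converged:
    coeffs = best_x[-1]  # the iterate that met the tolerance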

  • The real problem is a 21-parameter calibration. I tried the method 'trust-constr' but it doesn't seem to work, or the computation takes too long. I wanted to use curve_fit because the results are good; do you know another way to stop this method? Can I use `gtol`, `ftol` or `xtol`? – Maxence David Jul 07 '21 at 08:05
  • @MaxenceDavid As already stated in my answer, `curve_fit` doesn't support a tolerance for the objective function value. You can either use another stopping criterion based on `gtol`, `ftol` or `xtol` (see the documentation for their meaning), or use another method for `minimize` and rewrite the callback. Additionally, you can accelerate the solver by providing the exact objective gradient. – joni Jul 07 '21 at 08:26
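For reference, the gradient mentioned in the last comment is passed to minimize via the jac argument. For the ODE-based objective above, the exact gradient would require solving the sensitivity equations, so the sketch below only shows the mechanics on a toy quadratic whose gradient is known in closed form (toy_obj and toy_grad are made up for illustration):

import numpy as np
from scipy.optimize import minimize

def toy_obj(c):
    # simple quadratic stand-in for the real objective
    return np.sum((c - 0.5)**2)

def toy_grad(c):
    # exact gradient of toy_obj
    return 2.0 * (c - 0.5)

res = minimize(toy_obj, x0=np.zeros(2), jac=toy_grad,
               bounds=[(0, 1), (0, 0.7)], method="trust-constr")
print(res.x)  # approximately [0.5, 0.5]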