I'm currently exploring the Lorenz system with Python and R and have noticed subtle differences between the ODE packages. Python's odeint (scipy) and R's ode (deSolve) both say they use lsoda under the hood, yet running lsoda in both gives noticeably different results. I have tried method = "ode45" for the ode function in R to get something closer to Python, but am wondering why I can't get exactly the same results:

import numpy as np
from scipy.integrate import odeint

# Lorenz derivatives with sigma = 10, rho = 28, beta = 8/3
def lorenz(x, t):
    return [
        10 * (x[1] - x[0]),
        x[0] * (28 - x[2]) - x[1],
        x[0] * x[1] - 8 / 3 * x[2],
    ]


dt = 0.001
t_train = np.arange(0, 0.1, dt)  # 100 points: 0, 0.001, ..., 0.099
x0_train = [-8, 7, 27]           # initial state
x_train = odeint(lorenz, x0_train, t_train)


x_train[0:5, :]
array([[-8.        ,  7.        , 27.        ],
       [-7.85082366,  6.98457874, 26.87275343],
       [-7.70328919,  6.96834721, 26.74700467],
       [-7.55738803,  6.95135316, 26.62273959],
       [-7.41311133,  6.93364263, 26.49994363]])
library(deSolve)

n <- 100  # number of output points
# Lorenz parameters: sigma, rho, beta
parameters <- c(s = 10, r = 28, b = 8 / 3)
state <- c(X = -8, Y = 7, Z = 27)  # initial state
# Lorenz function used to generate the Lorenz derivatives
lorenz <- function(t, state, parameters) {
  with(as.list(c(state, parameters)), {
    dx <- s * (Y - X)
    dy <- X * (r - Z) - Y
    dz <- X * Y - b * Z
    list(c(dx, dy, dz))
  })
}
times <- seq(0, (n - 1) * 0.001, by = 0.001)
# ode45 used to integrate the Lorenz system
out <- ode(y = state, times = times,
           func = lorenz, parms = parameters, method = "ode45")[, -1]
out[1:nrow(out), , drop = FALSE]
             X        Y        Z
 [1,] -8.00000000 7.000000 27.00000
 [2,] -7.85082366 6.984579 26.87275
 [3,] -7.70328918 6.968347 26.74700
 [4,] -7.55738803 6.951353 26.62274
 [5,] -7.41311133 6.933643 26.49994

I had to call out[1:nrow(out), , drop = FALSE] to get the full decimal places; head appears to round the printed values to about five decimal places. I understand the difference is incredibly subtle, but I was hoping to get exactly the same results. Does anyone know if this is something more than a rounding issue between R and Python?

Thanks in advance.

AW27

1 Answer

All numerical methods that solve ODEs are approximations that work up to a given precision. The precision of the deSolve solvers is set to atol=1e-6, rtol=1e-6 by default, where atol is absolute and rtol is relative tolerance. Furthermore, ode45 has some additional parameters to fine-tune the automatic step size algorithm, and it can make use of interpolation.

To tighten the tolerances (i.e. to increase precision), set for example:

out <- ode(y = state, times = times, func = lorenz, 
  parms = parameters, method = "ode45", atol = 1e-10, rtol = 1e-10)

Finally, I would recommend using an ODEPACK solver such as lsoda or vode instead of the classical ode45, as shown in the sketch below. More details can be found in the ode and lsoda help pages, and for ode45 in the ?rkMethod help page.
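A minimal sketch of that switch, reusing the state, times, lorenz and parameters objects defined in the question (the variable name out_lsoda is mine):

# lsoda chooses its internal step size automatically and switches
# between stiff and non-stiff methods as needed
out_lsoda <- ode(y = state, times = times, func = lorenz,
                 parms = parameters, method = "lsoda",
                 atol = 1e-10, rtol = 1e-10)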

Similar parameters (rtol and atol) also exist for odeint; its defaults are about 1.49012e-8.
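For example, a minimal sketch reusing the lorenz, x0_train and t_train objects from the question (the variable name x_train_tight is mine):

# tighten odeint's tolerances from their ~1.49012e-8 defaults
# to match the values passed to R's ode() above
x_train_tight = odeint(lorenz, x0_train, t_train, rtol=1e-10, atol=1e-10)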

A final note: as Lorenz is a chaotic system, local errors will lead to diverging behaviour due to error magnification. This is an essential feature of chaotic systems, which are in theory unpredictable in the long run. So whatever you do, and however much precision you set, simulated trajectories are not "the real ones"; they just show a similar pattern.
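To see this error magnification directly, here is a small illustrative sketch (not from the original post) that reuses the Python lorenz function from the question and integrates two trajectories whose initial states differ by only 1e-10:

import numpy as np
from scipy.integrate import odeint

t = np.arange(0, 30, 0.01)
a = odeint(lorenz, [-8, 7, 27], t)
b = odeint(lorenz, [-8 + 1e-10, 7, 27], t)  # X perturbed by 1e-10

# the separation grows roughly exponentially until it saturates
# at the size of the attractor
for ti in (0, 5, 10, 15, 20, 25):
    i = int(ti / 0.01)
    print(f"t = {ti:4.1f}  distance = {np.linalg.norm(a[i] - b[i]):.3e}")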

tpetzoldt
  • Thanks, I was able to take your advice and use `lsoda` for both packages and reproduce the same results for the `X` variable by adjusting the tolerance figures to what `odeint` has as its default. However, why do `Y` and `Z` still give different results? – AW27 May 20 '21 at 16:49
  • is `odeint` using the same tolerances as `ode()` ? – Ben Bolker May 20 '21 at 16:51
  • I changed the tolerances based on the `odeint` tolerances which are `1.49012e-8` – AW27 May 20 '21 at 16:52
  • The implementation of `ode45` differs between Python and R; they just use the same Runge-Kutta pair (Dormand-Prince 4(5)). In contrast, `lsoda` is the odepack algorithm, but it was also somewhat modified for R to allow additional features. Not sure if this changes the "exact" tolerance, but this is in general not of interest, as all solvers are approximations anyway. The good news is that all solvers in deSolve underwent heavy benchmarking, see examples in package **deTestSet**. – tpetzoldt May 20 '21 at 16:56
  • But: note the edit regarding chaotic systems! – tpetzoldt May 20 '21 at 16:56
  • Thanks, so the issue was that the tolerances weren't the same, and the rest could be because of ever so subtle differences in the results, which could occur because of rounding or solver differences between `R` and `Python`? – AW27 May 20 '21 at 16:59
  • 2
    Yes, due to rounding, subtle differences in the algorithm and (most important) error magnification by the chaotic system. – tpetzoldt May 20 '21 at 17:01
  • Another comment: R's **deSolve** uses "fail-safe" defaults for the maximal internal time step `hmax`, derived from the external step size in `times`. This decision was made to avoid pitfalls with non-autonomous systems that have external forcings. As a side effect, deSolve is sometimes more precise than requested by atol and rtol, at the cost of a smaller internal stepsize. This can be overwritten by setting `hmax` (and `hini`) to values larger than the step size given in `times`. – tpetzoldt May 20 '21 at 21:53
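Regarding that last comment, a minimal sketch of overriding those step-size defaults (the values 0.05 and 0.01 are arbitrary illustrations; hmax and hini are documented arguments of deSolve's solvers):

# allow internal steps larger than the 0.001 output interval in `times`
out2 <- ode(y = state, times = times, func = lorenz, parms = parameters,
            method = "lsoda", atol = 1e-10, rtol = 1e-10,
            hmax = 0.05, hini = 0.01)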