
I have obviously read through the documentation, but I have not been able to find a more detailed description of what is happening under the covers. Specifically, there are a few behaviors that I am very confused about:

General setup

import numpy as np
from scipy.integrate import ode

# Constants in ODE
N = 30                                 # system size (number of oscillators)
K = 0.5                                # coupling strength
w = np.random.normal(np.pi, 0.1, N)    # natural frequencies

# Integration parameters
y0 = np.linspace(0, 2*np.pi, N, endpoint=False)   # initial phases, evenly spread
t0 = 0

# Set up the solver
solver = ode(lambda t, y: w + K/N*np.sum(np.sin(y - y.reshape(N, 1)), axis=1))
solver.set_integrator('vode', method='bdf')
solver.set_initial_value(y0, t0)

Problem 1: solver.integrate(t0) fails

Setting up the integrator and asking for the value at t0 succeeds the first time. Repeating the call returns the same (correct) numbers, but the solver.successful() method now returns False:

solver.integrate(t0)
>>> array([ 0.        ,  0.20943951,  0.41887902, ...,  5.65486678,
            5.86430629,  6.0737458 ])

solver.successful()
>>> True

solver.integrate(t0)
>>> array([ 0.        ,  0.20943951,  0.41887902, ...,  5.65486678,
            5.86430629,  6.0737458 ])

solver.successful()
>>> False

My question is: what is happening in the solver.integrate(t) method that causes it to succeed the first time and fail subsequently, and what does it mean to have an “unsuccessful” integration? Furthermore, why does the integrator fail silently and continue to produce useful-looking output until I explicitly ask it whether it was successful?

Related, is there a way to reset the failed integration, or do I need to re-instantiate the solver from scratch?

Problem 2: solver.integrate(t) immediately returns an answer for almost any value of t

Even though my initial value of y0 is given at t0=0, I can request the value at t=10000 and get the answer immediately. I would expect that the numerical integration over such a large time span should take at least a few seconds (e.g. in Matlab, asking to integrate over 10000 time steps would take several minutes).

For example, re-run the setup from above and execute:

solver.integrate(10000)
>>> array([ 2153.90803383,  2153.63023706,  2153.60964064, ...,  2160.00982959,
            2159.90446056,  2159.82900895])

Is Python really that fast, or is this output total nonsense?

  • As to part 1, when I run your code I get a warning and a message "DVODE-- ISTATE (=I1) .gt. 1 but DVODE not initialized " that apparently comes from somewhere in the Fortran internals. That suggests there could be something more going on than just an unsuccessful state. – BrenBarn Apr 13 '17 at 03:11
  • Incidentally, it might be useful to try to reproduce your problems with a simple ODE whose results could be verified analytically or with simpler numerical methods. – BrenBarn Apr 13 '17 at 03:16
  • [Here](https://stackoverflow.com/questions/40788747/internal-working-of-scipy-integrate-ode?rq=1) is a related post I just found for anyone else coming across similar questions. – dkv Apr 13 '17 at 16:10

2 Answers


Problem 0

Don’t ignore error messages. Yes, ode’s error messages can be cryptic at times, but you still want to avoid them.

Problem 1

As you already integrated up to t0 with the first call of solver.integrate(t0), you are integrating for a time step of 0 with the second call. This throws the cryptic error:

 DVODE--  ISTATE (=I1) .gt. 1 but DVODE not initialized      
      In above message,  I1 =         2
/usr/lib/python3/dist-packages/scipy/integrate/_ode.py:869: UserWarning: vode: Illegal input detected. (See printed message.)
  'Unexpected istate=%s' % istate))
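
As for your side question: to my understanding, you do not need to re-instantiate the solver; calling set_initial_value again re-initialises the underlying integrator. A minimal sketch (assuming the setup from the question):

# re-initialise after a failed call; as far as I know, set_initial_value
# resets the underlying integrator's state as well as t and y
solver.set_initial_value(y0, t0)
solver.integrate(1.0)    # integrate to some t > t0 this time
solver.successful()      # True again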

Problem 2.1

There is a maximum number of (internal) steps that a solver will take in one call without throwing an error. This can be set with the nsteps argument of set_integrator. If you integrate over a large time span at once, nsteps will be exceeded even if nothing is wrong, and the following error message is thrown:

/usr/lib/python3/dist-packages/scipy/integrate/_ode.py:869: UserWarning: vode: Excess work done on this call. (Perhaps wrong MF.)
  'Unexpected istate=%s' % istate))

The integrator then stops at whatever time this happens.
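
For example, to raise the limit (re-using the setup from the question; nsteps is passed through to the underlying vode solver):

# raise the internal step limit so one integrate() call can cover t = 10000
solver.set_integrator('vode', method='bdf', nsteps=10**10)
solver.set_initial_value(y0, t0)
solver.integrate(10000)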

Problem 2.2

If you set nsteps=10**10, the integration runs without problems. It still is pretty fast though (roughly 1 s on my machine). The reason for this is as follows:

For a multi-dimensional system such as yours, there are two main runtime sinks when integrating:

  • Vector and matrix operations within the integrator. In scipy.ode, these are all realised with NumPy operations or ported Fortran or C code. Either way, they run as compiled code without Python overhead and are thus very efficient.

  • Evaluating the derivative (lambda t,y: w + K/N*np.sum( np.sin( y - y.reshape(N,1) ), axis=1) in your case). You realised this with NumPy operations, which again run as compiled code and are very efficient (see the sketch after this list). You may improve this a little with a purely compiled function, but that will gain you at most a small factor. If you used Python lists and loops instead, it would be horribly slow.
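
For reference, here is the lambda from the question written as a named function with the broadcasting spelled out; this is an equivalent sketch, not a speed-up:

def kuramoto_rhs(t, y):
    # phase_diff[i, j] = y[j] - y[i], built via NumPy broadcasting
    phase_diff = y[None, :] - y[:, None]
    # dy_i/dt = w_i + (K/N) * sum_j sin(y_j - y_i)
    return w + K/N * np.sin(phase_diff).sum(axis=1)

solver = ode(kuramoto_rhs)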

Therefore, for your problem, everything relevant is handled with compiled code under the hood, and the integration runs with an efficiency comparable to that of, e.g., a pure C program. I do not know how the two aspects above are handled in Matlab, but if either of them is handled with interpreted instead of compiled loops, that would explain the runtime discrepancy you observe.

Wrzlprmft
  • Strangely, I was in fact not receiving any error messages (of course I would not simply ignore them), which is why I was concerned that the integrate() method was failing silently. What I posted was the exact output I saw from my shell. I was using the IPython shell built into Spyder and this is not the first time I've gotten strange behavior. Perhaps the error output was being redirected somewhere and not displayed. – dkv Apr 13 '17 at 15:44

To the second question: yes, the output might be nonsense. Local errors, be they from discretization or floating-point operations, accumulate at a compounding rate that is roughly the Lipschitz constant of the ODE function. In a first estimate, the Lipschitz constant here is K=0.5. The magnification rate of early errors, that is, their coefficient as part of the global error, can thus be as large as exp(0.5*10000), which is a huge number.
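
To put a number on that bound (a quick back-of-the-envelope check; exp(0.5*10000) overflows a double, so we compare base-10 exponents instead):

import math

lipschitz, t_span = 0.5, 10000              # first-estimate Lipschitz constant, time span
print(lipschitz * t_span / math.log(10))    # ~2171.5, i.e. exp(5000) is roughly 10**2172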

On the other hand, it is not surprising that the integration is fast. Most of the provided methods use step-size adaptation, and with the standard error tolerances this might result in only some tens of internal steps. Reducing the error tolerances will increase the number of internal steps and may change the numerical result drastically.
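
For instance, a sketch of tightening the tolerances (atol and rtol are standard parameters of the vode integrator; nsteps is raised so the call can finish):

# tighter tolerances force more internal steps; compare the result with the
# one obtained under the default tolerances to gauge the discretization error
solver.set_integrator('vode', method='bdf', atol=1e-12, rtol=1e-12, nsteps=10**10)
solver.set_initial_value(y0, t0)
solver.integrate(10000)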

Lutz Lehmann
  • Regarding the first paragraph: How would that be different from integrating with many tiny steps? Also, the dynamics in question does not look like one where you care about the global error. – Wrzlprmft Apr 13 '17 at 15:51
  • As a follow-up, is there a way to force the methods to run with a fixed step size? And, is there any documentation for how the error tolerances affect the adaptive step size? – dkv Apr 13 '17 at 15:51
  • @Wrzlprmft (Sorry if this is naive, I'm relatively new to this subject). Why do you say it doesn't look like one where we'd care about global error? Isn't the ultimate goal of numerical integration in general to minimize global error? – dkv Apr 13 '17 at 16:08
  • @dkv: Please do not ask follow-up questions in the comments. Anyway, why would you want to do this? – Wrzlprmft Apr 13 '17 at 16:13
  • @dkv: Your system looks like a bunch of coupled Kuramoto oscillators. With such systems, you are usually more interested in the qualitative behaviour. In particular, the dynamics could be chaotic, in which case, controlling the global error is a lost cause anyway. Here, all you can control is the local error. – Wrzlprmft Apr 13 '17 at 16:17
  • @Wrzlprmft “Why would you want to do this?” Because I would like to plot the output at fixed intervals, and to verify that a solution using a very small step size (which will presumably be more accurate but slower) corresponds to the adaptive-step-size solution. I would expect that using tens of steps to integrate over 10000 seconds in a potentially stiff system produces wildly incorrect results. And I don't want to implement a fixed-step method from scratch. – dkv Apr 13 '17 at 16:32
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/141668/discussion-between-wrzlprmft-and-dkv). – Wrzlprmft Apr 13 '17 at 16:37
  • The output is interpolated from the values and derivatives at the internal steps. Thus requesting a denser output sequence will not fundamentally change the internal steps. Decreasing the tolerances will increase the number of internal steps. You could compare solutions that differ by a relative distance of 1e-8 in the initial values. Comparing the paths of such solutions will show you up to where the numerical solution is reliable. – Lutz Lehmann Apr 13 '17 at 21:29