
I'm relatively new to Python and SciPy, being a convert from MATLAB. I was doing a quick test of the odeint function in scipy.integrate and came across this potential bug. Consider the following snippet:

from scipy import *
from scipy.integrate import odeint
from scipy.interpolate import interp1d
from pylab import *

# ODE system with forcing function u(t)
def sis(x,t,u):
    return [x[1], u(t)]

# Solution time span
t = linspace(0, 10, 1000)

# Profile for forcing function u(t)
acel = lambda t: 3*(t<2)-3*(t>8)

# Interpolator for accelerator
acel_interp = interp1d(t, acel(t), bounds_error=False, fill_value=0)

# ODE integration with u(t) = acel, a lambda function
x_1 = odeint(sis, [0,0], t, args=(acel,) )            # Correct result
# ODE integration with u(t) = acel_interp, an interpolator
x_2 = odeint(sis, [0,0], t, args=(acel_interp,) )     # Incorrect result

I have made a plot that illustrates the difference between the two results.

What do you make of this, at least to me, unwarranted difference in results? I'm using NumPy 1.5.0 and SciPy 0.8.0 on Python 2.6.6.

Luis E.
    Congrats on the switch from Matlab. I made the move about 18 months ago. It's great to have access to all the other Python capabilities and be able to share your code with people that haven't paid thousands for a Matlab license. – Carl F. Apr 17 '11 at 11:41

1 Answer


This isn't a bug. The issue is that you've set bounds_error to False and filled the out-of-range values with zeros. If you set bounds_error to True in your original code, you'll see that the integrator evaluates the interpolator outside its bounds, so the integration receives zeros at those points and therefore produces a different result than it would if the function were actually evaluated there, as it is with the lambda for x_1.
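Another way to make those out-of-range calls visible, without triggering an error, is to wrap the forcing function so it records every time value at which the solver samples it (a small diagnostic sketch; the name `acel_logged` is introduced here and is not part of the original code):

```python
import numpy as np
from scipy.integrate import odeint

# Same system and forcing profile as in the question
def sis(x, t, u):
    return [x[1], u(t)]

t = np.linspace(0, 10, 1000)
acel = lambda s: 3*(s < 2) - 3*(s > 8)

seen = []                       # every time value at which u(t) is sampled
def acel_logged(s):
    seen.append(float(s))
    return acel(s)

odeint(sis, [0, 0], t, args=(acel_logged,))
print(max(seen))                # exceeds 10: the solver samples past the endpoint
```

With the solver's internal steps overshooting t = 10, any of those samples fell into the region where the original interpolator silently returned its fill value of zero.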

Try the following and you'll see that things operate properly. Basically, I've just extended the interpolation grid (t_interp) to cover a range of values large enough for every point at which the solver evaluates the interpolator.

from scipy import *
from scipy.integrate import odeint
from scipy.interpolate import interp1d
from pylab import *

# ODE system with forcing function u(t)
def sis(x,t,u):
    return [x[1], u(t)]

# Solution time span
t = linspace(0, 10, 1000)
t_interp = linspace(0, 20, 2000)

# Profile for forcing function u(t)
acel = lambda t: 3*(t<2)-3*(t>8)

# Interpolator for accelerator
acel_interp = interp1d(t_interp, acel(t_interp))

# ODE integration with u(t) = acel, a lambda function
x_1 = odeint(sis, [0,0], t, args=(acel,) )            
# ODE integration with u(t) = acel_interp, an interpolator
x_2 = odeint(sis, [0,0], t, args=(acel_interp,) )     
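To confirm both that the solver steps past the requested endpoint and that the extended grid fixes the result, one can pass full_output=True and inspect 'tcur', which the odeint documentation describes as the value of t reached at each time step (a quick check added here, not part of the original answer):

```python
import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import interp1d

def sis(x, t, u):
    return [x[1], u(t)]

t = np.linspace(0, 10, 1000)
t_interp = np.linspace(0, 20, 2000)
acel = lambda s: 3*(s < 2) - 3*(s > 8)
acel_interp = interp1d(t_interp, acel(t_interp))

x_1 = odeint(sis, [0, 0], t, args=(acel,))
x_2, idct = odeint(sis, [0, 0], t, args=(acel_interp,), full_output=True)

print(idct['tcur'].max())        # at least 10: internal steps reach past the span
print(np.abs(x_1 - x_2).max())   # small relative to x: the two solutions now agree
```

The residual difference between x_1 and x_2 comes only from the linear interpolation smoothing the jumps in acel at t = 2 and t = 8, and shrinks as the interpolation grid is refined.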
JoshAdel
  • You beat me by 13 minutes :) When I catch the first exception thrown by `u(t)` when `sis()` gets evaluated, the value for `t` is `17.8811987021`. Yet, `odeint()` is told to integrate from 0 to 10. How does that work? – lafras Apr 13 '11 at 14:32
    Look at `x_2,idct = odeint(sis, [0,0], t, args=(acel_interp,) ,full_output=True); idct['tcur']`. Whatever integration scheme is being used requires evaluations outside of the range of `t`. As the documentation states: "'tcur' - vector with the value of t reached for each time step. (will always be at least as large as the input times)." – JoshAdel Apr 13 '11 at 14:40
  • Thanks for your wisdom! I've accepted your answer as it correctly works. However I still don't get why it didn't work with the original code, since the initial timespan (the `t` array) includes all the relevant changes in `acel(t)` (the last transition occurs at t = 8, and the timespan is set to t = 10). Could you please elaborate further on this? – Luis E. Apr 15 '11 at 10:50
  • @Luis: You'll have to look up the implementation of odeint which is based on the lsoda algorithm from ODEPACK to answer that question exactly. – JoshAdel Apr 15 '11 at 12:42