
I want to print the execution time of the solution to a system of equations at every t + dt, where dt is 0.01 s.

import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
import time
from datetime import timedelta


# function that returns dz/dt
def model(z, t):
    # time.clock() was removed in Python 3.8; use perf_counter() instead
    start_time = time.perf_counter()

    dxdt = z[1]
    dydt = (-1.2*(z[0] + z[1]) + 0.2314*z[0] + 0.6918*z[1]
            - 0.6245*abs(z[0])*z[1] + 0.0095*abs(z[1])*z[1]
            + 0.0214*z[0]*z[0]*z[0])
    dzdt = [dxdt, dydt]

    # print the elapsed time
    elapsed_time_secs = time.perf_counter() - start_time
    msg = "Execution took: %s secs (Wall clock time)" % timedelta(seconds=round(elapsed_time_secs, 5))
    print(msg)

    return dzdt

# initial condition
z0 = [0, 0]

# time points
t = np.linspace(0, 10, num=1000)

# solve ODE
z = odeint(model, z0, t)

Since my time step is 0.01 (10/1000), I expected 1000 lines of output, but far fewer lines get printed, and the number varies with the initial conditions. For [0, 0], about 10 lines are printed; for [1, 2], around 400. I don't understand why this is happening.

user3397

1 Answer


You are doing something strange: you are measuring the execution time of the ODE function itself. Your few arithmetic operations take some 100, at most a few 1000, clock cycles, and your processor runs at roughly 10^9 clock cycles per second, so a single call takes about 1e-6 seconds, which is invisible in the given output format. What you sometimes see instead is interpreter overhead and JIT compilation time.
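To actually measure a cost that small, you have to repeat the evaluation many times and divide, as in the comments below. A minimal sketch (the function `model_rhs` is a stand-in for the derivative formulas from the question, without the print):

```python
import timeit

def model_rhs(z, t):
    # same derivative formulas as in the question, without timing/printing
    dxdt = z[1]
    dydt = (-1.2*(z[0] + z[1]) + 0.2314*z[0] + 0.6918*z[1]
            - 0.6245*abs(z[0])*z[1] + 0.0095*abs(z[1])*z[1]
            + 0.0214*z[0]**3)
    return [dxdt, dydt]

# one call is far below the timer resolution, so time many calls and divide
n = 100_000
total = timeit.timeit(lambda: model_rhs([1.0, 2.0], 0.0), number=n)
print("per-call time: %.3e s" % (total / n))
```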

odeint works with an internal adaptive step size, the requested outputs are interpolated from the internal steps. The internal step size can be smaller than the time step of the input time array, or in the case of boring solutions such as constant solutions, it can be very large. The ODE function will only be called at the internal steps, not at all the requested output times. If you add t and z to the printed values, you will see that.
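For illustration, the print inside the ODE function can be extended to show when the solver actually evaluates it (a sketch; the tuple format mirrors the output that follows):

```python
import numpy as np

def model(z, t):
    dxdt = z[1]
    dydt = (-1.2*(z[0] + z[1]) + 0.2314*z[0] + 0.6918*z[1]
            - 0.6245*abs(z[0])*z[1] + 0.0095*abs(z[1])*z[1]
            + 0.0214*z[0]**3)
    # print the time t at which the solver calls the ODE function,
    # together with the state -- these are the internal steps, not
    # the requested output times
    print((t, np.array(z)))
    return [dxdt, dydt]
```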

(0.0, array([ 0.,  0.]), 'Execution took: 0:00:00.000010 secs (Wall clock time)')
(1.221926641140105e-06, array([ 0.,  0.]), 'Execution took: 0:00:00 secs (Wall clock time)')
(2.44385328228021e-06, array([ 0.,  0.]), 'Execution took: 0:00:00.000010 secs (Wall clock time)')
(0.01222171026468333, array([ 0.,  0.]), 'Execution took: 0:00:00.000010 secs (Wall clock time)')
(0.02444097667608438, array([ 0.,  0.]), 'Execution took: 0:00:00.000010 secs (Wall clock time)')
(0.03666024308748543, array([ 0.,  0.]), 'Execution took: 0:00:00.000010 secs (Wall clock time)')
(0.15885290720149592, array([ 0.,  0.]), 'Execution took: 0:00:00.000010 secs (Wall clock time)')
(0.28104557131550645, array([ 0.,  0.]), 'Execution took: 0:00:00 secs (Wall clock time)')
(0.403238235429517, array([ 0.,  0.]), 'Execution took: 0:00:00 secs (Wall clock time)')
(1.625164876569622, array([ 0.,  0.]), 'Execution took: 0:00:00 secs (Wall clock time)')
(2.8470915177097273, array([ 0.,  0.]), 'Execution took: 0:00:00 secs (Wall clock time)')
(4.069018158849833, array([ 0.,  0.]), 'Execution took: 0:00:00 secs (Wall clock time)')
(16.288284570250884, array([ 0.,  0.]), 'Execution took: 0:00:00 secs (Wall clock time)')
Lutz Lehmann
  • Could you suggest what I should do to get the execution times then? – user3397 May 07 '18 at 14:35
  • What execution times are you looking for? Are you trying to implement your own ersatz profiler? Repeat the derivative formulas computation 1000 times to get something appreciable, I get `00.001740 secs` which is `1.74e-6 sec` for a single evaluation as estimated above. – Lutz Lehmann May 07 '18 at 14:53
  • I want to compute the execution time for every time a numerical value is calculated, that would be an output of 1000 values. Basically for every t+dt , starting from t=0 and step size (dt) of .01 until t=10 sec – user3397 May 07 '18 at 15:12
  • That would happen inside the FORTRAN code of `lsoda` that is behind `odeint`. As I said, these output values are interpolated from the points and derivatives of the internal integration steps. Just measure the full `odeint` time, subtract the time for the derivatives evaluation (remove any print statements as console or file output is slow) and divide by `len(t)`. – Lutz Lehmann May 07 '18 at 15:24
  • But that would only give me an average value, right? I wanted to find the real time execution times – user3397 May 07 '18 at 15:28
  • Then you would have to write your own integrator so that you can add timing instructions at the appropriate points. All I'm saying is that your idea of how the solver works is not what the solver really does. – Lutz Lehmann May 07 '18 at 15:32
  • Oh I really wanted to avoid doing that. I had earlier tried using a function I wrote using the euler method, which didn't work out and then I switched to using odeint – user3397 May 07 '18 at 15:36
  • You can of course time each step `z[i+1,:] = odeint(model, z[i], [t[i], t[i+1]])[-1]`. But this computation is different - internally and also in the errors of the result - from the one where all is computed in one go. – Lutz Lehmann May 07 '18 at 16:37
  • I tried using the Euler method again, and again the outputs come out to be wrong. Is there some way I could use odeint as well as find the execution times for each value calculated ? Or is using my own function the way to go? – user3397 May 08 '18 at 02:52
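The per-step variant from the comment above can be sketched as follows (assuming the same `model` right-hand side and time grid as in the question; as noted, restarting the solver per interval is numerically different from one continuous `odeint` call):

```python
import time
import numpy as np
from scipy.integrate import odeint

def model(z, t):
    dxdt = z[1]
    dydt = (-1.2*(z[0] + z[1]) + 0.2314*z[0] + 0.6918*z[1]
            - 0.6245*abs(z[0])*z[1] + 0.0095*abs(z[1])*z[1]
            + 0.0214*z[0]**3)
    return [dxdt, dydt]

t = np.linspace(0, 10, num=1000)
z = np.zeros((len(t), 2))
z[0] = [1, 2]
for i in range(len(t) - 1):
    start = time.perf_counter()
    # restart the solver on each sub-interval so every step can be timed
    z[i + 1] = odeint(model, z[i], [t[i], t[i + 1]])[-1]
    print("step %4d took %.3e s" % (i, time.perf_counter() - start))
```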