
I'm trying to investigate the behavior of the following delay differential equation (DDE) using Python:

y''(t) = -y(t)/τ^2 - 2y'(t)/τ - Nd*f(y(t-T))/τ^2,

where f is a cut-off function that is essentially the identity when the absolute value of its argument lies between 1 and 10 and essentially 0 otherwise (see Figure 1), and Nd, τ, and T are constants.

Figure 1: Plot of function f
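
For concreteness, such a cut-off can be sketched numerically as follows. This is only an illustrative NumPy/SciPy counterpart of the symbolic definition appearing in the code below, with Bmin = 1 and Bmax = 10; the integration itself uses the symengine version:

import numpy as np
from scipy.special import erf

Bmin, Bmax = 1., 10.

def f(x):
    # ≈ x for Bmin < |x| < Bmax, ≈ 0 otherwise (smooth erf transitions)
    return x/4 * (1 + erf(x**2 - Bmin**2)) * (1 - erf(x**2 - Bmax**2))

x = np.linspace(-12., 12., 1001)
# plt.plot(x, f(x)) reproduces the shape in Figure 1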

For this I'm using the package JiTCDDE, which provides a reasonable solution to the above equation. However, when I try to add noise to the right-hand side of the equation, I obtain a solution that stabilizes at a non-zero constant after a few oscillations. This is not a mathematical solution of the equation (the only possible constant solution is zero). I don't understand why this problem arises or whether it is possible to solve it.

I reproduce my code below. For the sake of simplicity, I have substituted the noise with a high-frequency cosine, which is introduced into the system of equations as the initial condition of a dummy variable (the cosine could have been introduced directly into the system, but for general noise this doesn't seem possible). To simplify the problem further, I have also removed the term involving the function f, as the problem arises without it as well. Figure 2 shows the plot of the solution produced by the code.

Figure 2: Plot of the solution given by the code

from jitcdde import jitcdde, y, t
import numpy as np
from matplotlib import pyplot as plt
import symengine
from chspy import CubicHermiteSpline


# Definition of the cut-off function f (unused in this simplified version):
Bmin = 1.   # lower cut-off
Bmax = 10.  # upper cut-off
def functionf(x):
    return x/4*(1+symengine.erf(x**2-Bmin**2))*(1-symengine.erf(x**2-Bmax**2))

# Parameters:
τ = 42.9
T = 35.33
Nd = 8.32

# Time grid on which the past (initial conditions) is sampled:
dt = .01  # Time step.
totT = 10000.  # Total time.
Nmax = int(totT / dt)  # Number of time steps.
Vt = np.linspace(0., totT, Nmax)  # Vector of times.

# Definition of the "noise": a cosine stored as the past of a dummy
# variable, which the system reads back with delay totT.
X = np.cos(Vt)
past = CubicHermiteSpline(n=3)  # components: y, y', dummy variable
regular_past = [10., 0.]  # past values of y and y'
for time, datum in zip(Vt, X):
    past.append((
        time-totT,                         # past times lie in [-totT, 0]
        np.hstack((regular_past, datum)),  # state at that time
        np.zeros(3)                        # derivatives (set to zero)
        ))
noise = lambda t: y(2, t-totT)


# Right-hand side of the DDE (the f term is removed for simplicity):
g = [
     y(1),
     -y(0)/τ**2 - 2*y(1)/τ + 0.008*noise(t),
     ]
g.append(0)  # derivative of the dummy variable: it only carries its past


DDE = jitcdde(g)
DDE.add_past_points(past)
DDE.adjust_diff()  # handles the initial discontinuity

# Integration of the DDE:
data = []
for time in np.arange(DDE.t, DDE.t+totT, 1):
    data.append(DDE.integrate(time)[0])

plt.plot(data)
plt.show()

Incidentally, I noticed that even without noise, the solution seems to be discontinuous at t = 0 (y is set equal to zero for negative times), and I don't understand why.

asked by tigro
  • Your example code does not run as it is. When fixing that with a few educated guesses, I get a result similar to yours but with convergence to zero. This result persists if I replace the input method with a straightforward cosine and also if I integrate the respective ODE just with `scipy.integrate.ode`, which strongly suggests that what I observe is the actual dynamics of the system. But then convergence to another value is exactly what your problem is. – Wrzlprmft Jun 24 '20 at 07:39
  • On another note, I recently added [an interface to add inputs](https://github.com/neurophysik/jitcdde/issues/25#issuecomment-593642018) as well as [callbacks](https://github.com/neurophysik/jitcxde_common/issues/1#issuecomment-615290987). Both allow you to work with input noise without fiddling with a dummy dynamical variable and all the errors that come with it. – Wrzlprmft Jun 24 '20 at 07:47
  • Hi @Wrzlprmft, thank you for your comments. Yes, there was a typo in the code, sorry about that; it should be fixed now. Yes, in this simplified version the problem seems to be convergence to the wrong value. Actually, I obtain a very similar plot for the original equation (the one at the beginning of my post), whereas an oscillation would be expected (this can be verified with the help of Mathematica). I just thought that fixing the simpler case would help to solve the original problem. I will try to use the new interface to add the noise directly; I will let you know if it works in this case. – tigro Jun 24 '20 at 08:02
  • @Wrzlprmft by the way do you have any idea why the solution seems to be discontinuous at 0? – tigro Jun 24 '20 at 08:12
  • Note that as your question stands, my first comment fully applies: This seems to be the actual dynamics of the system, as evidenced by an ODE simulation with distinct modules yielding the same result. – Wrzlprmft Jun 24 '20 at 08:26
  • *do you have any idea why the solution seems to be discontinuous at 0?* – Without an example to observe this, it’s hard to tell. JiTCDDE solutions are continuous by construction. There may be an apparent discontinuity of the derivative at the start of the integration but [that’s typical for DDEs](https://jitcdde.readthedocs.io/en/stable/#dealing-with-initial-discontinuities). – Wrzlprmft Jun 24 '20 at 08:28
  • `adjust_diff` seems to solve the simplified problem (the code in the post now works). I also added the definition of the function f to the code. I tried to run the original code (the one for the equation at the beginning of the post) with the noise instead of the cosine and using `adjust_diff` instead of `step_on_discontinuities`. Now I obtain a very reasonable solution, which is also continuous at 0. I obtained discontinuous solutions when using `step_on_discontinuities` in the original code. – tigro Jun 24 '20 at 08:52

1 Answer


As the comments revealed, your problem eventually boiled down to this:

`step_on_discontinuities` assumes delays that are small with respect to the integration time and places its steps at those times where the delayed components point to the start of the integration (0 in your case). This way, initial discontinuities are handled.

However, implementing an input with a delayed dummy variable introduces a large delay into the system, totT in your case. The respective step for `step_on_discontinuities` would be at totT itself, i.e., after the desired integration time. Thus, when you reach `for time in np.arange(DDE.t, DDE.t+totT, 1):` in your code, `DDE.t` is totT. You have therefore made one big step before you actually start integrating and observing, which may look like a discontinuity and lead to weird results. In particular, you do not see the effect of your input, because it has already “ended” at this point. To avoid this, use `adjust_diff` or `integrate_blindly` instead of `step_on_discontinuities`.
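
For illustration, here is a minimal sketch of the simplified system driven through the input interface instead of the dummy variable, which avoids the large artificial delay altogether. This assumes a JiTCDDE version that provides `jitcdde_input` and `input` (see the GitHub issue linked in the comments); parameter values are taken from the question:

import numpy as np
from chspy import CubicHermiteSpline
from jitcdde import jitcdde_input, input, y, t  # note: `input` shadows the builtin

τ = 42.9
totT = 10000.

# Sample the cosine input on [0, totT]; supplying the exact derivative
# −sin(t) keeps the cubic Hermite interpolation accurate.
input_spline = CubicHermiteSpline(n=1)
for time in np.arange(0., totT+1., 1.):
    input_spline.append((time, [np.cos(time)], [-np.sin(time)]))

f = [
    y(1),
    -y(0)/τ**2 - 2*y(1)/τ + 0.008*input(0),
]

DDE = jitcdde_input(f, input_spline)
DDE.constant_past([10., 0.])
DDE.adjust_diff()  # again: not step_on_discontinuities

data = [DDE.integrate(time)[0] for time in np.arange(DDE.t, totT, 1)]

Since there is no delay of size totT in the system anymore, there is no spurious step at totT either.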

answered by Wrzlprmft
  • Thank you so much for finding the problem! I will definitely accept this answer. May I just ask what would be the best way to fix it? Should I set a small maximal step in `step_on_discontinuities`? Or should I just use the new interface to add the noise, in order to avoid the big delay totT? Or would you recommend using `adjust_diff`? – tigro Jun 24 '20 at 09:31
  • See my edit. Even with `jitcdde_input`, you should not use `step_on_discontinuities`. In fact, this question alerted me that JiTCDDE should throw an error in this case, which [it now does](https://github.com/neurophysik/jitcdde/commit/fdd5af8c56c0d7d7abcf60514ba6b27bf10887d2). – Wrzlprmft Jun 24 '20 at 10:19