
Consider the following leapfrog scheme used to discretize a vectorial wave equation with given initial conditions and periodic boundary conditions. I have implemented the scheme, and now I want to run numerical convergence tests to show that it is second order in space and time.

I'm mainly struggling with two points here:

  1. I'm not 100% sure if I implemented the scheme correctly. I really wanted to use slicing because it is so much faster than using loops.
  2. I don't really know how to get the right error plot, because I'm not sure which norm to use. In the 1D examples I have found, the L2 norm was always used.
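To make point 2 concrete, this is the discrete approximation of the continuous L2 norm I am currently using: the discrete 2-norm scaled by `sqrt(dx)` (a small sketch; `err` stands for the pointwise error on a uniform grid with spacing `dx`):

```python
import numpy as np

# Discrete approximation of the continuous L2 norm on a uniform grid:
# ||e||_{L2} ≈ sqrt(dx) * ||e||_2   (rectangle-rule quadrature)
def l2_norm(err, dx):
    return np.sqrt(dx) * np.linalg.norm(err, 2)

# sanity check: the constant function 1 on [0, 1) has L2 norm 1
print(l2_norm(np.ones(100), 0.01))  # ≈ 1.0
```

The `sqrt(dx)` factor makes the value comparable across grid resolutions; without it the plain 2-norm grows with the number of grid points.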
import numpy as np
import matplotlib.pyplot as plt


# Initial conditions
def p0(x):
    return np.cos(2 * np.pi * x)


def u0(x):
    return -np.cos(2 * np.pi * x)


# exact solution
def p_exact(x, t):
    # return np.cos(2 * np.pi * (x + t))
    return p0(x + t)


def u_exact(x, t):
    # return -np.cos(2 * np.pi * (x + t))
    return u0(x + t)


# function for doing one time step, considering the periodic boundary conditions
def leapfrog_step(p, u):
    p[1:] += CFL * (u[:-1] - u[1:])
    p[0] = p[-1]
    u[:-1] += CFL * (p[:-1] - p[1:])
    u[-1] = u[0]
    return p, u


# Parameters
CFL = 0.3
LX = 1  # space length
NX = 100  # number of space steps

T = 2  # end time

NN = np.array(range(50, 1000, 50))  # list of discretizations
Ep = []
Eu = []
for NX in NN:
    print(NX)
    errorsp = []
    errorsu = []
    x = np.linspace(0, LX, NX)    # space grid
    dx = x[1] - x[0]  # spatial step
    dt = CFL * dx  # time step
    t = np.arange(0, T, dt)  # time grid

    # TEST

    # time loop
    for time in t:
        if time == 0:
            p = p0(x)
            u = u0(x)
        else:
            p, u = leapfrog_step(p, u)
            errorsp.append(np.linalg.norm((p - p_exact(x, time)), 2))
            errorsu.append(np.linalg.norm((u - u_exact(x, time)), 2))
    errorsp = np.array(errorsp) * dx ** (1 / 2)
    errorsu = np.array(errorsu) * dx ** (1 / 2)
    Ep.append(errorsp[-1])
    Eu.append(errorsu[-1])

# plot the error
plt.figure(figsize=(8, 5))
plt.xlabel("$Nx$")
plt.ylabel(r'$\Vert p-\bar{p}\Vert_{L_2}$')
plt.loglog(NN, 15 / NN ** 2, "green", label=r'$O(\Delta x^{2})$')
plt.loglog(NN, Ep, "o", label=r'$E_p$')
plt.loglog(NN, Eu, "o", label=r'$E_u$')
plt.legend()
plt.show()


I would really appreciate it if someone could quickly check the implementation of the scheme and give an indication of how to get the error plot.

jerremaier
  • Your initialization is wrong, you need to take the initial values in the same leap-frogged scheme that you also compute the values in. Otherwise the order reduces to 1. (Perhaps you should also not switch the argument order of p and u. This is not an error here but might make debugging more difficult in evolved versions of the code.) – Lutz Lehmann Nov 03 '20 at 20:20
  • I realized that I made a mistake in the implementation of the scheme. With these changes I get the second-order error plots (without changing the initialization). But I don't really understand what you mean by your comment; if you could explain further, that would be really helpful. – jerremaier Nov 04 '20 at 21:46
  • @LutzLehmann I tried to understand your comment and what I think you meant is to initialize like this `p = p0(x)` `u = np.zeros_like(p)` `u[:-1] += CFL * (p[:-1] - p[1:])` `u[-1] = u[0]`. But then again I don't get the right errors plot with the code from above. – jerremaier Nov 05 '20 at 01:01

1 Answer


Apart from the initialization, I see no errors in your code.

As to the initialization, consider the first step. There you should compute, per the method description, approximations for p(dt,j*dx) from the values of p(0,j*dx) and u(0.5*dt, (j+0.5)*dx). This means that you need to initialize at time==0

u = u_exact(x + 0.5*dx, 0.5*dt)

and you also need to compare the solution obtained at time `time` against u_exact(x+0.5*dx, time+0.5*dt).
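Put together, a minimal sketch of one run with this staggered initialization, reusing the question's update rule and parameters (this assumes the exact solution is available, so the initialization is exact up to rounding):

```python
import numpy as np

CFL, LX, NX, T = 0.3, 1, 100, 2

def p_exact(x, t): return np.cos(2 * np.pi * (x + t))
def u_exact(x, t): return -np.cos(2 * np.pi * (x + t))

x = np.linspace(0, LX, NX + 1)   # dx = LX/NX, endpoints are the periodic pair
dx = LX / NX
dt = CFL * dx

# p lives at integer time levels on x, u at half-integer levels on x + dx/2
p = p_exact(x, 0.0)
u = u_exact(x + 0.5 * dx, 0.5 * dt)

nsteps = int(round(T / dt))
for _ in range(nsteps):
    p[1:] += CFL * (u[:-1] - u[1:])   # p^n -> p^{n+1} using u^{n+1/2}
    p[0] = p[-1]                      # periodic boundary
    u[:-1] += CFL * (p[:-1] - p[1:])  # u^{n+1/2} -> u^{n+3/2} using p^{n+1}
    u[-1] = u[0]
time = nsteps * dt

# compare each variable on its own (staggered) grid
err_p = np.sqrt(dx) * np.linalg.norm(p - p_exact(x, time))
err_u = np.sqrt(dx) * np.linalg.norm(u - u_exact(x + 0.5 * dx, time + 0.5 * dt))
print(err_p, err_u)
```

With this initialization both errors should shrink at second order as NX grows.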

That you obtained the correct order is IMO more an artefact of the test problem than evidence of an accidentally still-correct algorithm.

If no exact solution is known, or if you want to use a more realistic algorithm in the test, you would need to compute the initial u values from p(0,x) and u(0,x) via a Taylor expansion, substituting the PDE relations u_t = -p_x and u_tt = u_xx:

u(t,x) = u(0,x) + t*u_t(0,x) + 0.5*t^2*u_tt(0,x) + ...

u(0.5*dt,x) = u(0,x) - 0.5*dt*p_x(0,x) + 0.125*dt^2*u_xx(0,x) + ...
            = u(0,x) - 0.5*CFL*(p(0,x+0.5*dx)-p(0,x-0.5*dx)) 
                     + 0.125*CFL^2*(u(0,x+dx)-2*u(0,x)+u(0,x-dx)) + ...

It might be sufficient to take just the linear expansion,

u[j] = u0(x[j]+0.5*dx) - 0.5*CFL*(p0(x[j]+dx)-p0(x[j]))

or with array operations

p = p0(x)
u = u0(x+0.5*dx)
u[:-1] -= 0.5*CFL*(p[1:]-p[:-1])
u[-1]=u[0] 

as then the second order error in the initial data just adds to the general second order error.


You might want to change the space grid to `x = np.linspace(0, LX, NX+1)` to have `dx = LX/NX`.
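A quick check of the difference (nothing scheme-specific here):

```python
import numpy as np

LX, NX = 1, 100
x_bad = np.linspace(0, LX, NX)       # dx = LX/(NX-1), not LX/NX
x_good = np.linspace(0, LX, NX + 1)  # dx = LX/NX; endpoints 0 and LX are the periodic pair
print(x_bad[1] - x_bad[0])   # 0.0101...
print(x_good[1] - x_good[0]) # 0.01
```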


I would define the exact solution and the initial condition the other way around, as that allows more flexibility in the test problems.

# components of the solution
def f(x): return np.cos(2 * np.pi * x)
def g(x): return 2*np.sin(6 * np.pi * x)
# exact solution
def u_exact(x,t): return  f(x+t)+g(x-t)
def p_exact(x,t): return -f(x+t)+g(x-t)
# Initial conditions
def u0(x): return u_exact(x,0)
def p0(x): return p_exact(x,0)
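As a final check of the convergence order, the log-log slope of the error curve can be estimated with np.polyfit; here a sketch with synthetic second-order data standing in for the measured Ep:

```python
import numpy as np

# synthetic second-order errors for illustration: E ≈ C / NX^2
NN = np.array(range(50, 1000, 50))
Ep = 15.0 / NN**2

# slope of log(E) vs log(NX); ≈ -2 for a second-order scheme
slope = np.polyfit(np.log(NN), np.log(Ep), 1)[0]
print(slope)  # ≈ -2.0
```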
Lutz Lehmann
  • Thank you very much, that helped a whole lot! Just one last little question: How did you get from `u(0.5*dt,x) = u(0,x) - 0.5*dt*u_t(0, x)` in the Taylor expansion to `u(0.5*dt,x) = u(0,x) - 0.5*dt*p_x(0,x)`? Where does that `p_x` come from? – jerremaier Nov 05 '20 at 12:59
  • That comes from the PDE, `u_t = -p_x`. If the PDE were different, this would give a different formula here, and also in the leapfrog steps. – Lutz Lehmann Nov 05 '20 at 13:18