
I am attempting to implement equation 16 from this paper:

[Image: equation 16 from the paper, the 1-D viscous wave equation. From the discretization in the code below it reads ∂²p/∂t² = (4ν/3) ∂³p/(∂t ∂x²) + c₀² ∂²p/∂x².]

Equation 16 above is the viscous wave equation. Its solution is supposed to start big and die out over time, eventually approaching 0. However, when I run my simulation it blows up. If you look at the images below (iterations 0-4), it appears to be working correctly (i.e. the sinusoidal wave is getting smaller, which is good). At iterations 5, 6 and beyond, however, it starts to blow up (you can see the y-axis, representing pressure, increasing by orders of magnitude).

Here is the output:

[Plots of pressure vs. distance for iterations 0 through 5: the sinusoid decays slightly for the first few iterations, then the y-axis scale grows by orders of magnitude.]

Here is the code, which uses the finite difference method:

import numpy as np
import math
import matplotlib.pyplot as plt

x_mesh_param = 1000
# function that increments time steps (solves for the next state of p)
# returns 'p2'
# assuming no boundary conditions because we only care about what a hydrophone
# receives.  Don't care about what happens to wave beyond hydrophone.
def iterate(p0,p1,x_mesh,dt,v,c0 ):

    # step 3 on pg. 9

    dx=x_mesh[1]-x_mesh[0]
    p2 = np.zeros(p1.shape) # this is where we store next state of p

    for i in range(1,len(x_mesh)-1):
        p_txx = (p1[i+1]-p0[i+1]-2*(p1[i]-p0[i])+p1[i-1]-p0[i-1])/ (dt* dx**2)
        p_xx  = (p1[i+1]-2*p1[i]+p1[i-1]) / (dx**2)
        p2[i] = (dt)**2 * (4.0/3 * v * p_txx + c0**2*p_xx)+2*p1[i]-p0[i]

    # what happens at p2[0] (begin) and p2[-1] (end)

    # forward difference (NO BOUNDARY conditions)
    p_txx = (p1[2]-p0[2]-2*(p1[1]-p0[1])+p1[0]-p0[0])/(dt*dx**2)
    p_xx = (p1[2]-2*p1[1]+p1[0]) / (dx**2)
    #p2[0] = (dt)**2 * (4.0/3 * v * p_txx + c0**2*p_xx)+2*p1[0]-p0[0]

    # taylor series
    #p2[0]=1/(1-1/dt+1/(2*(dt)**2))* ((1-1/dt+1/((dt)**2))*p2[1]-p2[2]/(2*(dt**2)))

    #p2[0] = p2[1] # Neumann (zero gradient at the left end)

    # with the assignments above left commented out, p2[0] stays 0 (Dirichlet)

    # backward difference
    p_txx = (p1[-1]-p0[-1]-2*(p1[-2]-p0[-2])+p1[-3]-p0[-3])/(dt*dx**2)
    p_xx = (p1[-1]-2*p1[-2]+p1[-3]) / (dx**2)
    #p2[-1] = (dt)**2 * (4.0/3 * v * p_txx + c0**2*p_xx)+2*p1[-1]-p0[-1]





    # likewise, p2[-1] stays 0 (Dirichlet) with the assignment above commented out
    return p2

def firstIteration(p1,x_mesh,dt,v,c0 ):


    # step 3 on pg. 9

    dx=x_mesh[1]-x_mesh[0]
    p2 = np.zeros(p1.shape) # this is where we store next state of p

    for i in range(1,len(x_mesh)-1):
        p_txx = 0
        p_xx  = (p1[i+1]-2*p1[i]+p1[i-1]) / (dx**2)
        p2[i] = (dt)**2 * (4.0/3 * v * p_txx + c0**2*p_xx)+p1[i] # p1[i]-p0[i]=0, from the initial condition d/dt p(x,0) = 0 (fluid initially at rest)

    # what happens at p2[0] (begin) and p2[-1] (end)


    # forward difference (NO BOUNDARY conditions)
    p_txx = 0
    p_xx = (p1[2]-2*p1[1]+p1[0]) / (dx**2)
    #p2[0] = (dt)**2 * (4.0/3 * v * p_txx + c0**2*p_xx)+p1[0]

    # taylor series
    #p2[0]=1/(1-1/dt+1/(2*(dt)**2))* ((1-1/dt+1/((dt)**2))*p2[1]-p2[2]/(2*(dt**2)))

    #p2[0] = p1[1] # NEUMANN 

    # with the assignments above left commented out, the endpoints stay 0 (Dirichlet)

    # backward difference
    p_txx = 0
    p_xx = (p1[-1]-2*p1[-2]+p1[-3]) / (dx**2)
    #p2[-1] = (dt)**2 * (4.0/3 * v * p_txx + c0**2*p_xx)+p1[-1]

    return p2

def simulate():
    states = []
    L=100 
    #dt=.02
    dt=0.001
    v = .001/998
    c0 = 1480

    x_mesh = np.linspace(0,L,x_mesh_param)


    state_initial = np.array([math.sin(x/20*math.pi) for x in x_mesh])
    states.append(state_initial)

    states.append(firstIteration(states[0], x_mesh,dt,v,c0 ))

    for i in range(50):
        new_state = iterate(states[-2], states[-1], x_mesh,dt,v,c0)
        states.append(new_state)

    return states



if __name__=="__main__":
    states = simulate()
    L=100
    x_mesh = np.linspace(0,L,x_mesh_param)

    counter=0
    for s in states:
        fig, ax = plt.subplots()
        ax.plot(x_mesh, s)

        ax.set(xlabel='distance (m)', ylabel='pressure (atm)',
               title='Pressure vs. distance, iteration = '+str(counter))
        ax.grid()

        plt.show()
        print(counter)
        counter = counter + 1

You will see there is a main program that calls simulate(). simulate() calls firstIteration(), which handles the first time step using the initial condition that the fluid starts at rest (d/dt p(x,0) = 0). The remaining time steps are handled by iterate(). I am basically using central, forward, and backward differences to compute the derivatives in the wave equation.

The main program simply plots the full pressure field at each time step.
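As an aside, the interior update in iterate() can also be written with NumPy slicing instead of the Python loop; this is a minimal sketch with the same arithmetic, reusing the variable names from iterate():

# same stencil as the loop in iterate(), vectorized over the interior points
lap1 = p1[2:] - 2*p1[1:-1] + p1[:-2]   # second difference of p1
lap0 = p0[2:] - 2*p0[1:-1] + p0[:-2]   # second difference of p0
p2[1:-1] = dt**2 * (4.0/3 * v * (lap1 - lap0)/(dt*dx**2)
                    + c0**2 * lap1/dx**2) + 2*p1[1:-1] - p0[1:-1]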

user1068636
    At the bottom of [Stack Overflow's page on creating a minimal, reproducible sample](https://stackoverflow.com/help/minimal-reproducible-example), there's a link to a blog post called "How to debug small programs," which you may find very helpful, especially the part about so-called "rubber duck" debugging. – jjramsey Jun 16 '20 at 17:36
  • Unfortunately, rubber-duck debugging doesn't work well for subtle numerical problems. – Han-Kwang Nienhuys Jun 16 '20 at 19:23
  • @Han-KwangNienhuys. You'd be very very surprised – Mad Physicist Jun 16 '20 at 21:17

1 Answer


You implemented a numerical solver for a linear (no product terms of p) partial differential equation that seems to work for a number of time steps, and then explodes with rapid oscillations.

What happens mathematically is that your discretized system of equations has solutions that grow exponentially over time. Numerical noise at the least significant digits of your floating-point numbers will grow exponentially until it takes over the solution that you're looking for.

If P is a vector (array of 2N values) consisting of the N pressure values at the current and previous time steps, all the adding and subtracting of values in your code can be rewritten in matrix form (representing one time step):

P := D @ P

where D is a matrix of shape (2N, 2N). If P0 was the initial state of the system, then the state of the system after n time steps would be

Pn = np.linalg.matrix_power(D, n) @ P0

Here, matrix_power is matrix exponentiation, i.e., matrix_power(D, 3) == D @ D @ D. If D has eigenvalues with absolute value > 1, then there will be exponentially growing solutions. Your data suggests that the largest eigenvalue is around 1000, which means that it has grown by a factor of 1e+18 over 6 time steps, whereas the physically relevant eigenvalue is around 0.9. Note that floating-point numbers have a precision of around 1e-16.

You can check the eigenvalues using np.linalg.eigvals, preferably on a small matrix, for example N=100. If you're lucky, reducing the step sizes dx and dt may be enough to bring the eigenvalues into below-1 territory.
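As a concrete illustration, here is a minimal sketch of how you could assemble the one-step matrix D from the interior update in iterate() and inspect its spectrum; build_D and its arguments are names made up for this sketch, and the untouched endpoints act as Dirichlet conditions:

import numpy as np

def build_D(N, dx, dt, v, c0):
    lap = np.zeros((N, N))          # tridiagonal second-difference stencil
    for i in range(1, N - 1):
        lap[i, i-1] = lap[i, i+1] = 1.0
        lap[i, i] = -2.0
    a = dt**2 * c0**2 / dx**2       # weight of the c0^2 * p_xx term
    b = 4.0/3 * v * dt / dx**2      # weight of the viscous p_txx term
    I = np.eye(N)
    top_cur = (a + b)*lap + 2*I     # multiplies the current pressures p1
    top_prev = -(b*lap + I)         # multiplies the previous pressures p0
    for j in (0, N - 1):            # endpoint rows: p2 stays 0 (Dirichlet)
        top_cur[j, :] = 0.0
        top_prev[j, :] = 0.0
    # state vector is [p1; p0]; the bottom half copies p1 into the "previous" slot
    return np.block([[top_cur, top_prev],
                     [I, np.zeros((N, N))]])

N = 100                             # small test matrix
D = build_D(N, dx=100.0/(N - 1), dt=0.001, v=0.001/998, c0=1480)
print(np.abs(np.linalg.eigvals(D)).max())   # stable only if this is <= 1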

Quick and dirty solution

This is quick to implement, but not very efficient. You could try to eliminate the large eigenvalues from D:

import numpy as np
evals, evecs = np.linalg.eig(D)
evals_1 = np.where(np.abs(evals) > 1, 0, evals)
D_1 = evecs @ np.diag(evals_1) @ np.linalg.inv(evecs)

For example, with a small matrix (eigenvalues 0.9 and 1000):

D = np.array([[500.45, -499.55], 
              [-499.55, 500.45]])

# D_1 calculated as above.
D_1 = np.array([[0.45, 0.45],
                [0.45, 0.45]])

P0 = [1, 1+8e-16]
powm = np.linalg.matrix_power

ns = np.arange(7)
Ps_D = np.array([powm(D, n) @ P0 for n in ns])
Ps_D1 = np.array([powm(D_1, n) @ P0 for n in ns])

import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.semilogy(ns, Ps_D[:, 1], 'ro-', label='Using D')
ax.semilogy(ns, Ps_D1[:, 1], 'bs-', label='Using D_1')
ax.set_xlabel('Iteration number')
ax.set_ylabel('P[1]')
ax.grid()
ax.legend()
plt.show()

[Plot: time evolution of P[1], growing when using D, decaying when using D_1]

If you went through the trouble of finding the eigenvalues and eigenvectors (which can be slow for large matrices), you can speed up the subsequent calculations quite a bit by replacing powm(D_1, n) with

Pn = evecs @ np.diag(evals_1**n) @ np.linalg.inv(evecs) @ P0
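A minimal sketch of the corresponding time-stepping loop (num_steps is a placeholder; the inversion and the eigendecomposition happen once, outside the loop):

evecs_inv = np.linalg.inv(evecs)    # computed once
coeffs = evecs_inv @ P0             # P0 expressed in the eigenbasis
for n in range(num_steps):
    # evecs * evals_1**n scales column j of evecs by evals_1[j]**n,
    # i.e. the same as evecs @ np.diag(evals_1**n)
    Pn = ((evecs * evals_1**n) @ coeffs).real  # .real drops rounding residue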

Quick sanity check for tuning step sizes

It is possible to judge whether it is likely that there are eigenvalues greater than one by setting your initial state to

P = np.zeros(N)
P[N//4] = 1

and executing one single time step. If this results in values >1 in P, you are likely to have too large eigenvalues. (The new value for P is actually the column D[:, N//4] of the matrix).
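For instance, a rough version of that test using the iterate() function from the question directly (parameter values taken from simulate()), so that no matrix has to be built:

N = 1000
x_mesh = np.linspace(0, 100, N)
p0 = np.zeros(N)
p0[N//4] = 1.0            # unit impulse
p1 = p0.copy()            # fluid at rest: previous state equals current
p2 = iterate(p0, p1, x_mesh, dt=0.001, v=0.001/998, c0=1480)
print(np.abs(p2).max())   # values much larger than 1 hint at eigenvalues > 1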

Implicit methods

Rather than bluntly zeroing large eigenvalues as described above, it may be better to resort to an implicit method. This requires that you write out the D matrix. If P is the present state and Pnew is the state for the following time step, then you start from the equation

Pnew = D @ (0.5*(P + Pnew))

Solve this matrix equation for Pnew:

# Pnew = inv(1 - 0.5 D) (0.5 D) P
nD = D.shape[0]
D_2 = np.linalg.inv(np.eye(nD) - 0.5*D) @ (0.5*D)

# time step:
Pnew = D_2 @ P

This is the Crank-Nicolson method. It requires that you construct the D matrix. Applying to the same example as above gives:

D_2 = np.array([[-0.09191109,  0.91009291],
                [ 0.91009291, -0.09191109]])
evals = np.array([0.81818182, -1.00200401])

This reduces the largest eigenvalue from 1000 to -1.002, which is better, but still larger than 1 in absolute value, so starting from a rounding error at 1e-15, it will grow to order 1 after roughly 17k time steps. Moreover, it changes the 0.9 eigenvalue to about 0.82. You'd need to find out which one is a better description of the physics. Unfortunately, numerically solving partial differential equations is hard.
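For reference, the implicit step maps each eigenvalue lam of D to 0.5*lam/(1 - 0.5*lam), which is where the numbers above come from; a quick check:

for lam in (0.9, 1000.0):
    print(lam, '->', 0.5*lam / (1 - 0.5*lam))
# 0.9 -> 0.8181...,  1000.0 -> -1.0020...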

EDIT: Lutz Lehmann pointed out that working with inverses of large matrices (the result of a fine grid in x) is not a good idea. Note that D is a sparse matrix, better represented as a scipy.sparse.csr_matrix, csc_matrix, or dia_matrix than np.array. You'd then solve

(1 - 0.5*D) @ Pnew = 0.5*D @ P

which is a matrix equation in the form Ax=b with x and b vectors:

# (requires import scipy.sparse and import scipy.sparse.linalg)
A = scipy.sparse.identity(nD) - 0.5*D

for _ in range(num_time_steps):
    b = D @ (0.5*P)
    P = scipy.sparse.linalg.spsolve(A, b)

Note that spsolve is still likely to be slow. It may help to arrange P with interleaved present and previous values of the pressure, so that D is band-diagonal and can be stored as a sparse dia_matrix.

Edit:

The most efficient approach would therefore be like this:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200        # Example - number of x points
nD = 2*n       # matrix size (current and previous pressure values)
D = sp.dok_matrix((nD, nD))

# Initialize pressures
P = np.zeros(nD)

# ... initialize P and nonzero matrix elements here ...

# Prepare matrices
D = sp.csc_matrix(D)  # convert to efficient storage format
A = sp.csc_matrix(sp.identity(nD) - 0.5*D)
A_lu = spla.splu(A)   # factorize once, reuse in every time step

for _ in range(num_time_steps):
    b = D @ (0.5*P)
    P = A_lu.solve(b)
Han-Kwang Nienhuys
  • Thanks for your response. So I think you are saying I need to convert the lines of code in the "iterate" and "firstIteration" function that compute p_txx, p_xx, and p2 into matrix multiplication ? Basically it needs to be a linear transformation like Ax=b, where A (or D in your response) is the matrix the transforms into new values. Once I have A defined, then you say I need to compute the eigenvalues and make sure the real part of all of them is less than 1? If not, try to reduce the size of dx and dt? Is this correct? – user1068636 Jun 17 '20 at 02:26
  • Also, what happens if I reduce dt and dx to the point that I have real eigenvalues of A (or D) less than 1, but then the simulation takes forever to run? Is there a better way to do this? – user1068636 Jun 17 '20 at 02:27
  • First comment: correct. Second comment: I added a paragraph about implicit methods. In any case, partial differential equations and their numerical solutions are difficult. If the suggestions here don't work, you could ask at https://math.stackexchange.com/ , but it's likely that better solutions are a lot of work to implement on your own. – Han-Kwang Nienhuys Jun 17 '20 at 07:33
  • For small space steps dx, you do not compute the inverse matrix. You construct a sparse solver object (containing a sparse matrix factorization, from scipy.sparse.linalg) once and use its solve method in all integration steps. Use higher order methods or the matrix exponential or a combination to be able to use larger time steps dt. – Lutz Lehmann Jun 17 '20 at 08:58
  • @LutzLehmann I've updated the answer with `spsolve`, but the rest of your suggestions are too far out of my comfort zone to explain. – Han-Kwang Nienhuys Jun 17 '20 at 11:16
  • [Numerical instabilities in the viscous wave equation](https://math.stackexchange.com/questions/832642/numerical-instabilities-in-the-viscous-wave-equation) on math.stackexchange.com discusses the general case (3D and with spatially varying coefficients). This seems to be a very difficult PDE. – Han-Kwang Nienhuys Jun 17 '20 at 11:28
  • The next stage would be to add `A_LU = scipy.sparse.linalg.splu(A)` after constructing `A` and then use `P = A_LU.solve(b)` inside the loop. This avoids the re-factorization of `A` in every iteration of the loop. – Lutz Lehmann Jun 22 '20 at 09:44