
I've got a system of partial differential equations (PDEs), specifically the diffusion-advection-reaction equation applied to heat transfer and convection, which I solve using the finite difference method.
In my simulation environment I've got a multitude of different parts, like pipes, energy storages, heat exchangers etc. Depending on the part, each part's PDE can be 1D (as in most cases), 2D (about 5% of the parts) or 3D (seldom). Because of these different shapes, constructing a tridiagonal (or penta-, septa-, ...) matrix for the whole system is too complicated.
This question is posted here on SO and not on Computational Science, since it is about how to use solving algorithms for PDEs, not about solution methods for PDEs.

Thus currently each part has a diff function, which returns its differential for the given input temperatures. Since material properties etc. are temperature (and flow) dependent, the PDEs are non-linear, but they are treated as linear by lagging the coefficients (calculating temperature- and flow-dependent values with the start temperature at each step, and for implicit methods recalculating them at each iteration). The shape of the differential returned by the diff function is the same as the part's value array shape; for example, for a 1D pipe with 10 grid points this results in a differential array of shape (10,). Diffusion coefficients and other coefficients are considered internal to the diff function and will not be available to the solver, so the solver only knows the differential, the current part temperature and the step size. A sketch of such a diff function is shown below.
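For illustration, a minimal sketch of what one part's diff function might look like for a 1D pipe with a lagged, temperature-dependent diffusivity. The grid spacing dx, the velocity v and the diffusivity law are made-up placeholders; in the real code these stay internal to the part:

import numpy as np

def diff_pipe_1d(T, dx=0.1, v=0.05):
    # lagged coefficient: evaluated at the current iterate T, so the
    # returned differential is linear in T for this evaluation
    alpha = 1e-3 * (1. + 0.01 * T)  # made-up temperature-dependent diffusivity
    dTdt = np.zeros_like(T)
    # central differences for diffusion, upwind differences for advection:
    dTdt[1:-1] = (alpha[1:-1] * (T[2:] - 2. * T[1:-1] + T[:-2]) / dx**2
                  - v * (T[1:-1] - T[:-2]) / dx)
    # boundary cells would be coupled to the neighbouring parts; left at 0 here
    return dTdt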

Is there any way to solve these PDEs in Python only one step at a time, using an algorithm which is dedicated to solving PDEs? (Preferably an algorithm which is part of scipy/numpy, and even more preferably one that is already supported by numba.)
Things I have considered so far:

  • scipy.integrate: Only feasible for ODEs, and a PDE may not be covered by this. Is that correct? At least I can't get good results with it for PDEs. Besides that, scipy.integrate is not made for calculating only one single step (but see the single-step sketch after this list).
  • np.linalg.solve: Seems like the best way to go if you have tridiagonal matrices and linear equations. But I have neither tridiagonal matrices nor truly linear equations: they are only linearized, and lagging the coefficients means having to update the system during the iterations. Is there any way to still use this without too much computational cost? (A banded-solve sketch also follows this list.)
  • scipy.optimize: Feasible and easy to use. My minimum working example below integrates a solution using this. But will this always work for PDEs? Is this generally a correct way to solve PDEs implicitly, or is it just working in this example while I am on the wrong track?
  • My own implementation of a simple fixed-point iteration, also shown in the example below. Same question as with scipy.optimize: is this just working in this simple example, or will it work for PDEs in general?
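Regarding scipy.integrate: after the spatial discretization, the PDE is an ODE system in time (method of lines), so the scipy.integrate solver classes do apply in principle, and they can be advanced a single step at a time via their step() method. A minimal sketch, with np.sqrt standing in for a part's diff function (scipy.integrate.BDF offers the same interface for stiff systems):

import numpy as np
from scipy.integrate import RK45

def rhs(t, y):  # ODE right-hand side: the semi-discretized PDE
    return np.sqrt(y)  # stands in for a part's diff function

T0 = np.arange(10, 20).astype(float)
solver = RK45(rhs, t0=0., y0=T0, t_bound=1., max_step=1.)
solver.step()  # advance exactly one internal step
print(solver.t, solver.y)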
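Regarding np.linalg.solve: for the 1D parts, a banded solve can be combined with the lagged coefficients by wrapping it in a Picard loop, i.e. re-assembling the tridiagonal system from the latest iterate and solving again until the solution stops changing. A rough sketch for backward Euler on pure diffusion, using scipy.linalg.solve_banded instead of np.linalg.solve; the Dirichlet boundary handling and the alpha_of_T law are placeholders:

import numpy as np
from scipy.linalg import solve_banded

def backward_euler_banded(T_old, h, dx, alpha_of_T, rtol=1e-6, max_iter=50):
    # one backward Euler step for dT/dt = alpha(T) * d2T/dx2 with lagged alpha
    n = T_old.size
    T = T_old.copy()
    for _ in range(max_iter):
        r = h * alpha_of_T(T) / dx**2  # lagged coefficient, updated per iteration
        ab = np.zeros((3, n))  # banded storage: superdiag, diag, subdiag
        ab[0, 2:] = -r[1:-1]
        ab[1, :] = 1.
        ab[1, 1:-1] += 2. * r[1:-1]
        ab[2, :-2] = -r[1:-1]
        T_new = solve_banded((1, 1), ab, T_old)  # boundary rows stay identity (Dirichlet)
        if np.all(np.abs(T_new / T - 1.) < rtol):  # temperatures assumed non-zero
            return T_new
        T = T_new
    return T

Called e.g. as backward_euler_banded(T0, 1., 0.1, lambda T: 1e-3 * (1. + 0.01 * T)).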

Minimum working example

import time

import numpy as np
from scipy import optimize

def diff(T):  # simple example differential function
    time.sleep(0.0001)  # dummy delay to slow down the calculation
    return np.sqrt(T)

def euler_bdf(y, yprev, h):  # residual of the backward (implicit) Euler step
    return y - yprev - h * diff(y)

def crank_nicolson(y, yprev, h):  # residual of the Crank-Nicolson step; diff(yprev) will be precomputed for performance
    return y - yprev - h / 2 * (diff(y) + diff(yprev))

def iterate_eulerbdf(y0, h, rtol, diff):  # fixed-point iteration for backward Euler
    y = y0
    f = np.ones_like(y)
    i = 0
    while np.any(np.abs(f) > rtol):  # abs(), so decreasing values also count as unconverged
        y_old = y
        y = y0 + h * diff(y)
        f = y / y_old - 1  # relative change; this will be replaced by least mean squares
        i += 1
    return y, i

def iterate_cranknicolson(y0, h, rtol, diff):  # fixed-point iteration for Crank-Nicolson
    y = y0
    diff_old = diff(y)
    f = np.ones_like(y)
    i = 0
    while np.any(np.abs(f) > rtol):
        y_old = y
        y = y0 + h / 2 * (diff_old + diff(y))
        f = y / y_old - 1
        i += 1
    return y, i

T0 = np.arange(10, 20).astype(float)  # starting temperature

eulerbdf_hybr = optimize.root(euler_bdf, T0, args=(T0, 1.), method='hybr', tol=1e-6)
eulerbdf_dfsane = optimize.root(euler_bdf, T0, args=(T0, 1.), method='df-sane', tol=1e-6)
eulerbdf_it = iterate_eulerbdf(T0, 1., 1e-6, diff)

cn_hybr = optimize.root(crank_nicolson, T0, args=(T0, 1.), method='hybr', tol=1e-6)
cn_dfsane = optimize.root(crank_nicolson, T0, args=(T0, 1.), method='df-sane', tol=1e-6)
cn_it = iterate_cranknicolson(T0, 1., 1e-6, diff)
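For completeness, the results of the different variants can be compared like this (assuming all runs converged):

print(np.allclose(eulerbdf_hybr.x, eulerbdf_dfsane.x, rtol=1e-5),
      np.allclose(eulerbdf_hybr.x, eulerbdf_it[0], rtol=1e-5))
print(np.allclose(cn_hybr.x, cn_dfsane.x, rtol=1e-5),
      np.allclose(cn_hybr.x, cn_it[0], rtol=1e-5))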

TL;DR summary of the question

All of the results seem to be OK, but is this approach OK for real PDEs in general?
Which is the recommended way to do it?
Which is the most performant way to do it for a wide variety of PDEs of different shapes and sizes, ranging from (5,) to more than (100, 100, 100)? With scipy.optimize, currently 'df-sane' seems to be the fastest, but I doubt this will hold for larger problems.
And especially: is there any better way to do this?

Thanks in advance!

JE_Muc
  • Sorry, even the summary is too broad in my opinion. – mkrieger1 Jul 31 '18 at 12:38
  • @mkrieger1 Shouldn't a summary be broad? But what can I improve? I wanted to avoid writing down the discretizations of the PDEs. Would that help? @uhoh: That's where I posted my last questions concerning this topic, but I thought this may be more appropriate for SO, since it involves solving algorithms implemented in `scipy/numpy`. – JE_Muc Jul 31 '18 at 12:44
  • For PDEs the first port of call is always separation of variables with pen and paper. – Paula Thomas Jul 31 '18 at 12:44
  • @PaulaThomas Yep, but this is for a simulation environment where that is not feasible, since it solves several million steps. The non-linear coefficients alone would make this pretty much impossible with pen and paper. – JE_Muc Jul 31 '18 at 12:54

0 Answers