
The problem:

I am trying to solve this differential equation:

K[x_, x1_] := 1;
NDSolve[{A''[x] == Integrate[K[x, x1] A[x1], {x1, 0, 1}], 
         A[0] == 0, A'[1] == 1}, A[x], x]

and I'm getting errors (Function::slotn and NDSolve::ndnum)
(it should return a numeric function that is equal to 3/16 x^2 + 5/8 x)

I am looking for a way to solve this equation: is there a better form to write it in, so that NDSolve will understand it? Is there another function or package that can help?

Note 1: In my full problem, K[x, x1] is not 1 -- it depends (in a complex way) on x and x1.
Note 2: Naively differentiating both sides of the equation with respect to x won't work, because the integration limits are fixed constants.

My first impression:

It seems that Mathematica doesn't like me referencing the value of A at a specific point -- the same errors occur with this simplified version:

NDSolve[{A''[x] == A[0.5], A[0] == 0, A'[1] == 1}, A[x], x]

(it should return a numeric function that is equal to 2/11 x^2 + 7/11 x)

In this case one can avoid the problem by analytically solving A''[x] == c and then finding c, but in my original problem this doesn't seem to work -- it only transforms the differential equation into an integral one, which (N)DSolve doesn't solve afterwards.
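For reference, here is a minimal sketch of that workaround for the simplified example, treating the unknown value A[0.5] as a constant c and closing the loop with a self-consistency condition:

sol = DSolve[{A''[x] == c, A[0] == 0, A'[1] == 1}, A[x], x];
(* impose the self-consistency condition A[1/2] == c and solve for c *)
cval = First[c /. Solve[(A[x] /. First[sol] /. x -> 1/2) == c, c]];
A[x] /. First[sol] /. c -> cval  (* gives 2/11 x^2 + 7/11 x, as stated above *)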

tsvikas
  • Your equation is an integro-differential equation, not just a differential one. I don't think `NDSolve` ever claimed to be able to solve integro-differential equations. So, unless you manage to reduce it to a differential equation (get rid of the integral), it looks like you are out of luck. Another possibility would be to integrate twice over `x` (since you will then only need to integrate the kernel `K`) - if this is possible (convergence etc.), you will end up with an integral equation and can attempt to solve that. Such integration will introduce 2 arbitrary constants, which you fix with the initial conditions. – Leonid Shifrin Aug 07 '11 at 19:10
  • Thanks for the comment. Several points: 1 - I know that `NDSolve` has problems with i.d. equations. I was hoping to find a different syntax, a different function/package, or any other solution for it. – tsvikas Aug 07 '11 at 19:37
  • 2 - Is there a way to solve the second example, without using the `c`? It seems to me like a simpler example, and I included it specifically because it doesn't have an integral. – tsvikas Aug 07 '11 at 19:38
  • 3 - I can indeed get an equivalent integral equation - but I don't know of any _Mathematica_ function that solves such equations. Do you know of one? (I'm currently bypassing this by solving with `FixedPoint`, but I'm still limited by it) – tsvikas Aug 07 '11 at 19:39
  • You seem to expect that NDSolve returns a symbolic solution to the differential equation (you name the expected functions twice). AFAIK NDSolve returns an InterpolatingFunction object. For symbolic solutions you should use DSolve instead. – Sjoerd C. de Vries Aug 07 '11 at 21:58
  • I'm aware that `NDSolve` returns an `InterpolatingFunction`. I named the expected results to assist potential helpers who want to check their approach. I'll make that clearer in the question. – tsvikas Aug 08 '11 at 07:16

3 Answers


I can suggest a way to reduce your equation to an integral equation, which can be solved numerically by approximating its kernel with a matrix, thereby reducing the integration to matrix multiplication.

First, it is clear that the equation can be integrated twice over x, first from 1 to x, and then from 0 to x, so that:

A[x] == A[0] + A'[1]*x + Integrate[intIntK[x, x1]*A[x1], {x1, 0, 1}],
where intIntK[x, x1] = Integrate[Integrate[K[y, x1], {y, 1, z}], {z, 0, x}]

We can now discretize this equation, putting it on an equidistant grid:

A_i == a0 + aprime1*x_i + delta * Sum_j intIntK[x_i, x_j] * A_j,
that is, (IdentityMatrix - delta*intIntKMatrix) . A == a0 + aprime1*grid

Here, A[x] becomes a vector and the integrated kernel intIntK becomes a matrix, while the integration is replaced by a matrix multiplication. The problem is then reduced to a system of linear equations.

The easiest case (the one I will consider here) is when the kernel intIntK can be computed analytically - in this case the method will be quite fast. Here is the function that produces the integrated kernel as a pure function:

Clear[computeDoubleIntK]
computeDoubleIntK[kernelF_] :=
  Block[{x, x1},
   Function[
    Evaluate[
      Integrate[
         Integrate[kernelF[y, x1], {y, 1, x}] /. x -> y, {y, 0, x}] /. 
         {x -> #1, x1 -> #2}]]];

In our case:

In[99]:= K[x_,x1_]:=1;

In[100]:= kernel = computeDoubleIntK[K]
Out[100]= -#1+#1^2/2&
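
As a quick sanity check (not part of the original computation), for K == 1 the double integral is Integrate[z - 1, {z, 0, x}] = x^2/2 - x, which matches the pure function above:

Simplify[kernel[x, x1] - (x^2/2 - x)]  (* should return 0 *)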

Here is the function to produce the kernel matrix and the r.h.s. vector:

 computeDiscreteKernelMatrixAndRHS[intkernel_, a0_, aprime1_ , 
    delta_, interval : {_, _}] :=
  Module[{grid, rhs, matrix},
    grid = Range[Sequence @@ interval, delta];
    rhs = a0 + aprime1*grid; (* constant plus a linear term *)
    matrix = 
      IdentityMatrix[Length[grid]] - delta*Outer[intkernel, grid, grid];
    {matrix, rhs}]

To give a very rough idea of how this may look (here I use delta = 1/2):

In[101]:= computeDiscreteKernelMatrixAndRHS[kernel,0,1,1/2,{0,1}]

Out[101]= {{{1,0,0},{3/16,19/16,3/16},{1/4,1/4,5/4}},{0,1/2,1}}

We now need to solve the linear equation, and interpolate the result, which is done by the following function:

Clear[computeSolution];
computeSolution[intkernel_, a0_, aprime1_ , delta_, interval : {_, _}] :=
With[{grid = Range[Sequence @@ interval, delta]},
  Interpolation@Transpose[{
    grid,
    LinearSolve @@ 
      computeDiscreteKernelMatrixAndRHS[intkernel, a0, aprime1, delta,interval]
  }]]

Here I call it with delta = 0.1:

In[90]:= solA = computeSolution[kernel,0,1,0.1,{0,1}]
Out[90]= InterpolatingFunction[{{0.,1.}},<>] 

We now plot the result vs. the exact analytical solution found by @Sasha, as well as the error:

[Plots: solA compared with the exact solution 3/16 x^2 + 5/8 x from @Sasha's answer, and the error solA[x] - exact[x]]
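
The plots can be reproduced along these lines (a sketch; exact is the closed-form result from @Sasha's answer below):

exact[x_] = 3/16 x^2 + 5/8 x;
Plot[{solA[x], exact[x]}, {x, 0, 1}]
Plot[solA[x] - exact[x], {x, 0, 1}]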

I intentionally chose delta large enough that the errors are visible. If you choose delta to be, say, 0.01, the plots will be visually identical. Of course, the price of a smaller delta is the need to produce and solve larger matrices.

For kernels that can be obtained analytically, the main bottleneck will be in LinearSolve, but in practice it is pretty fast (for matrices that are not too large). When the kernel cannot be integrated analytically, the main bottleneck will be computing it at many points (matrix creation). The matrix inverse has a larger asymptotic complexity, but this only starts to play a role for really large matrices, which are not necessary in this approach, since it can be combined with an iterative one - see below. You will typically define:

intK[x_?NumericQ, x1_?NumericQ] := NIntegrate[K[y, x1], {y, 1, x}]
intIntK[x_?NumericQ, x1_?NumericQ] := NIntegrate[intK[z, x1], {z, 0, x}]

As a way to speed things up in such cases, you can precompute the kernel intK on a grid and then interpolate, and do the same for intIntK. This will, however, introduce additional errors, which you'll have to estimate and account for.
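A minimal sketch of such a precomputation, assuming the intK/intIntK definitions above (the grid spacing 0.05 and the name intIntKInterp are arbitrary choices made here):

intIntKInterp = Interpolation[
   Flatten[Table[{x, x1, intIntK[x, x1]}, {x, 0., 1., 0.05}, {x1, 0., 1., 0.05}], 1]];
(* then use the interpolated kernel in place of the analytic one, e.g. *)
solApprox = computeSolution[intIntKInterp, 0, 1, 0.1, {0, 1}];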

The grid itself need not be equidistant (I just used one for simplicity); it may (and probably should) be adaptive, and generally non-uniform.

As a final illustration, consider an equation with a non-trivial but symbolically integrable kernel:

In[146]:= sinkern =  computeDoubleIntK[50*Sin[Pi/2*(#1-#2)]&]

Out[146]= (100 (2 Sin[1/2 \[Pi] (-#1+#2)]+Sin[(\[Pi] #2)/2] 
     (-2+\[Pi] #1)))/\[Pi]^2&

In[157]:= solSin = computeSolution[sinkern,0,1,0.01,{0,1}]
Out[157]= InterpolatingFunction[{{0.,1.}},<>]

[Plot of the solution solSin over the interval {0, 1}]

Here are some checks:

In[163]:= Chop[{solSin[0],solSin'[1]}]
Out[163]= {0,1.}

In[153]:= 
diff[x_?NumericQ]:=
  solSin''[x] - NIntegrate[50*Sin[Pi/2*(#1-#2)]&[x,x1]*solSin[x1],{x1,0,1}];

In[162]:= diff/@Range[0,1,0.1]

Out[162]=  {-0.0675775,-0.0654974,-0.0632056,-0.0593575,-0.0540479,-0.0474074,
   -0.0395995,-0.0308166,-0.0212749,-0.0112093,0.000369261}

To conclude, I just want to stress that one has to perform a careful error-estimation analysis for this method, which I did not do.

EDIT

You can also use this method to get the initial approximate solution, and then iteratively improve it using FixedPoint or other means - in this way you will have a relatively fast convergence and will be able to reach the required precision without the need to construct and solve huge matrices.
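
One possible shape of such a refinement loop, starting from the matrix-based solution solA computed above (a sketch only; rhs, iterate, and the tolerance/grid in SameTest are choices made here, and kernel is the doubly integrated kernel from computeDoubleIntK):

rhs[a_][x_?NumericQ] := 0 + 1*x + NIntegrate[kernel[x, x1] a[x1], {x1, 0, 1}];
iterate[a_] := FunctionInterpolation[rhs[a][x], {x, 0, 1}];
refined = FixedPoint[iterate, solA, 20,
   SameTest -> (Max[Table[Abs[#1[x] - #2[x]], {x, 0, 1, 0.05}]] < 10^-6 &)];

Here the term 0 + 1*x stands for a0 + aprime1*x with the boundary data of this example.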

Leonid Shifrin
  • Holy Cow! This is a lot of excellent work. I've already used a method with `FixedPoint` on the integral equation, and this seems like an awesome addition to my arsenal. – tsvikas Aug 08 '11 at 18:44
  • Ended up using a combination of this method and of: `f[A_] := FunctionInterpolation[x - Integrate[Integrate[Integrate[K[x2, x3] A[x3], {x3, 0, 1}], {x2, x1, 1}], {x1, 0, x}], {x, 0, 1}]; percision = 0.003; maxiterations = 100000; sol = FixedPoint[f, FunctionInterpolation[z, {z, 0, 1}], maxiterations, SameTest -> (N[Integrate[(#1[x] - #2[x])^2, {x, 0, 1}]] < percision N[Integrate[(#1[x])^2, {x, 0, 1}]] &)]`. thanks for your useful code! – tsvikas Aug 16 '11 at 12:03
  • @j0ker5 Thanks for the accept. I am happy it worked for you. It's a shame I was the only one to think that the question deserves a vote. – Leonid Shifrin Aug 16 '11 at 12:08

This is complementary to Leonid Shifrin's approach. We start with a linear function that interpolates the value and first derivative at the starting point. We use that in the integration with the given kernel function. We can then iterate, using each previous approximation in the integrated kernel that is used to make the next approximation.

I show an example below, using a more complicated kernel than just a constant function. I'll take it through two iterations and show tables of discrepancies.

kernel[x_, y_] := Sqrt[x]/(y^2 + 1/5)*Sin[x^2 + y]
intkern[x_?NumericQ, aa_] := 
 NIntegrate[kernel[x, y]*aa[y], {y, 0, 1}, MinRecursion -> 2, 
  AccuracyGoal -> 3]

Clear[a];
a0 = 0;
a1 = 1;
a[0][x_] := a0 + a1*x

soln1 = a[1][x] /. 
   First[NDSolve[{a[1]''[x] == intkern[x, a[0]], 
      a[1][0] == a0, a[1][1] == a1}, a[1][x], {x, 0, 1}]];
a[1][x_] = soln1;

In[283]:= Table[a[1]''[x] - intkern[x, a[1]], {x, 0., 1, .1}]

Out[283]= {4.336808689942018*10^-19, 0.01145100326794241, \
0.01721655945379122, 0.02313249302884235, 0.02990900241909161, \
0.03778448183557359, 0.04676409320217928, 0.05657128568058478, \
0.06665818935524814, 0.07624149919589895, 0.08412643746245929}

In[285]:= 
soln2 = a[2][x] /. 
   First[NDSolve[{a[2]''[x] == intkern[x, a[1]], 
      a[2][0] == a0, a[2][1] == a1}, a[2][x], {x, 0, 1}]];
a[2][x_] = soln2;

In[287]:= Table[a[2]''[x] - intkern[x, a[2]], {x, 0., 1, .1}]

Out[287]= {-2.168404344971009*10^-19, -0.001009606971360516, \
-0.00152476679745811, -0.002045817184941901, -0.002645356229312557, \
-0.003343218015068372, -0.004121109614310836, -0.004977453722712966, \
-0.005846840469889258, -0.006731367269472544, -0.007404971586975062}

So we have errors of less than .01 at this stage. Not too bad. One drawback is that it was fairly slow to get the second approximation. There may be ways to tune NDSolve to improve on that.
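
If more iterations are needed, the steps above could be wrapped into a helper along these lines (a sketch; nextApprox is a name introduced here, reusing intkern and the boundary data a0, a1 from above):

nextApprox[prev_] := Module[{u, x},
   u /. First[NDSolve[{u''[x] == intkern[x, prev], u[0] == a0, u[1] == a1},
      u, {x, 0, 1}]]];

(* e.g. a third approximation, with the same residual check as before *)
a3 = nextApprox[a[2]];
Table[a3''[x] - intkern[x, a3], {x, 0., 1, .1}]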

This is complementary to Leonid's method for two reasons.

(1) If this did not converge well because the initial linear approximation was not sufficiently close to the true result, one might instead begin with an approximation found by a finite differencing scheme. That would be akin to what he did.

(2) He pretty much indicated this himself, as a method that might follow his and produce refinements.

Daniel Lichtblau

The way your equation is currently written, A''[x] == const, and that constant is independent of x. Hence the solution always has the form of a quadratic polynomial. Your problem then reduces to solving for indeterminate coefficients:

In[13]:= A[x_] := a2 x^2 + a1 x + a0;

In[14]:= K[x_, x1_] := 1;

In[16]:= Solve[{A''[x] == Integrate[K[x, x1] A[x1], {x1, 0, 1}], 
  A[0] == 0, A'[1] == 1}, {a2, a1, a0}]

Out[16]= {{a2 -> 3/16, a1 -> 5/8, a0 -> 0}}

In[17]:= A[x] /. First[%]

Out[17]= (5 x)/8 + (3 x^2)/16
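
If K does depend on x but is, say, polynomial in x, a similar indeterminate-coefficients ansatz might still apply; here is a sketch (not part of the original answer, with a hypothetical kernel K2 and a cubic ansatz A2):

K2[x_, x1_] := x*x1;
A2[x_] := a3 x^3 + a2 x^2 + a1 x + a0;
SolveAlways[{A2''[x] == Integrate[K2[x, x1] A2[x1], {x1, 0, 1}],
  A2[0] == 0, A2'[1] == 1}, x]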
Sasha
  • Thanks. The problem with that is that `A''[x] == const` only when `K` does not depend on `x` and `x1`. Unfortunately, this is only true for the simplified example. (but I will try to see if any similar approach can be valid in my case) – tsvikas Aug 07 '11 at 19:55