I have an RKF7(8) integrator whose output I've verified with several simple test functions. However, when I use it on the equation of motion I'm interested in, the local truncation errors are suddenly very small. For a timestep of around 1e-1, my errors are all around 1e-18 or 1e-19. For the simple test functions (so far, a sine and an exponential), the errors are always reasonable, i.e., 1e-7 or so for the same timestep.

The only difference between the simple test functions and the problem function is that it's a huge chunk of code, like maybe 1000 different terms, some with relatively large exponents (like 9 or 10). Could that possibly affect the precision? Should I change my code to use long doubles?

David
    How do the errors behave when you change the step size by a factor of 2, h=0.05, h=0.2 etc? – Lutz Lehmann May 27 '20 at 16:56
  • Did you have also some vector cases among the test problems, for instance the simple circular motion or some orbit simulation for circular orbits? – Lutz Lehmann May 27 '20 at 17:11
  • Interesting. When I set the step size to 0.5, 0.25, and 0.2, the error quickly goes to 1e-11, which is still low, but the step size rapidly becomes >1. The sin test problem was technically a vector case, as I did it as two coupled 1st-order equations. Should I do a more complex case, like an orbit? – David May 27 '20 at 17:27
  • Ok, that seems right then for the method implementation. Looking for existing implementations I found one where the indentation in the loop over the stages was wrong, which might lead to errors that can only be seen in ODE systems. // What are the scales of the components that show this error, that is, what is the relative error? 1e-18 makes only sense in floating point if the component is of size 1e-4 or so. – Lutz Lehmann May 27 '20 at 17:36
  • With a method of order 7, the optimal step size is at `Lh=(2e-16)^(1/8)~0.01`. If the Lipschitz constant `L` of the system is small, then `h=0.1` can be in the region where the accumulation of floating point errors dominates the step error. In this range the error will be rather random, so large jumps like this would still be exceptional, but not unexpected. – Lutz Lehmann May 27 '20 at 17:36
  • the elements of the relative error (by which I assume you mean the difference between the 7th and 8th order solutions) are all of order 1e-19 (or 0 to within double precision) after the first step and then ~1e-11 for all further timesteps – David May 27 '20 at 17:43
  • No, that difference is an estimate for the local absolute error. You would have to divide by the components of the state vector to get the relative error. Do you start from zero or a system at rest? – Lutz Lehmann May 27 '20 at 17:47
  • I start from a nonzero initial condition, but whose elements are near unity, so when I take `abs(y+1_7 - y+1_8) / y` (for current state `y`, and 7-th and 8-th order solutions `y+1_7`, `y+1_8` ), the elements of the resulting vector are still ~1e-11 or ~1e-12 – David May 27 '20 at 17:55
  • @lutz-lehmann Do you have any thoughts on ways to mitigate this? Also, do you know why the very long equation of motion might induce this behavior? Sorry for all the questions! – David May 29 '20 at 11:48
  • Up to now I do not see anything that needs mitigation, do you have an indication that the computed results are actually wrong? It would give further insight to see a table or loglog plot of stepsize to global error for a larger range of step sizes. It might be that the ideal step size is indeed larger than 0.1, you would need a dense solution or other interpolation scheme to get values at a smaller target step size. – Lutz Lehmann May 29 '20 at 12:08
  • @lutz-lehmann Sorry, I should have mentioned: The step size quickly becomes greater than 1, after which it explodes in a few steps, but this only happens when integrating a huge equation. – David May 29 '20 at 12:43
  • This could be a problem in the step size control implementation and how it accounts for components of highly varying scales. Do you get similar problems if you apply this method to a sun-venus-earth solar system in km-s-units? (days-AU as unit base gives more balanced scales for position and velocities.) For sensible parameters see, e.g., https://stackoverflow.com/a/53870289/3088138 – Lutz Lehmann May 29 '20 at 13:10
  • I do not get similar problems for that system. One thing I do notice is that my system is a high-order expansion and contains some large exponents on the variables of integration. Could that have anything to do with it? Thank you very much for your responses; this has been a big help. – David May 30 '20 at 02:21
  • That sounds like a stiff problem. While in general the step size controller will notice this and regulate the step size down, sometimes down to a stand-still, there are also cases where stiff behavior is not noticeable with explicit embedded methods. There are some improvements with using embedded steps of other orders like in dopri853, and monitoring them for deviations from the expected error pattern, but in general best results are obtained using implicit methods. – Lutz Lehmann May 30 '20 at 04:24

1 Answer

Very interesting question. The problem you are facing might be related to the limitations of floating-point arithmetic. Since your function contains terms spanning a wide range of magnitudes, it is likely that you have some loss of precision in your calculations. In general, these problems can come in the form of:

  • Overflow
  • Underflow
  • Multiplication and division
  • Adding numbers of very different magnitudes
  • Subtracting numbers of similar magnitudes

Overflow and underflow occur when the numbers you are dealing with are too large or too small for the representable range of the floating-point type, and it would be my bet that this is not what happens in your system. Nevertheless, one must keep in mind that multiplication and division can push intermediate results into those regimes. On the other hand, adding numbers of very different magnitudes (or subtracting numbers of similar magnitudes) can lead to severe loss of precision due to roundoff error. From my experience with optimization problems that involve both large and small numbers, I would say this is a reasonable explanation for the behavior of your integrator.
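The last two failure modes are easy to reproduce in a couple of lines. A minimal sketch in Python (IEEE-754 doubles, the same format as C's `double`) showing absorption and catastrophic cancellation:

```python
# 1) Adding numbers of very different magnitudes: the small term is
#    absorbed entirely, because a double carries only ~16 decimal digits.
big_plus_small = (1e16 + 1.0) - 1e16
print(big_plus_small)  # 0.0, although the exact answer is 1.0

# 2) Subtracting numbers of similar magnitudes: the leading digits
#    cancel, leaving mostly roundoff noise in the result.
a = 1.0 + 1e-15
b = 1.0
diff = a - b
print(diff)  # ~1.11e-15 instead of 1e-15 (about 11% relative error)
```

With ~1000 terms of widely varying sizes in your right-hand side, effects like these can easily contaminate the low-order bits that an embedded error estimate compares.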

I have two suggestions for you. The first is, of course, to increase the precision of your numbers to the maximum available. This may or may not help, depending on how ill-conditioned your problem is. The second is to use a better algorithm for the sums in your numerical method. Instead of naively adding all the numbers sequentially, you can use a more elaborate strategy that divides the sum into sub-sums, effectively reducing roundoff error. Notable examples of such algorithms are pairwise summation and Kahan summation.
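To make the second suggestion concrete, here is a sketch of Kahan (compensated) summation in Python. The function name and the example data are mine, not from your code; the point is that the compensation term recovers low-order bits that a naive running sum discards:

```python
def kahan_sum(values):
    """Compensated summation: carries a correction term so that
    small addends are not silently absorbed by a large running total."""
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for x in values:
        y = x - c            # apply the correction to the next addend
        t = total + y        # big + small: low-order bits of y may be lost here
        c = (t - total) - y  # (t - total) recovers what was actually added;
                             # subtracting y isolates the lost part
        total = t
    return total

# Many tiny terms added to a large one: each 1e-16 is below the
# rounding threshold at 1.0, so the naive sum never moves.
terms = [1.0] + [1e-16] * 10_000
naive = sum(terms)           # 1.0 exactly: every small term is absorbed
compensated = kahan_sum(terms)  # ~1.000000000001, the correct answer
```

Kahan summation costs a few extra flops per term but its error bound is essentially independent of the number of addends, which is exactly what you want when a right-hand side has on the order of a thousand terms.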

I hope this answer offers you some clues. Good luck!

panadestein