I am being asked to find the roots of f(x) = 5x e^(-mod(x)) cos(x) + 1. I previously used the Durand-Kerner method to find the roots of x^4 - 3x^3 + x^2 + x + 1 with the code shown below. I thought I could simply reuse that code for f(x), but whenever I replace x^4 - 3x^3 + x^2 + x + 1 with f(x), the program outputs nan for all the roots. What is wrong with my Durand-Kerner implementation, and how do I modify it to work for f(x)? I would be very grateful for any help.

#include <iostream>
#include <complex>
#include <cmath>

using namespace std;

typedef complex<double> dcmplx;

dcmplx f(dcmplx x)
{
    // the function we are interested in
    double a4 = 1;
    double a3 = -3;
    double a2 = 1;
    double a1 = 1;
    double a0 = 1;

    return (a4 * pow(x,4) + a3 * pow(x,3) + a2 * pow(x,2) + a1 * x + a0);
}


int main()
{

// four distinct complex starting guesses, one per root
dcmplx p(.9, 2);
dcmplx q(.1, .5);
dcmplx r(.7, 1);
dcmplx s(.3, .5);

dcmplx p0, q0, r0, s0;   // previous iterates, used in the convergence test

int max_iterations = 100;
bool done = false;
int i=0;

while (i<max_iterations && done == false)
{
    p0 = p;
    q0 = q;
    r0 = r;
    s0 = s;


    // Weierstrass / Durand-Kerner simultaneous updates
    p = p0 - f(p0)/((p0-q)*(p0-r)*(p0-s));
    q = q0 - f(q0)/((q0-p)*(q0-r)*(q0-s));
    r = r0 - f(r0)/((r0-p)*(r0-q)*(r0-s));
    s = s0 - f(s0)/((s0-p)*(s0-q)*(s0-r));

    // if convergence within small epsilon, declare done
    if (abs(p-p0)<1e-5 && abs(q-q0)<1e-5 && abs(r-r0)<1e-5 && abs(s-s0)<1e-5)
        done = true;

    i++;
}

cout<<"roots are :\n";
cout << p << "\n";
cout << q << "\n";
cout << r << "\n";
cout << s << "\n";
cout << "number steps taken: "<< i << endl;

return 0;
}

The only thing I have changed so far is the dcmplx f function, replacing it with:

dcmplx f(dcmplx x)
{
    // the function we are interested in
    double a4 = 5;

    double a0 = 1;

    return (a4 * x * exp(-x) * cos(x) )+ a0;
}
Ca01an

2 Answers


The Durand-Kerner method that you're using requires the function to be continuous on the interval on which you are working.

Here we have a discrepancy between the mathematical view and the limits of numeric applications. I'd suggest you plot your function (typing the formula into Google will give you a quick overview, of course only for the real part). You'll notice that:

  • there are infinitely many roots, due to the periodicity of the cosine.
  • due to the x*exp(-x) factor, the absolute value quickly grows beyond the largest value a floating-point number can hold (for Re x < 0, |exp(-x)| = exp(-Re x) grows exponentially).

To understand the consequences for your code, I invite you to trace the successive iterations (see the sketch after this list). You'll notice that p, r and s converge very quickly while q diverges (apparently chasing one of the huge peaks):

  • At the 2nd iteration, q is already at 1e74.
  • At the 3rd iteration, it is already beyond what a double can store.
  • As q is used in the calculation of p, r and s, the error propagates to the other terms.
  • At the 5th iteration, all terms are NaN.
  • It then continues bravely through all 100 iterations.
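For illustration, a minimal trace sketch: one extra statement to place at the end of your while loop body, right after the update of s:

    // trace the magnitude of each iterate to watch q blow up
    cout << "iter " << i << "  |p|=" << abs(p) << "  |q|=" << abs(q)
         << "  |r|=" << abs(r) << "  |s|=" << abs(s) << "\n";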

Perhaps you could make it work by choosing different starting points. If not, you'll have to use some other method and carefully select the interval on which you're working.

Christophe
  • Any idea what other method I could use? – Ca01an Dec 14 '14 at 19:27
  • I'm not a mathematician. But I would give the simple Newton-Raphson ( x_(i+1) = x_i - f(x_i)/f'(x_i) ) a try; a sketch follows below. Or the Brent method http://en.wikipedia.org/wiki/Brent%27s_method – Christophe Dec 14 '14 at 19:59
  • I confirm that the Brent method with the algorithm described on Wikipedia works well as long as you can give a point above and a point below the root. I've tried with doubles instead of complex and it found the root in 5 iterations. – Christophe Dec 14 '14 at 21:31
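A minimal sketch of that Newton-Raphson iteration, restricted to the real axis and using the exp(-x) form of f from the question; the derivative is worked out by hand, and the starting point -0.5 is an assumed guess near the sign change of f:

#include <iostream>
#include <cmath>

// real-axis version of the function from the question: 5 x e^(-x) cos(x) + 1
double f(double x)  { return 5 * x * std::exp(-x) * std::cos(x) + 1; }

// hand-derived derivative: f'(x) = 5 e^(-x) ((1 - x) cos(x) - x sin(x))
double df(double x) { return 5 * std::exp(-x) * ((1 - x) * std::cos(x) - x * std::sin(x)); }

int main()
{
    double x = -0.5;                       // assumed starting guess near the root
    for (int i = 0; i < 50 && std::fabs(f(x)) > 1e-12; ++i)
        x -= f(x) / df(x);                 // Newton step: x_(i+1) = x_i - f(x_i)/f'(x_i)
    std::cout << "root near " << x << ", f(root) = " << f(x) << "\n";
    return 0;
}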

You should have noted in your documentation of the Durand-Kerner method (invented by Karl Weierstrass in 1891) that it only applies to polynomials. Your second function is far from being a polynomial.

Indeed, because of the mod function it has to be regarded as a nasty function for numerical methods. Most of them rely on the continuity of the given function, i.e., if the value is close to zero there is a good chance that there is a root nearby, and if the sign changes on an interval then there is a root in that interval. Even the most basic derivative-free methods, from the bisection method up to Brent's method at the sophisticated end of that class, presuppose these properties.
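To make the sign-change idea concrete, here is a minimal bisection sketch on the real axis, again assuming the exp(-x) form of f from the question; the bracket [-1, 0] is an assumption, chosen because f(-1) < 0 and f(0) = 1 > 0:

#include <iostream>
#include <cmath>

// real-axis version of the function from the question: 5 x e^(-x) cos(x) + 1
double f(double x) { return 5 * x * std::exp(-x) * std::cos(x) + 1; }

int main()
{
    double lo = -1, hi = 0;                // assumed bracket: f(lo) < 0 < f(hi)
    while (hi - lo > 1e-10) {
        double mid = 0.5 * (lo + hi);
        if (f(lo) * f(mid) <= 0)           // sign change in the left half
            hi = mid;
        else                               // sign change in the right half
            lo = mid;
    }
    std::cout << "root near " << 0.5 * (lo + hi) << "\n";
    return 0;
}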

Lutz Lehmann