10

I'm looking for a specialised algorithm to find positive real solutions to quartic equations with real coefficients (also known as bi-quadratic or polynomial equations of order 4). They have the form:

a4 x^4 + a3 x^3 + a2 x^2 + a1 x + a0 = 0

with a0, a1, ..., a4 being real numbers.

It's supposed to run on a microcontroller, which will need to do quite a lot of those calculations. So performance is an issue. That's why I'm looking for a specialised algorithm for positive solutions. If possible I'd like it to compute the exact solutions.

I know there is a general way to compute the solution of a quartic equation but it is rather involved in terms of computation.

Can someone point me in the right direction?

Edit:

Judging from the answers: some seem to have misunderstood me (though I was pretty clear about it). I know of the standard ways of solving quartic equations. They don't do it for me - they neither fit in memory nor are sufficiently fast. What I would need is a high-accuracy, highly efficient algorithm to find only real solutions (if that helps) to quartic equations with real coefficients. I'm not sure there is such an algorithm, but I thought you guys might know. P.S.: The downvotes didn't come from me.

  • Seriously, have you actually tried Newton's method (aka Newton-Raphson) or even plain old binary search, and found it not to fit in memory or not fast enough? This is what many calculators use and books on computer algebra recommend, and I'd be really curious exactly what sort of application you have where Newton's method isn't fast enough and you expect there exists something even better. – ShreevatsaR Jul 03 '11 at 14:39
  • *bi-quadratic* is not the same as *quartic*: a bi-quadratic equation is a quartic equation in which there are no terms of odd order. – lhf Jul 03 '11 at 14:42
  • Can you explain your application a bit more? Can you use floating point? Is it for a lookup table or a raytracer? – whoplisp Jul 03 '11 at 16:13
  • If you can reliably refactor the quartic equation into a product of two quadratics you can use the standard formula from high school to solve for the roots of each one. The key is "reliably". – duffymo Jul 04 '11 at 01:20
  • Are the a_i constants or do they vary at runtime? – starblue Jul 11 '11 at 08:47

7 Answers

6

This is one of those situations where it is probably easier to find all the roots using complex arithmetic than to only find the positive real roots. And since it sounds like you need to find multiple roots at once, I would recommend using the Durand-Kerner method, which is basically a refinement of the method of Weierstrass:

http://en.wikipedia.org/wiki/Durand%E2%80%93Kerner_method

Weierstrass' method is in turn a refinement of Newton's method that solves for all the roots of the polynomial in parallel (and it has the big advantage that it is brain-dead easy to code up). It converges at about a quadratic rate in general, though only linearly for multiple roots. For most quartic polynomials, you can pretty much nail the roots in just a few iterations. If you need a more general-purpose solution, then you should instead use Jenkins-Traub:

http://en.wikipedia.org/wiki/Jenkins%E2%80%93Traub_method

This is faster for higher degree polynomials, and basically works by converting the problem into finding the eigenvalues of the companion matrix:

http://en.wikipedia.org/wiki/Companion_matrix

EDIT: As a second suggestion, you could also try using the power method on the companion matrix. Since your equation has only non-negative coefficients, you may find it useful to apply the Perron-Frobenius theorem to the companion matrix. At a minimum, this certifies that there exists at least one non-negative root:

http://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius_theorem

Mikola
  • FYI there is an open source C++ implementation (BSD license) of the Jenkins-Traub algorithm called [RPOLY++](https://github.com/sweeneychris/RpolyPlusPlus) – kip622 Sep 20 '15 at 15:05
2

Yes, there are general ways. You need a root-finding algorithm, like bracketing and bisection, secant, false position, Ridders', Newton-Raphson, deflation, Muller, Laguerre, or Jenkins-Traub - did I leave anyone out?

Check out "Numerical Recipes" for details.

duffymo
1

I realize this answer is rather late, but I think a good alternative to the methods already mentioned is TOMS Algorithm 326, which is based on the paper "Roots of Low Order Polynomials" by Terence R. F. Nonweiler, CACM (April 1968).

This is an algebraic solution to 3rd and 4th order polynomials that is reasonably compact and fast. It is certainly much simpler and much faster than Jenkins-Traub and doesn't require iteration.

Don't use the TOMS code however, as it was done rather poorly.

A rewrite of this algorithm is given here, which was rigorously tested for accuracy.

user5108_Dan
1

Here is the C/C++ code that I wrote. It only gives you the real roots of the equation.

#include <stdio.h>
#include <stdlib.h> /* atof, EXIT_SUCCESS/EXIT_FAILURE */
#include <math.h>
/*--------------------------------------------

 --------------------------------------------*/
double cubic(double b,double c,double d)
{
    /* Returns one real root of x^3 + b*x^2 + c*x + d = 0,
       via the depressed cubic t^3 + p*t + q = 0 with x = t - b/3. */
    double p=c-b*b/3.0;
    double q=2.0*b*b*b/27.0-b*c/3.0+d;

    if(p==0.0) return cbrt(-q)-b/3.0; /* t^3 = -q */
    if(q==0.0) return -b/3.0;         /* t = 0 is a root */

    double t=sqrt(fabs(p)/3.0);
    double g=1.5*q/(p*t);
    if(p>0.0)
    return -2.0*t*sinh(asinh(g)/3.0)-b/3.0;


    if(4.0*p*p*p+27.0*q*q<0.0)
    return 2.0*t*cos(acos(g)/3.0)-b/3.0;

    if(q>0.0)
    return -2.0*t*cosh(acosh(-g)/3.0)-b/3.0;

    return 2.0*t*cosh(acosh(g)/3.0)-b/3.0;
}
/*--------------------------------------------

 --------------------------------------------*/
int quartic(double b,double c,double d,double e,double* ans)
{
    /* Real roots of x^4 + b*x^3 + c*x^2 + d*x + e = 0, via the
       depressed quartic y^4 + p*y^2 + q*y + r and a real root m
       of its resolvent cubic. Returns the number of real roots
       stored in ans. */
    double p=c-0.375*b*b;
    double q=0.125*b*b*b-0.5*b*c+d;
    double m=cubic(p,0.25*p*p+0.01171875*b*b*b*b-e+0.25*b*d-0.0625*b*b*c,-0.125*q*q);
    if(q==0.0)
    {
        if(m<0.0) return 0;
        int nroots=0;
        double sqrt_2m=sqrt(2.0*m);
        if(-m-p>0.0)
        {
            double delta=sqrt(2.0*(-m-p));
            ans[nroots++]=-0.25*b+0.5*(sqrt_2m-delta);
            ans[nroots++]=-0.25*b-0.5*(sqrt_2m-delta);
            ans[nroots++]=-0.25*b+0.5*(sqrt_2m+delta);
            ans[nroots++]=-0.25*b-0.5*(sqrt_2m+delta);
        }

        if(-m-p==0.0)
        {
            ans[nroots++]=-0.25*b-0.5*sqrt_2m;
            ans[nroots++]=-0.25*b+0.5*sqrt_2m;
        }

        return nroots;
    }

    if(m<0.0) return 0;
    double sqrt_2m=sqrt(2.0*m);
    int nroots=0;
    if(-m-p+q/sqrt_2m>=0.0)
    {
        double delta=sqrt(2.0*(-m-p+q/sqrt_2m));
        ans[nroots++]=0.5*(-sqrt_2m+delta)-0.25*b;
        ans[nroots++]=0.5*(-sqrt_2m-delta)-0.25*b;
    }

    if(-m-p-q/sqrt_2m>=0.0)
    {
        double delta=sqrt(2.0*(-m-p-q/sqrt_2m));
        ans[nroots++]=0.5*(sqrt_2m+delta)-0.25*b;
        ans[nroots++]=0.5*(sqrt_2m-delta)-0.25*b;
    }

    return nroots;
}
/*--------------------------------------------

 --------------------------------------------*/
int main(int nargs,char* args[])
{
    if(nargs!=6)
    {
        printf("5 arguments are needed\n");
        return EXIT_FAILURE;
    }
    double a=atof(args[1]);
    double b=atof(args[2]);
    double c=atof(args[3]);
    double d=atof(args[4]);
    double e=atof(args[5]);
    if(a==0.0)
    {
        printf("1st argument should be nonzero\n");
        return EXIT_FAILURE;
    }

    int nroots;
    double ans[4];
    nroots=quartic(b/a,c/a,d/a,e/a,ans);
    if(nroots==0)
        printf("Equation has no real roots!\n");
    else
    {
        printf("Equation has %d real roots: ",nroots);
        for(int i=0;i<nroots-1;i++) printf("%.16lf, ",ans[i]);
        printf("%.16lf\n",ans[nroots-1]);
    }

    return EXIT_SUCCESS;
}
Lawless
  • This is short and simple and, more importantly, it seems to work based on my limited testing. – JohnV2 Jan 31 '19 at 18:35
  • It's not working for all inputs. Example polynomial a=0.0625, b=0, c=-0.21, d=0, e=0.17 gives ±1.335, whereas the actual answer is ±1.16619 and ±1.41421. See [answer at wolfram alpha](https://www.wolframalpha.com/input/?i=0.0625*t%5E4+%2B+0*t%5E3+%2B+-0.21*t%5E2+%2B+0*t+%2B+0.17) (I appreciate the solution and I'm not complaining. Just a heads up for other users) – Johannes Hoff Mar 31 '20 at 15:14
  • You are right. There was a bug for a special case. I updated the code. – Lawless Apr 01 '20 at 20:12
1

Can you supply good starting values to ensure that you always find all solutions? Newton's method would converge fast.

I checked in Maxima:

solve(a*x^4+b*x^3+c*x^2+d*x+e=0,x);

The solution indeed looks horrible. You can easily run into stability problems, which happen whenever you subtract two floating-point numbers that have close values.

However, if the coefficients are constant you can just implement the direct formula. You can obtain it either by installing Maxima or by entering the equation on wolframalpha.com.

whoplisp
  • A common cause of stability problems from cancellation would be when there are two roots that are very close to each other. – Jul 03 '11 at 14:33
1

Take a look at Ferrari's method. It involves quite a bit of computation, but may serve your needs.

Murat Derya Özen
  • I know of the standard methods to solve those equations. I thought there might be a more efficient algorithm for computing only POSITIVE solutions to equations with real coefficients. That's what I asked. – con-f-use Jul 03 '11 at 14:07
  • NB, the link has changed, it is now https://en.wikipedia.org/wiki/Quartic_function#Ferrari.27s_solution – Kyle Kanos Dec 09 '15 at 15:59
1

No. There is no magic way to find the roots of a 4th order polynomial equation, at least not without doing the work. Yes, there is a formula for 4th order polynomials that involves complex arithmetic, but it will return all the roots: complex, real, positive, and negative. Either use an iterative scheme, or do the algebra.

You are lucky there is an analytical solution at all. Had you a 5th order polynomial, that would not even exist.

  • I think you got the number there off by one. Quartics are solvable by radicals, and you could even solve it exactly in rational arithmetic + some field extensions. On the other hand, Galois theory tells us that this method only breaks down for **5th** order or higher polynomials. – Mikola Jun 02 '12 at 20:45
  • @Mikola - Apparently you did not read what I said. I said, that If the polynomial was 5th order, it would be completely impossible, and that for a 4th order polynomial you must still go through the algebra. I did NOT say that a 4th order polynomial was unsolvable, merely that there was no trivial solution that would yield only the positive solutions. –  Jun 03 '12 at 02:37
  • The revised version is a bit more clear, but the phrase "Even if all you want are the positive real roots, it does not exist." threw me. I guess I was not sure what you meant by this, because it sounds like you were saying that no solution by radicals exists for the quartic (which is false). – Mikola Jun 04 '12 at 19:05