
I would like to compare different methods of finding roots of functions in Python (like Newton's method or other simple calculus-based methods). I don't think I will have too much trouble writing the algorithms themselves.

What would be a good way to make the actual comparison? I read up a little bit about Big-O. Would this be the way to go?

– Meo

7 Answers


The answer from @sarnold is right -- it doesn't make sense to do a Big-O analysis.

The principal differences between root finding algorithms are:

  • rate of convergence (number of iterations)
  • computational effort per iteration
  • what is required as input (e.g., do you need to know the first derivative, do you need to set lo/hi limits for bisection, etc.)
  • what functions it works well on (e.g., it works fine on polynomials but fails on functions with poles)
  • what assumptions it makes about the function (e.g., a continuous first derivative, analyticity, etc.)
  • how simple the method is to implement

I think you will find that each of the methods has some good qualities, some bad qualities, and a set of situations where it is the most appropriate choice.
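To make the comparison concrete, here is a minimal sketch (not a hardened implementation) of two methods instrumented to report iteration counts, so you can compare them on the first three axes directly -- bisection needs bracketing limits but no derivative, Newton needs the derivative and a starting guess:

    def bisection(f, lo, hi, tol=1e-12, max_iter=200):
        """Bisection: needs a sign change on [lo, hi] but no derivative."""
        iterations = 0
        while hi - lo > tol and iterations < max_iter:
            mid = (lo + hi) / 2.0
            if f(lo) * f(mid) <= 0:    # root is in the left half
                hi = mid
            else:                       # root is in the right half
                lo = mid
            iterations += 1
        return (lo + hi) / 2.0, iterations

    def newton(f, fprime, x0, tol=1e-12, max_iter=200):
        """Newton: needs the first derivative and a good starting guess."""
        x, iterations = x0, 0
        while iterations < max_iter:
            step = f(x) / fprime(x)
            x -= step
            iterations += 1
            if abs(step) < tol:
                break
        return x, iterations

    # Same problem for both: f(x) = x**2 - 2, whose positive root is sqrt(2).
    f = lambda x: x * x - 2
    print(bisection(f, 0.0, 2.0))           # many cheap, robust iterations
    print(newton(f, lambda x: 2 * x, 1.0))  # few iterations, more input needed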

– Raymond Hettinger

Big O notation is ideal for expressing the asymptotic behavior of algorithms as the inputs to the algorithms "increase". This is probably not a great measure for root finding algorithms.

Instead, I would think the number of iterations required to bring the actual error below some epsilon ε would be a better measure. Another measure would be the number of iterations required to bring the difference between successive iterates below some epsilon ε. (The difference between successive iterates is probably a better choice if you don't have exact root values at hand for your inputs. In practice you would use a criterion such as successive differences to know when to terminate your root finder, so you could, and probably should, use it here too.)
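As a rough sketch of that second measure, the harness below counts iterations until successive estimates differ by less than eps; the Newton update for f(x) = x**2 - 2 is just one example iteration to plug in:

    def iterations_until(step, x0, eps=1e-12, max_iter=1000):
        """Count iterations until successive iterates differ by less than eps.

        `step` maps the current estimate to the next one, so the same
        harness works for any fixed-point style root finder.
        """
        x = x0
        for n in range(1, max_iter + 1):
            x_next = step(x)
            if abs(x_next - x) < eps:
                return x_next, n
            x = x_next
        raise RuntimeError("did not converge within max_iter iterations")

    # Example: Newton's update for f(x) = x**2 - 2.
    newton_step = lambda x: x - (x * x - 2) / (2 * x)
    root, n = iterations_until(newton_step, x0=1.0)
    print(root, n)   # ~1.4142135623730951 after a handful of iterations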

While you can characterize the number of iterations required for different algorithms by the ratios between them (one algorithm may take roughly ten times more iterations to reach the same precision as another), there often isn't "growth" in the iterations as inputs change.

Of course, if your algorithms take more iterations with "larger" inputs, then Big O notation makes sense.

– sarnold
  • You **can** use Big O to count the number of iterations as epsilon -> 0. For instance, Newton typically needs O(1/sqrt(epsilon)) iterations to reach precision epsilon, while Brent needs O(1/epsilon^0.618). – Alexandre C. Dec 13 '11 at 10:39
  • @Alexandre: true enough, but both your examples have different treatments of epsilon -- my _first draft_ was actually about using _-log(epsilon)_ as the independent variable for Big O analysis, but the more I thought about it, the less I liked it. – sarnold Dec 13 '11 at 23:40

Big-O notation is designed to describe how an algorithm behaves in the limit, as n goes to infinity. This is much easier to work with in a theoretical study than in a practical experiment. I would pick things to study that you can easily measure and that people care about, such as accuracy and the computer resources (time/memory) consumed.

When you write and run a computer program to compare two algorithms, you are performing a scientific experiment, just like somebody who measures the speed of light, or somebody who compares the death rates of smokers and non-smokers, and many of the same factors apply.

Try to choose an example problem, or problems, to solve that is representative, or at least interesting to you, because your results may not generalise to situations you have not actually tested. You may be able to increase the range of situations to which your results apply if you sample at random from a large set of possible problems and find that all your random samples behave in much the same way, or at least follow much the same trend. You can get unexpected results even when theoretical studies predict a nice n log n trend, because theoretical studies rarely account for suddenly running out of cache or memory, or usually even for things like integer overflow.

Be alert for sources of error, and try to minimise them, or have them apply to the same extent to all the things you are comparing. Of course you want to use exactly the same input data for all of the algorithms you are testing. Make multiple runs of each algorithm, and check how variable the results are - perhaps a few runs are slower because the computer was doing something else at the time. Be aware that caching may make later runs of an algorithm faster, especially if you run them immediately after each other. Which time you want depends on what you decide you are measuring. If you have a lot of I/O to do, remember that modern operating systems and computers cache huge amounts of disk I/O in memory. I once ended up powering the computer off and on after every run, as the only way I could find to be sure that the device I/O cache was flushed.
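As a small example of that kind of measurement, here is a sketch using `timeit.repeat`, so each candidate runs several times on identical input and the run-to-run spread is visible (the bisection solver here is just a stand-in for whatever implementations you are benchmarking):

    import statistics
    import timeit

    f = lambda x: x ** 3 - x - 2       # identical test problem for every run

    def bisect_root(lo=1.0, hi=2.0, tol=1e-12):
        while hi - lo > tol:
            mid = (lo + hi) / 2.0
            if f(lo) * f(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2.0

    # Several repeats expose variability from caching, other processes, etc.
    times = timeit.repeat(bisect_root, repeat=5, number=1000)
    print("min %.4fs  mean %.4fs  max %.4fs" %
          (min(times), statistics.mean(times), max(times)))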

– mcdowella

You can get wildly different answers for the same problem just by changing starting points. Pick an initial guess that's close to the root and Newton's method will give you a result that converges quadratically. Choose another in a different part of the problem space and the root finder will diverge wildly.

What does this say about the algorithm? Good or bad?
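As a concrete example, here is a sketch using the classic case f(x) = x**3 - 2x + 2, whose only real root is near -1.7693: started at x0 = -2, Newton's method converges in a few iterations, while started at x0 = 0, the iterates fall into the cycle 0 -> 1 -> 0 -> ... and never converge:

    def newton(f, fprime, x0, tol=1e-12, max_iter=100):
        """Plain Newton iteration; returns None if it fails to converge."""
        x = x0
        for _ in range(max_iter):
            x_next = x - f(x) / fprime(x)
            if abs(x_next - x) < tol:
                return x_next
            x = x_next
        return None

    f = lambda x: x ** 3 - 2 * x + 2
    fprime = lambda x: 3 * x ** 2 - 2

    print(newton(f, fprime, -2.0))   # ~ -1.7692923542386314 (converges)
    print(newton(f, fprime, 0.0))    # None: iterates cycle 0 -> 1 -> 0 -> ...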

– duffymo

I just finished a project comparing the bisection, Newton, and secant root-finding methods. Since this is a practical case, I don't think you need to use Big-O notation; Big-O notation is more suitable for an asymptotic view. What you can do is compare them in terms of:

Speed - for example, Newton's method is the fastest here when the right conditions are met

Number of iterations - for example, bisection takes the most iterations here

Accuracy - how often it converges to the right root when there is more than one root, and whether it converges at all

Input - what information it needs to get started; for example, Newton's method needs an x0 near the root in order to converge, and it also needs the first derivative, which is not always easy to find

Other - rounding errors

For the sake of visualization, you can store the value of each iteration in an array and plot it. Use a function whose roots you already know, as in the sketch below.
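For example, here is a sketch of that idea (assuming matplotlib is available), using f(x) = x**2 - 2 so the exact root sqrt(2) is known and the error at each iteration can be put on a log scale:

    import math
    import matplotlib.pyplot as plt

    def newton_history(f, fprime, x0, n_iter=8):
        """Return the estimate produced by every Newton iteration."""
        history = [x0]
        for _ in range(n_iter):
            x = history[-1]
            history.append(x - f(x) / fprime(x))
        return history

    f = lambda x: x * x - 2
    exact = math.sqrt(2)                 # known root, so errors are exact

    estimates = newton_history(f, lambda x: 2 * x, x0=1.0)
    # Floor at 1e-17 so the log plot copes once the error hits exactly zero.
    errors = [max(abs(x - exact), 1e-17) for x in estimates]

    plt.semilogy(range(len(errors)), errors, marker="o")
    plt.xlabel("iteration")
    plt.ylabel("|error|")
    plt.title("Newton's method on x**2 - 2")
    plt.show()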

– Iliass

Although this is a very old post, here are my 2 cents :)

Once you've decided which method to use to compare them (your "evaluation protocol", so to speak), you might be interested in ways to run your challengers on actual datasets.

This tutorial explains how to do it, based on an example (comparing polynomial fitting algorithms on several datasets).

(I'm the author, feel free to provide feedback on the github page!)

– smarie

I would suggest you have a look at the following Python root-finding demo. It is simple code, with several different methods and comparisons between them (in terms of their rate of convergence).

http://www.math-cs.gordon.edu/courses/mat342/python/findroot.py