
I have tried many algorithms for finding π using Monte Carlo. One of the solutions (in Python) is this:

from random import uniform

def calc_PI():
    n_points = 1000000
    hits = 0

    for _ in range(n_points):  # range(1, n_points) dropped one point
        x, y = uniform(0.0, 1.0), uniform(0.0, 1.0)

        # count the points that land inside the quarter circle of radius 1
        if x**2 + y**2 <= 1.0:
            hits += 1

    # the hit fraction approximates the area ratio pi/4
    print("Calc2: PI result", 4.0 * hits / n_points)

The sad part is that even with 1000000000 points the precision is still VERY bad (3.141...).

Is this the maximum precision this method can offer? The reason I chose Monte Carlo was that it's very easy to break into parallel parts. Is there another algorithm for π that is easy to break into pieces and calculate?

Jon Romero

3 Answers


This is a classic example of Monte Carlo. But if you're trying to break the calculation of pi into parallel parts, why not just use an infinite series and let each core take a range, then sum the results as you go?

http://mathworld.wolfram.com/PiFormulas.html
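
Here is a minimal sketch of that range-splitting idea, using the slowly converging Leibniz series pi = 4 * sum((-1)^k / (2k + 1)); the series, worker count, and names are my own illustrative choices, since the answer only points at the formula list:

# Sketch: each worker sums its own range of Leibniz-series terms,
# then the partial sums are combined. Series choice and names are
# illustrative, not prescribed by the answer above.
from multiprocessing import Pool

def partial_sum(bounds):
    start, stop = bounds
    # sum of (-1)^k / (2k + 1) for k in [start, stop)
    return sum((-1.0 if k % 2 else 1.0) / (2 * k + 1)
               for k in range(start, stop))

if __name__ == "__main__":
    n_terms, n_workers = 4000000, 4
    step = n_terms // n_workers
    ranges = [(i * step, (i + 1) * step) for i in range(n_workers)]
    with Pool(n_workers) as pool:
        print("PI result", 4.0 * sum(pool.map(partial_sum, ranges)))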

Nosredna
  • That was my first approach. But I thought about playing a little bit with Monte Carlo because it can be used in many fields. – Jon Romero Jun 11 '09 at 17:28
  • Use Monte Carlo when it's hard to find a formula. Use the formula when it's easy to find the formula. – Nosredna Jun 11 '09 at 17:32

Your fractional error goes as sqrt(N)/N = 1/sqrt(N), so this is a very inefficient way to get a precise estimate. This limit is set by the statistical nature of the measurement and can't be beaten.

You should be able to get about floor(log_10(N))/2 - 1 digits of good precision for N throws. Maybe -2 just to be safe... For example, at N = 1000000000 you have 1/sqrt(N) ≈ 3e-5, so only the first four digits or so are trustworthy, which is exactly the "3.141..." you are seeing.

Even then, this assumes that you are using a true RNG or a good-enough PRNG.
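
A quick way to see this limit empirically (my own illustration, not part of the original answer) is to estimate pi at a few values of N and compare the error against 1/sqrt(N):

# The measured error should roughly track 1/sqrt(N) as N grows.
import math
import random

def estimate_pi(n_points):
    hits = sum(1 for _ in range(n_points)
               if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * hits / n_points

for n in (10**3, 10**4, 10**5, 10**6):
    error = abs(estimate_pi(n) - math.pi)
    print(n, error, 1.0 / math.sqrt(n))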

dmckee --- ex-moderator kitten

Use a quasi-random number generator (http://www.nag.co.uk/IndustryArticles/introduction_to_quasi_random_numbers.pdf) instead of a standard pseudo-RNG. Quasi-random numbers cover the integration area (what you're doing is an MC integration) more evenly than pseudo-random numbers, giving better convergence.
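
For example, here is a minimal sketch using the Sobol sequence from SciPy's scipy.stats.qmc module; that module is a modern stand-in of my choosing (it did not exist when this answer was written), not something the answer itself names:

# Quasi-random (Sobol) points in the unit square instead of
# pseudo-random draws; scipy.stats.qmc is an assumed dependency.
import math
from scipy.stats import qmc

sampler = qmc.Sobol(d=2, scramble=True)
points = sampler.random_base2(m=20)  # 2**20 points in [0, 1)^2
hits = ((points ** 2).sum(axis=1) <= 1.0).sum()
pi_estimate = 4.0 * hits / len(points)
print(pi_estimate, abs(pi_estimate - math.pi))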

quant_dev
  • My naive guess is that while this *will* converge faster, it may be harder to estimate the confidence bounds. Do you know of any literature on the matter? – dmckee --- ex-moderator kitten Jun 12 '09 at 15:01
  • No, but there is a C library http://www.feynarts.de/cuba/ which has implemented MC integration including the confidence bounds (it returns an absolute error estimate and Chi^2 probability that the estimate is wrong). You can download the code and look through the implementation, or email the author asking for literature he used to write the code. – quant_dev Jun 13 '09 at 10:17
  • Ah! The author links to a paper on the matter. Published in Computational Physics Communications (and on the arXiv as http://arxiv.org/abs/hep-ph/0404043). The distressing part to me is that he's in the same line of work as I am and this is the first I've heard of this. D'oh! – dmckee --- ex-moderator kitten Jun 13 '09 at 14:58
  • My wife is using this library extensively in her research. In her experience, for low dimensions, the MC integration is outperformed by deterministic methods (routine "Cuhre" in Cuba library), even for "difficult" integrands with discontinuities. – quant_dev Jun 13 '09 at 15:39