1

I tried to get this to work using Newton's method as described here: wiki, with the following code, but the problem is that it gives an accurate result only up to 16 decimal places. I tried increasing the number of iterations, but the result stays the same. I started with an initial guess of 1. So how can I improve the accuracy of the answer (up to 100 or more decimal places)? Thanks. Code:

#include <stdio.h>

double x0,x1;
#define n 2
double f(double x0)
{
    return (x0*x0)-n;
}
double firstDerv(double x0)
{
    return 2.0*x0;
}
int main(void)
{
    x0 = n/2.0;
    int i;
    for(i=0;i<40000;i++)
    {
        x1=x0-(f(x0)/firstDerv(x0));
        x0=x1;
    }
    printf("%.100f\n",x1);
    return 0;
}
pranay
  • Have you searched on what is the accuracy of the `double` datatype you are using? Read this: http://www.cplusplus.com/doc/tutorial/variables/ – ypercubeᵀᴹ Dec 14 '11 at 13:40
  • With an IEEE standard `double`, you can't get such a high precision. You'll need a bigger floating-point type. – Fred Foo Dec 14 '11 at 13:40
  • http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic#Libraries has a list of arbitrary value libraries. Check out one of those to get more than 16 digits of accuracy. – tafoo85 Dec 14 '11 at 13:43

4 Answers

4

To get around the problem of limited-precision floating point, you can also use Newton's method to find in each iteration a rational (a/b, with a and b integers) that is a better approximation of sqrt(2).

If x=a/b is the value returned from your last iteration, then Newton's method states that the new estimate y=c/d is:

y = x/2 + 1/x = a/(2b) + b/a = (a^2+2b^2)/(2ab)

so:

c= a^2 + 2b^2

d= 2ab

Precision doubles with each iteration. You are still limited in the precision you can reach, because the numerator and denominator quickly increase, but perhaps finding an implementation of large integers (or concocting one yourself) is easier than finding an implementation of arbitrary-precision floating point. Also, if you are really interested in decimals, then this answer won't help you. It does give you a very precise estimate of sqrt(2).

Just some iterates of a/b for the algorithm:

1/1 , 3/2 , 17/12 , 577/408 , 665857/470832.

665857/470832 approximates sqrt(2) with an error of 1.59e-12. The error remains of the order 1/a^2, so implementing a and b as 64-bit longs will give you a precision of 1e-37 -ish.

willem
  • +1 for suggesting the rational arithmetic approach. I can imagine writing a long division routine that spits out the decimal digits one at a time, if one wants to avoid an arbitrary precision floating point library. Of course at some point even 64 bit integers will prove inadequate for the purpose; those would give the roughly 1e-37 precision (as a rational pair) you suggest, and the OP wants to go well beyond that. So perhaps [using GMP](http://gmplib.org/) solely for its multiprecision integers is a happy medium. – hardmath Dec 14 '11 at 18:55
2

The floating point numbers on current machines are IEEE 754 and have limited precision (about 15-16 significant decimal digits).

If you want much more precision, you'll need bignums, which are provided (in software, hence more slowly) by libraries like GMP.

You could also code your program in a language whose implementation provides bignums natively.

Basile Starynkevitch
2

You simply can't do it with that approach; doubles don't have enough bits to get 100 places of precision. Consider using a library for arbitrary-precision such as GMP.

Michael J. Barber
0

Maybe this is because floating point numbers are represented by computers in an m*2^e form (mantissa and exponent). Since m and e consist of a finite number of bits, you cannot represent all numbers with absolute accuracy.

Think of 1/3, which is 0.333333333333333...

zuloo