
I am doing a numerical analysis exercise where I need to calculate the solution of a linear system using a specific algorithm. My answer differs from the book's answer in a few decimal places, which I believe is due to rounding errors. Is there a way to automatically make the arithmetic round to eight decimal places after each arithmetic operation? The following is my Python code.

import numpy as np

# 4x4 block: 4 on the diagonal, -1 on the sub/super diagonals
A1 = [4, -1, 0, 0, -1, 4, -1, 0,
      0, -1, 4, -1, 0, 0, -1, 4]
A1 = np.array(A1).reshape([4, 4])
I = -np.identity(4)      # -I off-diagonal blocks
O = np.zeros([4, 4])     # zero blocks

# 16x16 block-tridiagonal system matrix
A = np.block([[A1, I, O, O],
              [I, A1, I, O],
              [O, I, A1, I],
              [O, O, I, A1]])

b = np.array([1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6])

def conj_solve(A, b, pre=False):
    """Conjugate gradient method; with pre=True, precondition with
    C = diag(sqrt(a_ii))."""
    n = len(A)

    C = np.identity(n)
    if pre:
        for i in range(n):
            C[i, i] = np.sqrt(A[i, i])

    Ci = np.linalg.inv(C)
    Ct = np.transpose(Ci)

    # initial guess x = 0, residual r = b - Ax
    x = np.zeros(n)
    r = b - np.matmul(A, x)
    w = np.matmul(Ci, r)
    v = np.matmul(Ct, w)
    alpha = np.dot(w, w)

    for i in range(MAX_ITER):
        if np.linalg.norm(v, np.inf) < TOL:
            print(i+1, "steps")
            print(x)
            print(r)
            return x
        u = np.matmul(A, v)
        t = alpha/np.dot(v, u)        # step length
        x = x + t*v
        r = r - t*u
        w = np.matmul(Ci, r)
        beta = np.dot(w, w)

        if np.abs(beta) < TOL:
            if np.linalg.norm(r, np.inf) < TOL:
                print(i+1, "steps")
                print(x)
                print(r)
                return x
        s = beta/alpha
        v = np.matmul(Ct, w) + s*v    # next search direction
        alpha = beta
    print("Max iteration exceeded")
    return x

MAX_ITER = 1000
TOL = 0.05

sol = conj_solve(A, b, pre=True)

Using this, I get 2.55516527 as the first element of the array, whereas the book gives 2.55613420.

Alternatively, is there a language or program where I can specify the precision of the arithmetic?
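
To make it concrete, here is a toy sketch of what I mean; the `r8` helper is just something made up for illustration, and I would have to wrap every single operation with it by hand:

import numpy as np

# Hypothetical helper: round a scalar or array to 8 decimal places.
# (The result is still stored as a binary float, so this only
# approximates true 8-decimal arithmetic.)
def r8(a):
    return np.round(a, 8)

# Rounding after each operation for one matrix-vector product,
# one dot product, and one division:
A_small = np.array([[4.0, -1.0], [-1.0, 4.0]])
v = np.array([0.123456789, 0.987654321])

u = r8(A_small @ v)   # round after the matrix-vector product
d = r8(v @ u)         # round after the dot product
t = r8(1.0 / d)       # round after the division
print(u, d, t)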

S L
  • I get `2.55516527` – dawg Oct 25 '20 at 20:24
  • @dawg Sorry, looks like I get 2.55516527 too. The one above looks like I used a different value for TOL. – S L Oct 25 '20 at 20:26
  • 1
    Make sure you understand the difference between rounding of actual values, and rounding when displaying values. – hpaulj Oct 25 '20 at 20:41
  • @hpaulj The values need to be rounded after each operation. For example, there is matrix multiplication, scalar multiplication of a vector, and addition of vectors and scalars. They all need to be rounded at each operation. – S L Oct 25 '20 at 20:44
  • 1
    @SantoshLinkha Rounding after EVERY operation will skew your data a lot. – Fallenreaper Oct 25 '20 at 20:49
  • @Fallenreaper Yup, machine precision arithmetic :( – S L Oct 25 '20 at 20:52
  • The `round` function natively allows you to set the number of decimal places. – Fallenreaper Oct 25 '20 at 20:55
  • @Fallenreaper Yup, but too many operations. I have lots of other algorithms to implement as well. I am looking for a permanent solution by setting the float precision for the machine. I think this might work: http://mpmath.org/ (a minimal sketch of that idea follows after these comments). – S L Oct 25 '20 at 20:59
  • Also, I have to calculate the inverse of a matrix. I need to write my own algorithm in which I have to round after every operation. – S L Oct 25 '20 at 21:00
  • 1
    Rounding to eight decimals after every operation is a sure way to *worsen* accuracy ! –  Oct 26 '20 at 20:09
  • Have you considered that maybe the textbook is wrong? – dawg Oct 27 '20 at 14:14
  • @dawg Most likely, because the answers for the two problems are the same: https://imgur.com/xHjDBJD – S L Oct 29 '20 at 09:22
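
As mentioned in the comments, here is a minimal sketch of the mpmath idea (assuming mpmath is installed). Note that mp.dps sets the working precision in significant decimal digits, which is not quite the same as rounding to a fixed number of places after the decimal point:

import mpmath

# Working precision of ~8 significant decimal digits for mpf arithmetic.
mpmath.mp.dps = 8

x = mpmath.mpf(1) / mpmath.mpf(3)
print(x)        # about 0.33333333
print(x * 3)    # about 1.0; each operation is carried out at this working precision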

1 Answer


Precision/rounding during the calculation is unlikely to be the issue.

To test this I ran the calculation with precisions that bracket the precision you are aiming for: once with np.float64 and once with np.float32. Here is a table of the numpy types, their approximate decimal precision, and the result of the calculation (i.e., the first printed array value).

numpy type       decimal places         result
-------------------------------------------------
np.float64               15              2.55516527
np.float32                6              2.5551653

Given that these agree so closely, I doubt that an intermediate precision of 8 decimal places is going to give an answer that is not between these two results (such as 2.55613420, which is off in the 4th digit).
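
For reference, a minimal sketch of such a two-precision rerun, reusing conj_solve, A and b from the question (this is not the exact script behind the table above; also note that conj_solve builds C with np.identity, which defaults to float64, so keeping every intermediate strictly in float32 would require a dtype parameter inside the function as well):

import numpy as np

def run_at(dtype, A, b):
    # Hypothetical helper: cast the inputs and rerun the questioner's solver.
    return conj_solve(A.astype(dtype), b.astype(dtype), pre=True)

run_at(np.float64, A, b)   # ~15-16 significant decimal digits
run_at(np.float32, A, b)   # ~6-7 significant decimal digits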


This isn't part of my answer, but is a comment on using mpmath. The questioner suggested it in the comments, and it was my first thought too, so I ran a quick test to see if it behaved how I expected with low-precision calculations. It didn't, so I abandoned it (but I'm not an expert with it).

Here's my test function, basically multiplying 1/N by N and 1/N repeatedly to emphasise the error in 1/N.

import mpmath
import numpy as np

def precision_test(dps=100, N=19, t=mpmath.mpf):
    with mpmath.workdps(dps):   # mpmath working precision (decimal digits)
        x = t(1)/t(N)
        print(x)
        y = x
        for i in range(10000):
            # y stays close to 1/N, but the rounding error in x is amplified
            y *= x
            y *= N
        print(y)

This works as expected with, eg, np.float32:

precision_test(dps=2, N=3, t=np.float32)
# 0.33333334
# 0.3334327041164994

Note that the error has propagated into more significant digits, as expected.

But with mpmath, I could never get that to happen (testing with a range of dps values and various prime values of N):

precision_test(dps=2, N=3)
# 0.33
# 0.33

Because of this test, I decided mpmath is not going to reproduce the usual floating-point error behaviour at low precision.

TL;DR:
mpmath didn't behave how I expected at low precision so I abandoned it.

tom10
  • I am at my wits' end about what to do. I also tried working with single precision, but it didn't work. Here is a link to the answer: https://imgur.com/yAdVzQ3 – S L Oct 26 '20 at 21:28
  • Are you sure this is a thing to worry about? Books have typos often, and small errors occur all the time, and you're also within the "residual" and "tolerance". For example, some slight error in the initial condition, or misstatement of the stop condition would make this type of error very easy to occur. – tom10 Oct 26 '20 at 21:39
  • I am pretty much sure that I implemented the algorithm correctly. https://imgur.com/xHjDBJD I suppose I have to give up. Also we all start with x=0 initially. – S L Oct 26 '20 at 22:02
  • @SantoshLinkha: I've looked at this a bit and the one thing I don't quite get is the definition of D and $C^{-1}$ in the problem statement. It seems that the exact definition of $C^{-1}$ is the type of thing that could give the small variances you see, which is also interesting because this is the thing that's least clear to me. I'd suggest looking at that (ie, your calculation of Ci) carefully, and maybe try a few variants or other ways to interpret the terms, if you know of reasonable ones. – tom10 Oct 27 '20 at 00:44
  • C = sqrt of diagonal elements. – S L Oct 29 '20 at 09:21