I have a list of lists of floats, like this:
    u = [[1.2, 1.534, 23.5, ...], [0.2, 11.5, 3.3223, ...], ...]
I use plain Python to calculate a new list (height and width are the list's dimensions; u2 is a list of lists of floats initialised to 0.0):
    from copy import deepcopy

    for time in xrange(start, stop):
        for i in xrange(1, height - 1):
            for j in xrange(1, width - 1):
                u2[i][j] = u[i][j-1] + u[i-1][j] - time * (u[i][j+1] / u[i+1][j])
        u = deepcopy(u2)
As expected, this produces a new list of lists of floats.
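For reference, the setup I have in mind looks roughly like this (the dimensions, time range and values are arbitrary; I just keep the entries positive so the division never hits zero):

    import random

    height, width = 50, 50
    start, stop = 1, 100

    # arbitrary positive floats for u; u2 starts out as zeroes of the same shape
    u = [[random.uniform(0.1, 25.0) for _ in range(width)] for _ in range(height)]
    u2 = [[0.0] * width for _ in range(height)]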
However, when I transfer this to Numpy with a simple

    from numpy import array
    un = array(u)

and then use the same kind of loop (u2 being an array of zeroes this time):
    for time in xrange(start, stop):
        for i in xrange(1, height - 1):
            for j in xrange(1, width - 1):
                u2[i][j] = un[i][j-1] + un[i-1][j] - time * (un[i][j+1] / un[i+1][j])
        un = u2
... it produces the same results as the pure-Python implementation as long as height, width and the time range are all small, but increasingly different results as these variables are set higher and higher.
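To quantify the difference I simply compare the two final results elementwise; a rough sketch, assuming the list version's result is kept in u and the Numpy version's in un:

    import numpy as np

    # elementwise difference between the list-of-lists result and the array result
    diff = np.abs(np.array(u) - un)
    print(diff.max())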
- Is there a way to prevent this build-up of floating-point inaccuracy?
(This is not real code, just me fiddling around to understand how numbers are treated in Python and Numpy, so any suggestions regarding vectorization or other Numpy efficiency tricks are off-topic.)