EDIT: Problem solved in the comments, thank you for your help.
EDIT 2: I think the linked solution is mathematically not correct, but maybe I am wrong. The problem is that it divides by the window size but never sums the squared differences.
Solution: My solution is to take the decimal precision from np.finfo(np.float).precision and set every value whose magnitude is below pow(10, -np.finfo(np.float).precision) to zero, because that should be an upper bound for the machine epsilon error. If you have negative values in your matrix, you have to compare the absolute value against the threshold; otherwise legitimate values like -2 would be set to 0 as well.
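For reference, a minimal sketch of this thresholding (my own illustration; I spell out np.float64 explicitly, and mean is the filtered array from the code below):

import numpy as np

# Zero out values whose magnitude is within floating-point noise of zero.
# np.finfo(np.float64).precision is the number of reliable decimal digits
# (15 for float64), so 10**-15 serves as the error threshold.
threshold = 10.0 ** -np.finfo(np.float64).precision
mean = np.where(np.abs(mean) < threshold, 0.0, mean)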
I have a 2D array with intensity values derived from an image (52x111). To calculate the local standard deviation I used the answer from another thread (improving code efficiency: standard deviation on sliding windows).
The smallest intensity value is 0.0; as parameters for the filter function I used mode="constant" and cval=0.
My mean filter looks like this:
from scipy.ndimage import uniform_filter
mean = uniform_filter(img_data, size=3, cval=0, mode="constant", origin=0)
The result of np.amin(mean) is -2.47949808833e-15.
How is it possible that the filter yields negative values?
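I can reproduce the effect with a toy example (hypothetical values; the exact residue depends on the data):

import numpy as np
from scipy.ndimage import uniform_filter

# uniform_filter uses a moving sum: values are added when they enter the
# window and subtracted when they leave, so rounding in the intermediate
# sums can leave tiny nonzero residues in windows that contain only zeros.
a = np.zeros((10, 10))
a[5, 3:7] = [0.1, 0.7, 0.3, 0.9]
m = uniform_filter(a, size=3, mode="constant", cval=0)
print(np.amin(m))  # can be a tiny nonzero, even negative, value instead of 0.0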
To clarify things, the whole relevant code looks like this:
from scipy.ndimage import uniform_filter

mean = uniform_filter(img_data, size=3, cval=0, mode="constant", origin=0)
mean_of_squared = uniform_filter(img_data**2, size=3, cval=0, mode="constant", origin=0)
squared_mean = mean * mean
stdev = (mean_of_squared - squared_mean) ** 0.5
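Mathematically, mean_of_squared - squared_mean is the window variance, Var(X) = E[X^2] - E[X]^2, which can never be negative; floating-point cancellation just pushes it slightly below zero, and the square root then produces NaN. A guard I added myself (not part of the linked answer) is to clamp the difference at zero before taking the root:

import numpy as np

# Clamp tiny negative variances (floating-point artifacts) to zero so
# that the square root never produces NaN.
variance = mean_of_squared - squared_mean
stdev = np.sqrt(np.maximum(variance, 0.0))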
As an alternative I used the convolve function from SciPy:
import numpy as np
from scipy.ndimage import convolve  # assuming scipy.ndimage.convolve

window_size = 3
mean_filter = np.ones((window_size, window_size)) / window_size**2
mean = convolve(img_data, mean_filter)
which works fine but is much slower than the uniform_filter approach (presumably because convolve evaluates the full 3x3 kernel at every pixel, while uniform_filter can use a separable running sum).
As far as I remember, all of the coordinates I inspected manually sit in windows containing only 0.0 values, but I am not sure whether that is the general case. Any ideas why uniform_filter behaves the way it does?