
As the question states, I want to apply a two-way adaptive thresholding technique to my image. That is, for each pixel I want to compare its value to the mean of its neighborhood and set it to 255 if it is less than `mean - c` or greater than `mean + c`, where `c` is a constant.

Take this image, for example, as the neighborhood of pixels. The desired pixel areas to keep are the darker areas in the upper halves of the third and sixth squares (counting left-to-right, top-to-bottom), as well as the upper halves of the eighth and twelfth squares.

Obviously, this all depends on the chosen constant value, but ideally any areas that are significantly different from the mean pixel value of the neighborhood will be kept. I can worry about the tuning myself, though.
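
In rough NumPy terms, the rule I am after looks something like this (just to illustrate, not code I have running; the names and the stand-in data are hypothetical, and the local mean here is a placeholder for a real per-pixel neighborhood mean):

import numpy as np

# Tiny stand-in data, purely to illustrate the intended rule
img = np.random.randint(0, 256, (5, 5)).astype(np.float32)
local_mean = np.full_like(img, img.mean())   # placeholder for a real neighborhood mean
c = 18

# White (255) where a pixel lies outside [mean - c, mean + c], black (0) otherwise
out = np.where((img < local_mean - c) | (img > local_mean + c), 255, 0).astype(np.uint8)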

[Example image: https://i.stack.imgur.com/dA1Vt.png]

DarkDrassher34
  • Doesn't that mean that just every pixel becomes white? Please further elaborate on your question, and provide some example image including the desired output. Also, what have you attempted so far? Please show any relevant code. – HansHirse Jan 10 '20 at 05:26
  • No, because the constant that is subtracted from the mean makes a difference. So all pixel values between `mean - c` and `mean + c` are kept. I have not tried much because the source code is a bit unintuitive to me. Mostly, I am trying to make use of OpenCV's `cv2.adaptiveThreshold()` function. I do not think an image is necessary for this question - it is more of a coding question than a desired-output one. But I will add one to explain. – DarkDrassher34 Jan 10 '20 at 05:55

2 Answers


Your question and comment are contradictory: keep everything (significantly) brighter/darker than the mean (+/- constant) of the neighbourhood (question) vs. keep everything within mean +/- constant (comment). I assume the first one to be correct, and I'll try to give an answer.

Using cv2.adaptiveThreshold is certainly useful; parameterization might be tricky, especially given the example image. First, let's have a look at the output:

Output

We see that the intensity value range in the given image is small. The upper halves of the third and sixth squares don't really differ from their neighbourhood, so it's quite unlikely to find a proper difference there. The upper halves of squares #8 and #12 (and also the lower half of square #10) are more likely to be found.

The top row shows results for some more "global" parameters (blocksize = 151, c = 25), the bottom row for more "local" parameters (blocksize = 51, c = 5). The middle column is everything darker than the neighbourhood (with respect to the parameters), the right column is everything brighter than the neighbourhood. We see that in the more "global" case, we get the proper upper halves, but there are mostly no "significant" darker areas. Looking at the more "local" case, we see some darker areas, but we won't find the complete upper/lower halves in question. That's just because of how the different triangles are arranged.

On the technical side: You need two calls of cv2.adaptiveThreshold, one using the cv2.THRESH_BINARY_INV mode to find everything darker and one using the cv2.THRESH_BINARY mode to find everything brighter. Also, you have to provide c or -c for the two different cases.

Here's the full code:

import cv2
from matplotlib import pyplot as plt
from skimage import io          # Only needed for web grabbing images

plt.figure(1, figsize=(15, 10))

# Read the example image from the web and convert it to grayscale
img = cv2.cvtColor(io.imread('https://i.stack.imgur.com/dA1Vt.png'), cv2.COLOR_RGB2GRAY)

plt.subplot(2, 3, 1), plt.imshow(img, cmap='gray'), plt.colorbar()

# More "global" parameters
bs = 151
c = 25
img_le = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, bs, c)   # white where pixel <= local mean - c
img_gt = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, bs, -c)      # white where pixel > local mean + c
plt.subplot(2, 3, 2), plt.imshow(img_le, cmap='gray')
plt.subplot(2, 3, 3), plt.imshow(img_gt, cmap='gray')

# More "local" parameters
bs = 51
c = 5
img_le = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, bs, c)
img_gt = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, bs, -c)
plt.subplot(2, 3, 5), plt.imshow(img_le, cmap='gray')
plt.subplot(2, 3, 6), plt.imshow(img_gt, cmap='gray')

plt.tight_layout()
plt.show()

Hope that helps – somehow!

-----------------------
System information
-----------------------
Python:      3.8.1
Matplotlib:  3.2.0rc1
OpenCV:      4.1.2
-----------------------
HansHirse
  • Well, you did not really answer my question, but this was helpful. Ignore the image attached, I just used it for illustration purposes. Let's say we had a neighborhood of eight pixels with the following values [34, 56, 77, 88, 36, 21, 92, 63]; the mean is 58.375. Let's assume `c = 18`. I want anything greater than 76.375 to be white, as well as everything less than 40.375. All the other pixels should be black. Does this make sense? (A small worked sketch of this example follows after these comments.) – DarkDrassher34 Jan 10 '20 at 07:28
  • @DarkDrassher34 That'd be something like `cv2.bitwise_or(img_gt, img_le)`, i.e. just the union of both masks. The single masks `img_gt` and `img_le` exactly reflect the wanted behaviour for both cases. – HansHirse Jan 10 '20 at 07:33
  • Excellent. So `cv2.adaptiveThreshold(image, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, bs, 18)` will give me the pixels whose values are lower than `mean - c` in white - `cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, bs, -18)` will give me the pixels whose values are higher than `mean + c` in white. However, `cv2.bitwise_or()` will then return when the values are black. Is this correct? – DarkDrassher34 Jan 10 '20 at 07:41
  • No. `cv2.adaptiveThreshold(..., cv2.THRESH_BINARY, ..., -c)` masks (i.e. sets pixels to white) everything above `mean + c` (that's why `-c`, because `c` is subtracted), the rest is black. Vice versa, `cv2.adaptiveThreshold(..., cv2.THRESH_BINARY_INV, ..., c)` masks everything below `mean - c`. Finally, `cv2.bitwise_or()` is white for all pixels above `mean + c` or below `mean - c` (logical OR). – HansHirse Jan 10 '20 at 07:46
  • Awesome! Thanks. – DarkDrassher34 Jan 10 '20 at 07:55
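
To make the arithmetic from the comment thread above concrete, here is a minimal sketch (my illustration, using the eight sample values and c = 18 from the comments; cv2.adaptiveThreshold works on whole images, so plain NumPy is used here to mimic the rule):

import numpy as np

values = np.array([34, 56, 77, 88, 36, 21, 92, 63], dtype=np.float32)
c = 18
mean = values.mean()                     # 58.375

below = values < mean - c                # darker than 40.375
above = values > mean + c                # brighter than 76.375
mask = np.where(below | above, 255, 0)   # union of both cases

print(mask)                              # [255   0 255 255 255 255 255   0]

On full images, this union is exactly what cv2.bitwise_or(img_gt, img_le) yields when applied to the two adaptive-threshold masks from the answer above.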

Another way to look at this is: where abs(mean - image) <= c, you want the result to become white; otherwise you want it to become black. In Python/OpenCV/SciPy/NumPy, I first compute the local uniform mean (average) using a uniform 51x51 pixel block averaging filter (boxcar average). You could use some weighted averaging method, such as a Gaussian average, if you want. Then I compute abs(mean - image). Then I use NumPy thresholding. Note: you could also just use one simple threshold (cv2.threshold) on the abs(mean - image) result in place of the two NumPy thresholds; a sketch of that variant follows after the code.

Input:

[Input image: squares.png, the example image from the question]

import cv2
import numpy as np
from scipy import ndimage

# read image as grayscale
# convert to floats in the range 0 to 1 so that the difference keeps negative values
img = cv2.imread('squares.png',0).astype(np.float32)/255.0

# get uniform (51x51 block) average
ave = ndimage.uniform_filter(img, size=51)

# get abs difference between ave and img and convert back to integers in the range 0 to 255
diff = 255*np.abs(ave - img)
diff = diff.astype(np.uint8)

# threshold
# Note: could also just use one simple cv2.Threshold on diff
c = 5
diff_thresh = diff.copy()
diff_thresh[ diff_thresh <= c ] = 255
diff_thresh[ diff_thresh != 255 ] = 0


# view result
cv2.imshow("img", img)
cv2.imshow("ave", ave)
cv2.imshow("diff", diff)
cv2.imshow("threshold", diff_thresh)
cv2.waitKey(0)
cv2.destroyAllWindows()

# save result
cv2.imwrite("squares_2way_thresh.jpg", diff_thresh)
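
As noted above, a Gaussian-weighted average could replace the uniform one, and a single cv2.threshold call could replace the two NumPy assignments. A possible variant of the corresponding lines (my sketch, reusing the imports and variables from the code above; the sigma value is just an illustrative choice):

# Gaussian-weighted local average instead of the uniform 51x51 box (sigma chosen arbitrarily)
ave = ndimage.gaussian_filter(img, sigma=25)

# One threshold instead of the two NumPy assignments:
# THRESH_BINARY_INV sets diff values <= c to 255 and everything else to 0
diff = (255*np.abs(ave - img)).astype(np.uint8)
_, diff_thresh = cv2.threshold(diff, c, 255, cv2.THRESH_BINARY_INV)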


Result:

[Result image: squares_2way_thresh.jpg]

fmw42