
I am trying to find angles of a stockpile (on the left and right sides) by using Otsu threshold to segment the image. The image I have is like this:

Original photo

In the code, I segment it and find the first black pixel in the image

Segmented photo

The segmented photo doesn't seem to have any black pixels in the white background, but the search still detects some, even though I have used `morphology.opening`:

Detected points, with stray detections in the background

If I use it with a different image, it doesn't have this problem:

Result with a different image

How do I fix this problem? Any ideas? (The next step would be to find the angles on the left- and right-hand sides.)

The code is attached here:


from skimage import io, filters, morphology, measure
import numpy as np
# Load the image
image = io.imread('mountain3.jpg', as_gray=True)

# Apply Otsu's thresholding to segment the image
segmented_image = image > filters.threshold_otsu(image)

# Perform morphological closing to fill small gaps
structuring_element = morphology.square(1)  # NB: a 1x1 element makes this closing a no-op
closed_image = morphology.closing(segmented_image, structuring_element)

# Apply morphological opening to remove small black regions in the white background
structuring_element = morphology.disk(10)  # Adjust the disk size as needed
opened_image = morphology.opening(closed_image, structuring_element)

# Fill larger gaps using binary_fill_holes
#filled_image = measure.label(opened_image)
#filled_image = filled_image > 0

# Display the segmented image after filling the gaps
io.imshow(opened_image)

io.show()
# Find the first row containing black pixels
first_black_row = None
for row in range(opened_image.shape[0]):
    if np.any(~opened_image[row, :]):
        first_black_row = row
        break

if first_black_row is not None:
    edge_points = []  # List to store the edge points

    # Iterate over the rows below the first black row
    for row in range(first_black_row, opened_image.shape[0]):
        black_pixel_indices = np.where(~opened_image[row, :])[0]

        if len(black_pixel_indices) > 0:
            # Store the first black pixel coordinates on the left and right sides
            left_x = black_pixel_indices[0]
            right_x = black_pixel_indices[-1]
            y = row

            # Append the edge point coordinates
            edge_points.append((left_x, y))
            edge_points.append((right_x, y))

    if len(edge_points) > 0:
        # Plotting the edge points
        import matplotlib.pyplot as plt

        edge_points = np.array(edge_points)

        plt.figure()
        plt.imshow(opened_image, cmap='gray')
        plt.scatter(edge_points[:, 0], edge_points[:, 1], color='red', s=1)
        plt.title('Edge Points')
        plt.show()
    else:
        print("No edge points found.")
else:
    print("No black pixels found in the image.")
Tan Phan
    Please do not ask essentially the same question twice. It wastes people's time and we are all only volunteers. https://stackoverflow.com/q/76285525/2836621 – Mark Setchell May 23 '23 at 17:05
  • @MarkSetchell I think the asker might have been attempting to divide it into subproblems, this question is less general and more focused than the first one asked. – Brock Brown May 23 '23 at 17:39

4 Answers


Your issue is that the image has noise. You need to deal with the noise.

That is usually done with some kind of lowpassing, i.e. blurring. I'd recommend a median blur.

Here's the result of a median filter, kernel size 9:

median, kernel size 9

And the per-pixel absolute differences to the source, magnified in amplitude by 20x:

absolute differences, 20x

(this suggests that you could do a bandpass to catch the "texture" of the pile vs the flatness of the background)

And here's the picture after Otsu thresholding (and inversion):

otsu

And your foreground barely contrasts with the background. With a better-contrasting background, this wouldn't be nearly as much of an issue.

Here's thresholding based on hue, because background and foreground slightly differ in hue:

otsu on hue

With morphological closing:

morphclose


To get your lines for the left and right slope of the stockpile, you need something that deals with contours or edge pixels anyway.

Both contour finding and connected components labeling will have trouble with this, which is why the answers recommending those also must recommend explicitly filtering the results to remove small debris (the noise).

Hence, those approaches (contours/CCs) don't solve the problem of noise, they just transform it into a different problem in which you still have to deal with the noise (by filtering it), just after processing the image.

I'd recommend dealing with the noise early.

Christoph Rackwitz
  • *"Both contour finding nor connected components labeling will have trouble with this, which is why the answers recommending those also must recommend explicitly filtering the results to remove small debris (the noise)."* My answer recommends grabbing the *largest* contour. This negates the effect of the debris as far as I'm aware. – Brock Brown May 23 '23 at 17:18
  • 1
    true, but that also assumes the result to have one big contour, not a bunch of smaller ones. depending on how bad the picture is (and it is quite marginal already), the pile might just turn out to be a bunch of smaller blobs, not one big connected slice of swiss cheese. – Christoph Rackwitz May 23 '23 at 17:27
  • Hmm fair, now that I think about it, I think I had that thought at some point or another and threw it to the side, would it be okay if I used your answer to revise my answer so that they have a start to finish example? If not I completely understand. – Brock Brown May 23 '23 at 17:31
  • go ahead. I'm curious. – Christoph Rackwitz May 23 '23 at 17:41

Edit: This answer as of now is incomplete. It assumes that there's no chance of the rock pile splitting into swiss cheese, and as a result the contour may not accurately grab the foreground. See Christoph Rackwitz's answer for a more reliable way of distinguishing the foreground from the background; my answer explains how to calculate the angles involved.

Contours are what you're looking for. You'll want to grab the largest contour in the image; that ignores the little pixels in the background. Then take that contour, find its highest point, then find the highest points of the contour at the far left and far right sides of the image, and that's how you draw your triangle. From there you can determine the angles.

import cv2
import numpy as np

# Load images
img = cv2.imread('image.jpg')

# Convert image to grayscale for easier thresholding
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Threshold the image
_, thresh = cv2.threshold(gray, 110, 255, cv2.THRESH_BINARY_INV)

# Find contours
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Find the largest chunk of black pixels
max_contour = max(contours, key=cv2.contourArea)

# Draw contour on a copy of the image for visualization
img_contours = img.copy()
cv2.drawContours(img_contours, [max_contour], -1, (255, 0, 0), 3)

cv2.imwrite("output.jpg", img_contours)

area = cv2.contourArea(max_contour)

# Highest point of the contour (smallest Y value)
max_val_y = min(max_contour, key=lambda coord: coord[0][1])

# For the highest Y values at the leftmost and rightmost sides of the contour
sorted_contour = sorted(max_contour, key=lambda coord: coord[0][0])  # Sort by X values

# Find the Y value of the highest point in the leftmost side
left_sub_contour = [point for point in sorted_contour if point[0][0] == sorted_contour[0][0][0]]
leftmost_val_y = min(left_sub_contour, key=lambda coord: coord[0][1])[0][1]

# Find the Y value of the highest point in the rightmost side
right_sub_contour = [point for point in sorted_contour if point[0][0] == sorted_contour[-1][0][0]]
rightmost_val_y = min(right_sub_contour, key=lambda coord: coord[0][1])[0][1]

# Get the x, y coordinates for the highest point of the max_contour
highest_point = tuple(max_val_y[0])

# Get the x, y coordinates for the highest point at the leftmost side
leftmost_point = (sorted_contour[0][0][0], leftmost_val_y)

# Get the x, y coordinates for the highest point at the rightmost side
rightmost_point = (sorted_contour[-1][0][0], rightmost_val_y)

# Draw the triangle
triangle_contour = np.array([highest_point, leftmost_point, rightmost_point])
cv2.drawContours(img, [triangle_contour], 0, (0, 255, 0), 2)  # Change (0, 255, 0) to the color you want for the triangle

# Save the image
cv2.imwrite("output_triangle.jpg", img)

# Calculate distances between points (sides of triangle)
a = np.linalg.norm(np.array(highest_point)-np.array(rightmost_point))
b = np.linalg.norm(np.array(highest_point)-np.array(leftmost_point))
c = np.linalg.norm(np.array(rightmost_point)-np.array(leftmost_point))

# Calculate angle using the law of cosines
cos_angle = (a**2 + b**2 - c**2) / (2*a*b)
angle_rad = np.arccos(cos_angle)  # Angle in radians
angle_deg = np.degrees(angle_rad)  # Convert to degrees

# Calculate angle at the left side of the triangle
cos_left_angle = (b**2 + c**2 - a**2) / (2*b*c)
left_angle_rad = np.arccos(cos_left_angle)  # Angle in radians
left_angle_deg = np.degrees(left_angle_rad)  # Convert to degrees

# Calculate angle at the right side of the triangle
right_angle_deg = 180 - (left_angle_deg + angle_deg)

print(f"Angle at the left side of the triangle: {left_angle_deg} degrees")
print(f"Angle at the right side of the triangle: {right_angle_deg} degrees")


print(f"Angle at the highest point of the triangle: {angle_deg} degrees")
print("Area of the rock pile:", area)

Input:

original rock pile

Output:

output rock image triangle over rock pile

Angle at the left side of the triangle: 23.59662632252385 degrees
Angle at the right side of the triangle: 19.929564001732047 degrees
Angle at the highest point of the triangle: 136.4738096757441 degrees
Area of the rock pile: 1868530.0

I read a book recently called Designing Autonomous AI where they mention making PID controllers that manage the size of a rock pile. Out of curiosity, are you making one of those?

Also related: How to overlay segmentation result with another image?

Brock Brown

First, you can use connected components (`connectedComponentsWithStats`) to remove any blob with an area smaller than a threshold (that removes those black points).

After that, split your image in two vertically down the middle, and run `np.polyfit` (degree 1) on the edge points of each half (the red points).

From the slope, you can find the angle.


If you already have a 1D y-value sequence (a y value for each column), and only a few values are noise that you want to remove...

Consider checking the difference between the original sequence and a blurred copy of it; you can then remove only the samples with a large difference.
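A NumPy sketch of the idea; kernel size 15 matches the figures below, and the difference threshold (as well as the helper name `inliers`) is a placeholder you'd tune:

```python
import numpy as np

def inliers(y, ksize=15, thresh=20.0):
    """Boolean mask of samples that stay close to a box-blurred copy of y.
    thresh is a placeholder tuning value."""
    pad = ksize // 2
    # Edge-padded moving average, similar to cv2.blur with a 1xK kernel
    padded = np.pad(np.asarray(y, float), pad, mode='edge')
    blurred = np.convolve(padded, np.ones(ksize) / ksize, mode='valid')
    return np.abs(y - blurred) < thresh
```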


Example:

This is the original y-value sequence (shown as red points); it looks much like your situation.

The blurred sequence is shown in green below (I simply used OpenCV's `blur()` function with kernel size 15).

This is the result; removed points are colored cyan.

fana
  • (This doesn't answer the question, but) if you find the slope in a better way than the simple least-squares method, you don't have to worry about removing such a small amount of noise perfectly. – fana May 24 '23 at 06:10
  • I added example figures. I guess this is one of the most straightforward ways (easy to add to the OP's situation as a post-process). – fana May 25 '23 at 02:34