I am trying to separate a cow from a depth image.
When I use contours it separates most of the cow, but it fails to remove the fence when the cow is leaning on it. (Note: it is OK that the head is removed from the cow; the application I am using this for works better with the head removed.)
Here is the code I use to detect and remove contours. My idea is to remove them by size. This works when the cow is not touching the fence, but it fails in this case.
import cv2
import imutils
import numpy as np

# Remove structures connected to the image border --------------------------------
# find contours in the thresholded image and initialize the mask that will be
# used to remove the unwanted contours
cnts = cv2.findContours(BW3.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
mask = np.ones(BW3.shape[:2], dtype="uint8") * 255

# loop over the contours and mark any contour that is too large or too small
# to be part of the cow
for c in cnts:
    area = cv2.contourArea(c)
    if area > 250000 or area < 10000:
        # if the contour is bad, draw it on the mask in black
        cv2.drawContours(mask, [c], -1, 0, -1)

# apply the mask once, after all bad contours have been drawn
BW3 = cv2.bitwise_and(BW3, BW3, mask=mask)

cv2.imshow('H_Black and white', BW3)
cv2.waitKey()
Is there any way to remove the fencing around the cow when the cow is touching it? I have tried using HoughLinesP() with no luck (a sketch of roughly what I tried is below); I am new to OpenCV, so I could be going about it the wrong way. Another potential solution would be to crop the image, but the camera is in a slightly different location each time, so the crop would have to be adjusted for each camera position. Any advice would be appreciated.
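Roughly, my HoughLinesP() attempt looked like the following (a sketch from memory; the exact parameter values may have differed). The idea was to detect the long straight fence rails and paint them out in black:

import cv2
import numpy as np

# Sketch of the HoughLinesP() attempt (parameter values are illustrative):
# detect long straight segments (the fence rails) in an edge map and draw
# them in black on the binary image to cut the fence away from the cow blob.
edges = cv2.Canny(BW3, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=10)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        # thick black stroke over each detected segment
        cv2.line(BW3, (x1, y1), (x2, y2), 0, thickness=5)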
Thank you
EDIT: The purpose of separating the cow from the background is to use volumetric estimation to determine the weight of the animal. If effectively implemented, this would be a cheaper solution than a standard scale. This is for a research project (the project will be open-sourced, not monetized).
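For context, this is the kind of calculation I have in mind for the volume step, assuming a downward-looking depth camera over a flat floor of known depth (the function name, variable names, and pixel-area conversion are placeholders, not code from the project):

import numpy as np

# Minimal sketch of the volume estimate (assumed approach, placeholder names):
# integrate the cow's height above the floor over every pixel in the mask.
def estimate_volume(depth_m, cow_mask, floor_depth_m, pixel_area_m2):
    # depth_m: depth image in metres; cow_mask: 0/255 segmentation mask;
    # floor_depth_m: depth of the empty floor; pixel_area_m2: ground area
    # covered by one pixel (depends on camera height and intrinsics)
    height_m = np.clip(floor_depth_m - depth_m, 0, None)  # height above the floor
    height_m[cow_mask == 0] = 0                           # keep only cow pixels
    return height_m.sum() * pixel_area_m2                 # volume in cubic metres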
The original input depth image is cropped before any other code is run (all images reflect this except for the first depth image in this post). To get the contours, I convert the depth picture to HSV, take the Hue channel, and threshold it to black and white before running cv2.findContours (a sketch of this preprocessing is below).
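A rough sketch of that preprocessing (the filename and threshold value are illustrative, not the exact ones I use):

import cv2
import imutils

# Rough sketch of the preprocessing: convert the colour-mapped depth frame
# to HSV, keep the Hue channel, and threshold it to a binary image before
# finding contours.
depth_bgr = cv2.imread('cropped_depth_frame.png')   # already-cropped depth image
hsv = cv2.cvtColor(depth_bgr, cv2.COLOR_BGR2HSV)
hue = hsv[:, :, 0]                                  # Hue channel only
_, BW3 = cv2.threshold(hue, 90, 255, cv2.THRESH_BINARY)
cnts = cv2.findContours(BW3.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)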
Here is a reconstruction of the depth values from the jet colormap (inverted values so it's easier to read visually):
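For reference, one way to do this kind of reconstruction is to build the 256-entry jet lookup table with cv2.applyColorMap and assign each pixel the index of its nearest LUT colour (a sketch of an assumed method; the filename is a placeholder):

import cv2
import numpy as np

# Build the 256-entry jet lookup table, then map each pixel of the
# colour-mapped image to the index of its nearest LUT colour.
jet_lut = cv2.applyColorMap(np.arange(256, dtype=np.uint8).reshape(256, 1),
                            cv2.COLORMAP_JET).reshape(256, 3).astype(np.int32)
img = cv2.imread('depth_jet.png').astype(np.int32)   # placeholder filename
h, w, _ = img.shape
best = np.full((h, w), np.iinfo(np.int32).max)
depth = np.zeros((h, w), dtype=np.uint8)
for i, colour in enumerate(jet_lut):
    d = ((img - colour) ** 2).sum(axis=2)   # squared distance to this LUT colour
    closer = d < best
    depth[closer] = i
    best[closer] = d[closer]
depth = 255 - depth                         # invert so it's easier to read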