
I have been trying to build a motion tracker that follows a dog moving in a video (recorded top-down) and retrieves a cropped video showing the dog while ignoring the rest of the background.

I first tried object tracking using the algorithms available in OpenCV 3 (BOOSTING, MIL, KCF, TLD, MEDIANFLOW, and GOTURN, which returns an error I haven't been able to solve yet) from this link, and I even tried a basic motion-tracking algorithm that subtracts the first frame, but none of them gives a good result. Link

I would prefer code with a preset rectangular box that surrounds the area of motion once it is detected, something like in this video.

I'm not very familiar with OpenCV, but I believe tracking a single moving object shouldn't be an issue, since a lot of work has already been done in this area. Should I consider other libraries/APIs, or is there better code or a tutorial I can follow to get this done? My goal is to use this later with a neural network (which is why I'm trying to solve it using Python/OpenCV).

Thanks for any help/advice

Edit:

I removed the previous code to make the post cleaner.

Also, based on the feedback I got and further research, I was able to modify some code to get close to my desired result. However, I still have an annoying problem with the tracking: the first frame seems to affect the rest of the tracking, since even after the dog moves, its first location keeps being detected. I tried to limit the tracking to a single action using a flag, but the detection gets messed up. Here is the code, along with pictures showing the results:

import imutils
import time
import cv2

previousFrame = None

def searchForMovement(cnts, frame, min_area):

    text = "Undetected"

    flag = 0

    for c in cnts:
        # if the contour is too small, ignore it
        if cv2.contourArea(c) < min_area:
            continue

        #Use the flag to prevent the detection of other motions in the video
        if flag == 0:
            (x, y, w, h) = cv2.boundingRect(c)

            #print("x y w h")
            #print(x,y,w,h) 
            cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
            text = "Detected"
            flag = 1

    return frame, text

def trackMotion(ret, frame, gaussian_kernel, sensitivity_value, min_area):


    if ret:

        # Convert to grayscale and blur it for better frame difference
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (gaussian_kernel, gaussian_kernel), 0)



        global previousFrame

        if previousFrame is None:
            previousFrame = gray
            return frame, "Uninitialized", frame, frame



        frameDiff = cv2.absdiff(previousFrame, gray)
        thresh = cv2.threshold(frameDiff, sensitivity_value, 255, cv2.THRESH_BINARY)[1]

        thresh = cv2.dilate(thresh, None, iterations=2)
        _, cnts, _ = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

        frame, text = searchForMovement(cnts, frame, min_area)
        #previousFrame = gray

    return frame, text, thresh, frameDiff




if __name__ == '__main__':

    video = "Track.avi"
    video0 = "Track.mp4"
    video1= "Ntest1.avi"
    video2= "Ntest2.avi"

    camera = cv2.VideoCapture(video1)
    time.sleep(0.25)
    min_area = 5000 #int(sys.argv[1])

    cv2.namedWindow("Security Camera Feed")


    while camera.isOpened():

        gaussian_kernel = 27
        sensitivity_value = 5
        min_area = 2500

        ret, frame = camera.read()

        #Check if the next camera read is not null
        if ret:
            frame, text, thresh, frameDiff = trackMotion(ret,frame, gaussian_kernel, sensitivity_value, min_area)

        else:
            print("Video Finished")
            break


        cv2.namedWindow('Thresh',cv2.WINDOW_NORMAL)
        cv2.namedWindow('Frame Difference',cv2.WINDOW_NORMAL)
        cv2.namedWindow('Security Camera Feed',cv2.WINDOW_NORMAL)

        cv2.resizeWindow('Thresh', 800,600)
        cv2.resizeWindow('Frame Difference', 800,600)
        cv2.resizeWindow('Security Camera Feed', 800,600)
        # comment these out to hide the thresh and frame-difference displays
        cv2.imshow("Thresh", thresh)
        cv2.imshow("Frame Difference", frameDiff)



        cv2.putText(frame, text, (10, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
        cv2.imshow("Security Camera Feed", frame)

        key = cv2.waitKey(3) & 0xFF
        if key == 27 or key == ord('q'):
            print("Bye")
            break

    camera.release()
    cv2.destroyAllWindows()

This picture shows how the very first frame still affects the frame-difference results, which forces the box to cover an area with no motion.

Result showing the frame difference and video display

This one shows a case where actual motion is ignored and no-longer-existing motion (the frame difference between the second and first frames of the video) is falsely detected. When I allow multiple tracking, it tracks both, which is still wrong since it detects an empty area.


Does anyone have an idea where the code is wrong or lacking? I keep trying but cannot get it to work properly.

Thank you in advance !!

Wazaki
  • Do not just put the link, where is your tried code? – Kinght 金 Jan 04 '18 at 05:50
  • @Silencer I added that in the edit. Thanks for the comment – Wazaki Jan 04 '18 at 06:50
  • I think you should first identify the problem correctly and then try solutions. Do you want to first detect motion... and then track this object? Or only detect motion on each step? The first algorithms you mention are for tracking only, not for detection, which is why you need the ROI (this is your "object" to track). Also, what happens if you have more than one object moving? I would recommend first trying to detect motion correctly; you can try something like [this](http://www.steinm.com/blog/motion-detection-webcam-python-opencv-differential-images/) – api55 Jan 04 '18 at 07:59
  • @api55 Thank you for your comment. I am trying to follow the lead of your recommendation and once I get some results I will edit and mention it. Concerning your questions, it's as you said, detecting the motion and tracking that object. In my scenario, there is a dog inside a room and I want to track it (with a boundary box). So basically, dog moves -> motion is detected -> one boundary box is created and keeps tracking it (ignoring any other motion in the video). – Wazaki Jan 06 '18 at 08:41
  • So then you have 2 task, try doing correctly the first one and the second one you may try to use the algorithms for tracking with the known region – api55 Jan 06 '18 at 11:25
  • @api55 I tried to work on some motion tracking code which appeared to be promising, however I still run into a problem. I edited my question to include more specific details of the problem. I will be glad if you could take a look and give me your opinion. Thanks a lot !! – Wazaki Jan 07 '18 at 10:51
  • I usually used [this algorithm](https://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=9&ved=0ahUKEwiE__bt-cXYAhXIDywKHTOrALIQFghaMAg&url=http%3A%2F%2Fwww.springer.com%2Fcda%2Fcontent%2Fdocument%2Fcda_downloaddocument%2F9781447142157-c2.pdf%3FSGWID%3D0-0-45-1338817-p174508313&usg=AOvVaw2YZJbBxALaTtXc64212mNv) when i required motion tracking... it is usually better against noise, and also avoids ghosting effect (the effect from the first frame) you can also separate motion detections using contours over the binary image – api55 Jan 07 '18 at 13:28
  • @api55 So there is no way we can prevent that in the code I shared? Thank you for the pdf, I went through it and as you said, it is a ghosting effect. Do you happen to have any implementation you can share by any chance ? Thanks for all your help !! I'm learning more thanks to your comments – Wazaki Jan 07 '18 at 14:21
  • Did you finally get this working? I'm working on something similar, but without a static background... please share the code if you can. Thanks a lot. @OsumanAAA – Lewis Oct 29 '18 at 15:58
  • @Lewis I didn't really get satisfying results with this kind of method and if your background is not static it will be even more complicated. I ended up using YOLO for object detection to perform tracking. – Wazaki Oct 30 '18 at 01:27

1 Answer


To add motion detection, I have created generic components on the NPM registry and Docker Hub. The client (a React app) captures webcam images, and a Python server based on OpenCV analyses these images to determine whether there is motion. The client can specify a callback function that the server calls each time motion occurs. The server is just a Docker image that you can pull, run, and point the client at via its URL.

NPM Registry(Client)

Registry Link:

https://www.npmjs.com/settings/kunalpimparkhede/packages

Command

npm install motion-detector-client

Docker Image (Server)

Link

https://hub.docker.com/r/kunalpimparkhede/motiondetectorwebcam

Command

docker pull kunalpimparkhede/motiondetectorwebcam

You just need to write following code to have motion detection

Usage:

import MotionDetectingClient from './MotionDetectingClient';

<MotionDetectingClient server="http://0.0.0.0:8080" callback={handleMovement}/>

function handleMovement(pixels) {
    console.log("Movement By Pixel=" + pixels);
}

On the server side, just start the Docker server on port 8080:

docker run --name motion-detector-server-app -p 8080:5000 motion-detector-server-app