
I need to measure the speed of a conveyor belt under a surveillance camera. After years of wear the belt is basically texture-less; it's difficult to even tell whether it is moving if nothing is on top of it.

I'm trying to solve this problem as an object tracking problem:

  1. Find some keypoints/objects on the belt.
  2. Track those keypoints/objects with OpenCV's median flow tracker.
  3. Inverse perspective transform and get the speed in 3D space.
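Step 3 above can be sketched with plain NumPy: given a homography that maps image pixels to belt-plane coordinates in metres, the speed follows from the displacement of one tracked point between two frames. The matrix `H`, the pixel positions, and the 25 fps frame rate below are made-up example values, not a real calibration:

```python
import numpy as np

# Hypothetical image-to-belt-plane homography (pixels -> metres); a real
# one would come from calibration, e.g. cv2.findHomography on marked points.
H = np.array([[0.002, 0.0,    -0.4],
              [0.0,   0.002,  -0.3],
              [0.0,   0.0005,  1.0]])

def to_belt_plane(p):
    """Map an image pixel (x, y) to belt-plane metres via H."""
    v = H @ np.array([p[0], p[1], 1.0])
    return v[:2] / v[2]          # dehomogenize

# The same tracked keypoint in two consecutive frames of a 25 fps camera
p0, p1 = (400.0, 300.0), (410.0, 300.0)
dt = 1.0 / 25.0

speed = np.linalg.norm(to_belt_plane(p1) - to_belt_plane(p0)) / dt
print(f"belt speed: {speed:.3f} m/s")
```

Averaging this over many keypoints and many frames makes the estimate much more robust than a single pair of positions.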

If the keypoints/objects in step 1 are given manually, steps 2 & 3 work very well, but I have performance issues finding keypoints automatically: keypoint detection costs 60 ms+ even if I crop the images down to very small ones. I have tried the SURF and ORB implementations in OpenCV; neither is fast enough.

Are there any faster options?

user416983
  • emm, usually FAST only takes a few milliseconds to run. Take a 480p image for example: it only takes about 5 ms. – Dr Yuan Shenghai Feb 08 '21 at 11:52
  • The median flow tracker is a slow process. Try the KCF or LK tracker? – Dr Yuan Shenghai Feb 08 '21 at 11:54
  • @DrYuanShenghai I have tested all the trackers implemented in OpenCV; median flow is the fastest and the most reliable one in my situation. The movement of the belt is very smooth and there is absolutely no occlusion. It's very different from other feature tracking problems. – user416983 Feb 09 '21 at 01:14
  • OK. Can you post some video or images for further analysis? If I understand correctly, you need to track inter-frame feature movement, not track an object, right? I have a feeling you might have created a tracking box for every single feature, which kills your PC. In VSLAM it is never done this way. ORB-SLAM is relatively slow compared to SVO; maybe SVO is what you are looking for? – Dr Yuan Shenghai Feb 10 '21 at 00:55

1 Answer


Maybe you can try the FAST algorithm for corner detection. It's faster than the options you have tried, and it's implemented in OpenCV. Here's the sample code, taken directly from the OpenCV documentation (https://docs.opencv.org/master/df/d0c/tutorial_py_fast.html):

import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt

img = cv.imread('simple.jpg',0)

# Initiate FAST object with default values
fast = cv.FastFeatureDetector_create()

# find and draw the keypoints
kp = fast.detect(img,None)
img2 = cv.drawKeypoints(img, kp, None, color=(255,0,0))

# Print all default params
print( "Threshold: {}".format(fast.getThreshold()) )
print( "nonmaxSuppression:{}".format(fast.getNonmaxSuppression()) )
print( "neighborhood: {}".format(fast.getType()) )
print( "Total Keypoints with nonmaxSuppression: {}".format(len(kp)) )
cv.imwrite('fast_true.png',img2)

# Disable nonmaxSuppression
fast.setNonmaxSuppression(0)

kp = fast.detect(img,None)

print( "Total Keypoints without nonmaxSuppression: {}".format(len(kp)) )
img3 = cv.drawKeypoints(img, kp, None, color=(255,0,0))
cv.imwrite('fast_false.png',img3)
Tides