I am using SIFT for feature detection and calcOpticalFlowPyrLK for feature tracking in images. I am working on low-resolution images (590x375 after cropping) taken from a Microsoft Kinect.
// feature detection
vector<KeyPoint> keypoints_1;
vector<Point2f> points1, points2;
cv::Ptr<Feature2D> detector = cv::xfeatures2d::SIFT::create();
detector->detect(img_1, keypoints_1);
// convert keypoints to Point2f for the tracker
KeyPoint::convert(keypoints_1, points1, vector<int>());
// feature tracking
vector<uchar> status;
vector<float> err;
Size winSize = Size(21, 21);
TermCriteria termcrit = TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 30, 0.01);
// maxLevel = 1, so only two pyramid levels are used
calcOpticalFlowPyrLK(img_1, img_2, points1, points2, status, err, winSize, 1, termcrit, 0, 0.001);
I ran this on consecutive images of a static scene (just to get an idea), taken from the same camera position at a rate of 30 fps. To the eye the images look the same, but somehow calcOpticalFlowPyrLK is not able to track the same features from one image to the next. The position (x,y coordinates) of a detected feature and its tracked counterpart should be identical, but somehow it isn't.
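To quantify this, a minimal sanity check (a sketch, reusing points1, points2, and status from the code above) is to count the surviving tracks and print the mean displacement between detected and tracked positions:

// count successfully tracked points and their average displacement
double total = 0;
int tracked = 0;
for (size_t i = 0; i < points1.size(); ++i) {
    if (!status[i]) continue;                  // KLT lost this point
    total += norm(points2[i] - points1[i]);    // distance moved, in pixels
    ++tracked;
}
cout << "tracked " << tracked << "/" << points1.size()
     << ", mean displacement " << (tracked ? total / tracked : 0.0)
     << " px" << endl;

On a truly static scene the mean displacement should be close to zero; in my case it isn't.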
As per AldurDisciple's suggestion, I think I am detecting noise as features. The black images below are the differences between consecutive frames, which show the noise. The next ones are the original images, followed by the images with the detected features.
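For reference, the difference images were produced along these lines (a sketch, not necessarily the exact code used for the images below):

// per-pixel absolute difference between consecutive frames;
// anything non-black is sensor noise, since the scene is static
Mat diff;
absdiff(img_1, img_2, diff);
imshow("frame difference", diff);
waitKey(0);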
My goal is to use this information to find the change in the robot's position over time.
I used

GaussianBlur(currImageDepth, currImageDepth, Size(9, 9), 0, 0);

to reduce the noise, but it didn't help.
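For context, this is roughly where the blur would sit in the pipeline above (a sketch; whether to smooth the frames fed to both the detector and the tracker, rather than a single image in place, is part of what I am unsure about):

// smooth both frames identically so detection and tracking see the same input
Mat img_1_s, img_2_s;
GaussianBlur(img_1, img_1_s, Size(9, 9), 0, 0);
GaussianBlur(img_2, img_2_s, Size(9, 9), 0, 0);
detector->detect(img_1_s, keypoints_1);
KeyPoint::convert(keypoints_1, points1, vector<int>());
calcOpticalFlowPyrLK(img_1_s, img_2_s, points1, points2, status, err, winSize, 1, termcrit, 0, 0.001);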