Questions tagged [feature-tracking]

Feature tracking is a fundamental part of computer vision and consists of extracting, from the images of a video, information that is likely to be trackable from frame to frame.
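
To make the definition concrete, here is a minimal sketch of frame-to-frame feature tracking with OpenCV's Python bindings, in the Kanade-Lucas-Tomasi (KLT) style that several questions under this tag use. The video path and all parameter values are illustrative placeholders, not part of the tag description.

    # Minimal KLT-style tracking sketch (assumes OpenCV's Python bindings;
    # "video.mp4" and the parameter values are illustrative placeholders).
    import cv2

    cap = cv2.VideoCapture("video.mp4")
    ok, frame = cap.read()
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Feature extraction: pick Shi-Tomasi corners that are worth tracking.
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=7)

    while True:
        ok, frame = cap.read()
        if not ok or prev_pts is None or len(prev_pts) == 0:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Feature tracking: estimate where each point moved in the new frame.
        next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                         prev_pts, None)
        # Keep only the points that were tracked successfully.
        prev_pts = next_pts[status.ravel() == 1].reshape(-1, 1, 2)
        prev_gray = gray
    cap.release()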

A step that precedes feature tracking is feature extraction. In pattern recognition and image processing, feature extraction is a special form of dimensionality reduction.

When the input data to an algorithm is too large to be processed and is suspected to be highly redundant (e.g. the same measurement in both feet and meters), it is transformed into a reduced representation called a set of features (also named a feature vector).

Transforming the input data into the set of features is called feature extraction. If the features are chosen carefully, the feature set is expected to capture the relevant information from the input data, so that the desired task can be performed using this reduced representation instead of the full-size input.
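
As a small illustration of this reduction, the sketch below uses OpenCV's ORB detector (an illustrative choice; the text above does not prescribe a method): a whole image is reduced to a few hundred keypoints, each described by a compact 32-byte feature vector. The file name is a placeholder.

    # Feature extraction as dimensionality reduction (assumes OpenCV's Python
    # bindings; ORB and "image.png" are illustrative choices).
    import cv2

    gray = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(gray, None)

    # A 640x480 image holds 307,200 pixel values; the reduced representation
    # is at most 500 keypoints with one 32-byte descriptor each.
    print(len(keypoints), descriptors.shape)  # -> n, (n, 32) with n <= 500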

Types of image features

  • Edges Edges are points where there is a boundary (or an edge) between two image regions. In general, an edge can have almost arbitrary shape and may include junctions. In practice, edges are usually defined as sets of points in the image that have a strong gradient magnitude. Furthermore, some common algorithms chain high-gradient points together to form a more complete description of an edge. These algorithms usually place some constraints on the properties of an edge, such as shape, smoothness, and gradient value. Locally, edges have a one-dimensional structure.

  • Corners / interest points The terms corner and interest point are used somewhat interchangeably and refer to point-like features in an image that have a local two-dimensional structure. The name "corner" arose because early algorithms first performed edge detection and then analysed the edges to find rapid changes in direction (corners). These algorithms were later developed so that explicit edge detection was no longer required, for instance by looking for high levels of curvature in the image gradient. It was then noticed that so-called corners were also being detected on parts of the image that were not corners in the traditional sense (for instance, a small bright spot on a dark background may be detected). These points are frequently known as interest points, but the term "corner" is used by tradition.

  • Blobs / regions of interest Blobs provide a complementary description of image structures in terms of regions, as opposed to corners, which are more point-like. Nevertheless, blob descriptors often contain a preferred point (a local maximum of an operator response or a center of gravity), which means that many blob detectors may also be regarded as interest-point operators. Blob detectors can detect areas of an image that are too smooth to be detected by a corner detector. Consider shrinking an image and then performing corner detection: the detector will respond to points that are sharp in the shrunk image but may be smooth in the original image. At this point the difference between a corner detector and a blob detector becomes somewhat vague; to a large extent, the distinction can be remedied by including an appropriate notion of scale.

  • Ridges For elongated objects, the notion of ridges is a natural tool. A ridge descriptor computed from a grey-level image can be seen as a generalization of a medial axis. From a practical viewpoint, a ridge can be thought of as a one-dimensional curve that represents an axis of symmetry and, in addition, has an attribute of local ridge width associated with each ridge point. It is, however, algorithmically harder to extract ridge features from general classes of grey-level images than edge, corner, or blob features. Nevertheless, ridge descriptors are frequently used for road extraction in aerial images and for extracting blood vessels in medical images. (A minimal sketch with one detector per feature type follows this list.)
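
Here is a minimal sketch with one OpenCV detector per feature type. The specific detectors and the file name are illustrative choices; core OpenCV has no comparably simple ridge detector, though the opencv-contrib ximgproc module provides one.

    # One illustrative detector per feature type (assumes OpenCV's Python
    # bindings; "image.png" is a placeholder).
    import cv2
    import numpy as np

    gray = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)

    # Edges: Canny chains high-gradient points into edge curves.
    edges = cv2.Canny(gray, threshold1=100, threshold2=200)

    # Corners / interest points: Harris responds to local 2-D structure.
    harris = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    corners = harris > 0.01 * harris.max()  # boolean corner mask

    # Blobs: region-like structures that may be too smooth for a corner detector.
    blobs = cv2.SimpleBlobDetector_create().detect(gray)

    # Ridges: not shown; opencv-contrib's cv2.ximgproc.RidgeDetectionFilter
    # is one option for grey-level ridge extraction.
    print(edges.sum(), corners.sum(), len(blobs))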

17 questions
5 votes · 3 answers

Feature detector and descriptor for low-resolution images

I am working with low-resolution (VGA), JPEG-compressed image sequences for visual navigation on a mobile robot. At the moment I am using SURF for detecting keypoints and extracting descriptors from the images, and FLANN for tracking them. I…
5 votes · 0 answers

Filtering MatOfDMatch

Refer to http://docs.opencv.org/2.4.2/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html At some point in my code I invoke myDescriptorMatcher.match(descriptors, result); Now, if I want to filter the resulting matches, I…
4 votes · 2 answers

Android: Using calcOpticalFlowPyrLK with MatOfPoint2f

I have been unable to use calcOpticalFlowPyrLK with MatOfPoint2f. I declared my types as follows: private Mat mPreviousGray; // previous gray-level image private List points; // tracked features private…
user1378489 · 125 · 2 · 8
3 votes · 1 answer

Feature tracking not working correctly on low-resolution images

I am using SIFT for feature detection and calcOpticalFlowPyrLK for feature tracking in images. I am working on low-resolution images (590x375 after cropping) taken from a Microsoft Kinect. // feature detection cv::Ptr detector =…
Ruturaj · 630 · 1 · 6 · 20
2 votes · 2 answers

Why is no feature being tracked using KLT tracking in Matlab?

I am trying to track some features (extracted using a multiscale Harris detector) between two frames with the Kanade-Lucas-Tomasi (KLT) algorithm, using the functions you can find here (MathWorks documentation). I cannot understand what goes wrong.…
user942458 · 141 · 1 · 1 · 5
2 votes · 2 answers

Feature tracking WinForms

I would like to extend my WinForms app with a feature that allows me to monitor which functions are used by the users. The idea is to count how many times e.g. a button has been clicked, or a popup was opened. I want to know which features are…
Martin Moser · 6,219 · 1 · 27 · 41
1 vote · 2 answers

KLT tracker in OpenCV not working properly with Python

I am using the KLT (Kanade-Lucas-Tomasi) tracking algorithm to track the motion of traffic in India. I am tracking the flow of one side of the traffic properly, but the other side, which is also moving in the frame, is not detected at all. Algorithm…
1 vote · 1 answer

Re-establishing new feature points using Matlab's vision functions

I am using Matlab's built-in vision functions and pre-made example code to track feature points. In my sample video, the camera pans horizontally, introducing new objects and scenery to the field of view while previous objects and scenery move out of…
1 vote · 0 answers

Tracking motion blurred objects in an image

I am trying to track the location of a moving object in an image. My camera has very low sensitivity, resulting in long exposure times, so my object becomes heavily motion-blurred. I am trying to track it with NCC (correlation) using the OpenCV function…
Ysch · 752 · 2 · 9 · 24
1 vote · 1 answer

Inaccurate tracking when drawing calcOpticalFlow's output feature vector

I have been trying to develop a simple feature tracking program. The user outlines an area on the screen with their mouse, and a mask is created for this area and passed to goodFeaturesToTrack. The features found by the function are then drawn on…
szakeri · 149 · 2 · 9
0 votes · 0 answers

In Lucas-Kanade Optical Flow Method, what can be the maximum Euclidean Distance between tracked point pairs?

I am using Lucas-Kanade Optical Flow Method to track the points from one image to the next one. In OpenCV, there is an easy to use function: void cv::calcOpticalFlowPyrLK ( InputArray prevImg, InputArray nextImg, InputArray …
Milan · 1,743 · 2 · 13 · 36
0 votes · 1 answer

Real time keypoint detection algorithm

I need to measure the speed of a conveyor belt under a surveillance camera. After years of wear the belt is basically texture-less; it's even difficult to see whether the belt is moving if nothing is on top of it. I'm trying to solve this problem as an…
user416983 · 974 · 3 · 18 · 28
0 votes · 1 answer

CV4.1: Failed Assertion in function detectAndCompute level>=0

I'm currently working on a little algorithm using ORB. It has to recalculate keypoints and descriptors at some point, since their location and size change. However, calling detectAndCompute with the "useExistingKepoints" flag on fails at the…
A13XI5 · 27 · 6
0 votes · 1 answer

Why do we need a coarse-to-fine strategy to solve the optical flow problem (feature tracking) in practice?

Why do we need a coarse-to-fine strategy to solve the optical flow problem (feature tracking) in practice? If we do not use such methods, what will happen?
0 votes · 3 answers

Shape tracking after filtered detection

I am using a Kinect for computer vision software. I have my program set up so that it filters out everything beyond a certain distance, and once something close enough and large enough to be a hand enters, my program assumes it is one. However, I…
Malfist · 31,179 · 61 · 182 · 269