3

The stereo_match.cpp example converts left and right images into a disparity map and a point cloud. I want to adapt this example to compute the disparity and point cloud from 2 consecutive frames of a single calibrated camera. Is this possible? If this example isn't suitable for my purpose, what are the steps to obtain what I want?

Cody Gray - on strike
Fobi

3 Answers

2

A disparity map, in a stereo system, is used to obtain depth information - the distance to objects in the scene. For that, you need the distance between the cameras (the baseline) to be able to convert disparity values into real-world dimensions.
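For reference (this relation isn't stated in the answer itself; it's the standard pinhole, rectified-stereo model): with baseline B, focal length f in pixels, and disparity d in pixels, the depth of a point is

    Z = f * B / d

so without a known baseline the reconstruction is only defined up to scale.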

On the other hand, if you have consecutive frames from a static camera, I suppose you want the differences between them. You can obtain that with an optical flow algorithm. Dense optical flow is calculated for each pixel in the image, much like disparity, and it outputs the direction and magnitude of motion. The most common optical flow algorithms are sparse - they track only a set of "strong", well-defined points.
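A minimal sketch of dense optical flow with OpenCV's Farneback implementation (not part of the original answer; the file names and parameter values are placeholders):

```cpp
#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    // Load two consecutive frames (placeholder file names).
    cv::Mat prev = cv::imread("frame0.png", cv::IMREAD_GRAYSCALE);
    cv::Mat next = cv::imread("frame1.png", cv::IMREAD_GRAYSCALE);
    if (prev.empty() || next.empty()) return 1;

    // Dense optical flow: one 2D motion vector per pixel.
    cv::Mat flow;                       // CV_32FC2, (dx, dy) per pixel
    cv::calcOpticalFlowFarneback(prev, next, flow,
                                 0.5,   // pyramid scale
                                 3,     // pyramid levels
                                 15,    // window size
                                 3,     // iterations per level
                                 5,     // polynomial neighborhood
                                 1.2,   // polynomial sigma
                                 0);    // flags

    // Example readout: motion vector at the image center.
    cv::Point2f v = flow.at<cv::Point2f>(prev.rows / 2, prev.cols / 2);
    std::printf("flow at center: (%f, %f)\n", v.x, v.y);
    return 0;
}
```

For the sparse variant, cv::goodFeaturesToTrack plus cv::calcOpticalFlowPyrLK tracks only a set of well-defined corners instead of every pixel.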

It may make sense to use a disparity algorithm if you have a static scene but you move the camera, simulating the two cameras of a stereo rig.

Sam
1

Yes, if the camera (or the scene) is moving.

Martin Beckett
  • Yes, I move my calibrated camera and take a video of a static object. What are the steps to obtain a disparity map from 2 consecutive frames of this video? – Fobi Jan 31 '12 at 09:58
  • @Fobi - exactly the same as a stereo rig. Identify matching points (e.g. SURF features) and solve for the camera displacement vector; then for each matched point in the scene you have a disparity and a baseline (see the sketch below these comments). – Martin Beckett Jan 31 '12 at 13:48
  • Solve for the camera displacement vector? Can you be more specific? Which OpenCV functions do I have to use? – Fobi Jan 31 '12 at 19:15
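A rough sketch of the pipeline Martin describes, using OpenCV (this is an illustration, not his code: ORB is substituted for SURF because SURF sits in the non-free xfeatures2d module, and the image names and intrinsics are placeholders):

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Two consecutive frames from the moving camera (placeholder names).
    cv::Mat img1 = cv::imread("frame0.png", cv::IMREAD_GRAYSCALE);
    cv::Mat img2 = cv::imread("frame1.png", cv::IMREAD_GRAYSCALE);

    // Camera intrinsics from calibration (placeholder values).
    cv::Mat K = (cv::Mat_<double>(3, 3) << 700, 0, 320,
                                             0, 700, 240,
                                             0,   0,   1);

    // 1. Detect and match features between the two frames.
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat des1, des2;
    orb->detectAndCompute(img1, cv::noArray(), kp1, des1);
    orb->detectAndCompute(img2, cv::noArray(), kp2, des2);

    cv::BFMatcher matcher(cv::NORM_HAMMING, true);  // cross-check matching
    std::vector<cv::DMatch> matches;
    matcher.match(des1, des2, matches);

    std::vector<cv::Point2f> pts1, pts2;
    for (const cv::DMatch& m : matches) {
        pts1.push_back(kp1[m.queryIdx].pt);
        pts2.push_back(kp2[m.trainIdx].pt);
    }

    // 2. Solve for the camera displacement: essential matrix, then R and t.
    cv::Mat mask, R, t;
    cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC, 0.999, 1.0, mask);
    cv::recoverPose(E, pts1, pts2, K, R, t, mask);

    std::cout << "R = " << R << "\nt (up to scale) = " << t << std::endl;
    return 0;
}
```

Note that recoverPose returns the translation only as a unit vector, so the baseline - and therefore metric depth - is known only up to an overall scale unless you obtain the scale from elsewhere.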
0

I suppose we cannot calculate an accurate disparity map from a single camera. In computing a disparity map we basically assume that the vertical pixel coordinate of a point is the same in both images of a stereo rig and only the horizontal pixel coordinate changes; in a monocular image sequence this may not hold, because the camera moves arbitrarily between two consecutive frames.
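One hedged addition (not part of this answer): if you estimate the relative pose (R, t) between the two frames, as sketched in the comments above, you can rectify them so that the equal-row assumption holds again and a standard block matcher applies. A minimal sketch under that assumption, with t known only up to scale:

```cpp
#include <opencv2/opencv.hpp>

// Rectify two frames of a monocular sequence given the relative pose (R, t)
// estimated between them, then compute a disparity map. K, dist, R and t are
// assumed to come from calibration and pose recovery; t has unit length, so
// the resulting depths are only defined up to scale.
cv::Mat disparityFromTwoFrames(const cv::Mat& frame1, const cv::Mat& frame2,
                               const cv::Mat& K, const cv::Mat& dist,
                               const cv::Mat& R, const cv::Mat& t)
{
    cv::Size size = frame1.size();

    // Rectification transforms that make the epipolar lines horizontal.
    cv::Mat R1, R2, P1, P2, Q;
    cv::stereoRectify(K, dist, K, dist, size, R, t,
                      R1, R2, P1, P2, Q, cv::CALIB_ZERO_DISPARITY);

    // Warp both frames into the rectified geometry.
    cv::Mat map1x, map1y, map2x, map2y, rect1, rect2;
    cv::initUndistortRectifyMap(K, dist, R1, P1, size, CV_32FC1, map1x, map1y);
    cv::initUndistortRectifyMap(K, dist, R2, P2, size, CV_32FC1, map2x, map2y);
    cv::remap(frame1, rect1, map1x, map1y, cv::INTER_LINEAR);
    cv::remap(frame2, rect2, map2x, map2y, cv::INTER_LINEAR);

    // Standard stereo matching on the rectified pair, as in stereo_match.cpp.
    cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(0, 64, 7);
    cv::Mat disp;
    sgbm->compute(rect1, rect2, disp);   // 16-bit fixed-point disparity
    return disp;
}
```

Because t comes out of pose recovery with unit length, the disparities (and any point cloud reprojected with Q) are correct only up to an unknown global scale.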