
Is it possible to get a depth/disparity map from a moving camera? Say I capture an image at location x, travel 5 cm, and capture another picture; from those two images I then calculate the depth map.
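(For reference, with a rectified pair the depth at each pixel follows from the disparity as Z = f·B/d, where f is the focal length in pixels and B the baseline. A minimal sketch of that conversion, with f = 700 px and B = 0.05 m as placeholder values, not calibrated ones:)

    // Minimal sketch (illustrative, not the question's code): disparity -> depth
    // for a rectified pair.
    Mat disp32;
    disparity.convertTo(disp32, CV_32F);  // NB: some matchers (e.g. CPU StereoBM)
                                          // return fixed-point disparity*16
    const float f = 700.0f;  // focal length in pixels (assumed, from calibration)
    const float B = 0.05f;   // baseline in metres: the 5 cm moved between shots
    Mat depth(disp32.size(), CV_32F);
    for (int y = 0; y < disp32.rows; ++y)
        for (int x = 0; x < disp32.cols; ++x) {
            float d = disp32.at<float>(y, x);
            depth.at<float>(y, x) = (d > 0.0f) ? f * B / d : 0.0f;  // 0 = invalid
        }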

I have tried using block matching (StereoBM) in OpenCV, but the result is not good. The first and second images are as follows: first image, second image, disparity map (colour), disparity map

My code is as follows:

    #include <opencv2/opencv.hpp>
    #include <opencv2/cudastereo.hpp>
    using namespace cv;

    cuda::GpuMat leftGPU, rightGPU;
    leftGPU.upload(left);
    rightGPU.upload(right);

    cuda::GpuMat disparityGPU;    // raw disparity
    cuda::GpuMat disparityGPU2;   // colour-coded disparity
    Mat disparity, disparity2;

    // numDisparities = 256, blockSize = 3
    Ptr<cuda::StereoBM> stereo = cuda::createStereoBM(256, 3);
    stereo->setMinDisparity(-39);
    stereo->setPreFilterCap(61);
    stereo->setPreFilterSize(3);
    stereo->setSpeckleRange(1);
    stereo->setUniquenessRatio(0);

    stereo->compute(leftGPU, rightGPU, disparityGPU);
    cuda::drawColorDisp(disparityGPU, disparityGPU2, 256);
    disparityGPU.download(disparity);
    disparityGPU2.download(disparity2);

    imshow("display img", disparity2);  // show the downloaded Mat, not the GpuMat
    waitKey(0);

How can I improve on this? In the colour disparity map there are quite a lot of errors (e.g. the tall circle is red, the same colour as some parts of the table). Also, the disparity map contains small noise (all the black dots in the picture); how can I fill those black dots with nearby disparities?

user9870

1 Answer


It is possible if the object is static.

To properly do stereo matching, you first need to rectify your images! If you don't have calibrated cameras, you can do this from detected feature points. Also note that for cuda::StereoBM the default minimum disparity is 0 (I have never used CUDA, but I don't think your setMinDisparity call is doing anything; see this answer).
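A minimal sketch of that feature-based rectification, assuming the grayscale `left`/`right` images from the question (the detector choice, match filtering, and RANSAC threshold are illustrative, not prescriptive):

    #include <opencv2/opencv.hpp>
    using namespace cv;

    // Detect and match features between the two views.
    Ptr<ORB> orb = ORB::create(2000);
    std::vector<KeyPoint> kp1, kp2;
    Mat desc1, desc2;
    orb->detectAndCompute(left, noArray(), kp1, desc1);
    orb->detectAndCompute(right, noArray(), kp2, desc2);

    BFMatcher matcher(NORM_HAMMING, /*crossCheck=*/true);
    std::vector<DMatch> matches;
    matcher.match(desc1, desc2, matches);

    std::vector<Point2f> pts1, pts2;
    for (const DMatch& m : matches) {
        pts1.push_back(kp1[m.queryIdx].pt);
        pts2.push_back(kp2[m.trainIdx].pt);
    }

    // Estimate the fundamental matrix robustly, keep only RANSAC inliers.
    Mat inlierMask;
    Mat F = findFundamentalMat(pts1, pts2, FM_RANSAC, 3.0, 0.99, inlierMask);
    std::vector<Point2f> in1, in2;
    for (size_t i = 0; i < pts1.size(); ++i)
        if (inlierMask.at<uchar>((int)i)) {
            in1.push_back(pts1[i]);
            in2.push_back(pts2[i]);
        }

    // Compute rectifying homographies and warp both images.
    Mat H1, H2;
    stereoRectifyUncalibrated(in1, in2, F, left.size(), H1, H2);
    Mat leftRect, rightRect;
    warpPerspective(left, leftRect, H1, left.size());
    warpPerspective(right, rightRect, H2, right.size());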

Now, in your example images corresponding points are only about one row apart, so your disparity map actually doesn't look too bad. Maybe a larger blockSize would already be enough in this special case.
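For example, something like this (the values are guesses to experiment with, not tuned settings):

    // Illustrative only: fewer disparities and a larger (odd) block size give
    // smoother matches on low-texture surfaces, at the cost of edge detail.
    Ptr<cuda::StereoBM> stereo = cuda::createStereoBM(128, 19);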

Finally, your objects have very low texture, so the block matching algorithm cannot find much to match.

jodis
  • Yeah, my object is static. Regarding camera calibration, is that just normal single-camera calibration? I ask because, looking online, I found that people do stereo camera calibration (which I assume is different from a mono camera setup). Block matching is also one of the problems I have been struggling with: since the surface is smooth, it is very hard for the algorithm to find matches. Besides block matching, do you know of any algorithm that works better under low texture? – user9870 Feb 01 '18 at 22:17
  • You can't really do stereo calibration with one moving camera unless the camera is always moving the exact same vector. So, if you do single camera calibration, you can `undistort`. You could try `cuda::StereoBeliefPropagation` but I don't know this algorithm. If speed is not too important, you can also try [post filtering](https://docs.opencv.org/trunk/d3/d14/tutorial_ximgproc_disparity_filtering.html) – jodis Feb 02 '18 at 01:06
  • Did this help? I believe in OpenCV stereo matching the best you can do about low textured regions is SGBM and post filtering. If that's not enough you might want to check out https://github.com/t-taniai/LocalExpStereo that is currently top in the Middlebury dataset. Or you might consider to use another sensor, for example a time of flight camera like the Kinect v2. – jodis Feb 05 '18 at 22:59
  • Hi, thank you! The stereo belief propagation is too slow... Your comment about the low texture helps a lot. I tried using a Laplacian mask to sharpen the image (therefore more texture), and now it gives a very good result. On top of that, I use a thresholding method (threshold to zero) to remove the background while preserving the texture of the object itself (a sketch of this follows below). – user9870 Feb 05 '18 at 23:13
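A minimal sketch of the preprocessing described in the last comment, assuming a grayscale image (the sharpening kernel and the threshold value are illustrative guesses, not the asker's exact parameters):

    // Laplacian sharpening mask: boosts local contrast so block matching has
    // more texture to lock onto (classic 4-neighbour sharpening kernel).
    Mat kernel = (Mat_<float>(3, 3) <<  0, -1,  0,
                                       -1,  5, -1,
                                        0, -1,  0);
    Mat sharpened, foreground;
    filter2D(left, sharpened, -1, kernel);
    // Threshold-to-zero: pixels below the cutoff become 0 (background removed),
    // pixels above keep their value (object texture preserved).
    threshold(sharpened, foreground, 40, 255, THRESH_TOZERO);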