
I've been trying to generate a point cloud from a pair of rectified stereo images. I first obtained the disparity map using OpenCV's SGBM implementation, and then converted it to a point cloud using the following code:

for (int u = 0; u < left.rows; ++u)
{
    for (int v = 0; v < left.cols; ++v)
    {
        if (disp.at<int>(u, v) == 0) continue;
        pcl::PointXYZRGB p;
        p.x = v;
        p.y = u;
        p.z = (left_focalLength * baseLine * 0.01 / disp.at<int>(u, v));
        std::cout << p.z << std::endl;
        cv::Vec3b bgr(left.at<cv::Vec3b>(u, v));
        p.b = bgr[0];
        p.g = bgr[1];
        p.r = bgr[2];
        pc.push_back(p);
    }
}

left is the left image and disp is the output disparity image in CV_16S. Is my disparity-to-point-cloud conversion correct, or is there a problem with the disparity values?

I've included a screenshot of the disparity map, point cloud and original left image.

Thank you!

screenshot

Karnik Ram

1 Answer


I'm not confident with this language, but I noticed one thing. Assuming this line converts disparity to depth (Z):

p.z = (left_focalLength * baseLine * 0.01/ disp.at<int>(u,v));

What is the 0.01 for? If this calculation would otherwise give you depths (Z) from 1 to 10, that factor shrinks the range to 0.01 to 0.1. Every depth ends up close to zero, so you get a flat result (flat image = nearly constant depth).
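To make that concrete (illustrative numbers only, not taken from the question): with a focal length of 700 px and a baseline of 0.54 m, a disparity of 50 px gives

Z = focalLength * baseline / disparity = 700 * 0.54 / 50 = 7.56

so that point should sit about 7.56 m away; the extra 0.01 turns it into 0.0756, and every other depth is squeezed toward zero in the same way.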

PS: I do not see in your code the X,Y conversion from the u,v pixel values using the Z value. Something like

X = u*Z/f
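A minimal sketch of what that conversion could look like in C++, also subtracting the principal point (the names fx, fy, cx, cy and baseline are placeholders for your calibration values, not taken from the question):

    #include <pcl/point_types.h>

    // Sketch only: back-project one pixel to a 3D point.
    // fx, fy = focal lengths in pixels, (cx, cy) = principal point,
    // baseline in the unit you want Z to come out in.
    pcl::PointXYZRGB backProject(int row, int col, float disparity,
                                 float fx, float fy, float cx, float cy,
                                 float baseline)
    {
        pcl::PointXYZRGB p;
        p.z = fx * baseline / disparity;   // depth from disparity
        p.x = (col - cx) * p.z / fx;       // column -> X
        p.y = (row - cy) * p.z / fy;       // row    -> Y
        return p;
    }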
marcoresk
  • Thanks for your reply. The 0.01 is for converting the focal length and baseline values to cm. I used the OpenCV function calibrationMatrixValues, which outputs those values in mm from the camera intrinsic matrix. However, I'm now using OpenCV's reprojectImageTo3D function to do the disparity-to-depth mapping and I'm getting 3D results, but it's still severely lacking - https://drive.google.com/file/d/0B_2Iwz6JbA4pVi0tWVEyUFc1Tlk/view?usp=sharing – Karnik Ram Jan 08 '17 at 10:06
  • @KarnikRam I thought about another (maybe stupid) detail: did you know that the SGBM algorithm provides a disparity value 16 times greater? (you have to divide by 16 to retrieve the real value) – marcoresk Jan 08 '17 at 15:12
  • Yes :) You can find my full code here - https://drive.google.com/file/d/0B_2Iwz6JbA4pSHV0TVR2TG9INk0/view?usp=sharing ; you can ignore the LoadMetadata and LoadCalibration functions. – Karnik Ram Jan 09 '17 at 20:13
  • @KarnikRam I am not used to C++, but if these two lines are consecutive, `computeDisparity(left, right, disp);` `cv::reprojectImageTo3D(disp, xyz, Q, false, CV_32F);` where do you divide disp by 16? (I see you scaled disp8 from 0 to 255, but that is another thing. You used disp in the 3D reconstruction.) – marcoresk Jan 10 '17 at 13:51
  • Hi, I did use disp8 for computeDisparity too. No change. – Karnik Ram Jan 11 '17 at 23:05
  • @KarnikRam of course: disp8 is a scaled value, not the real one. `Disp8 = disp*255/(numdisparities*16)`, correct? My question was: where is the division by 16 in your code? Try something like `disp2 = disp/16`, then use disp2 to reproject (see the sketch after these comments). If that does not work, the problem may be elsewhere – marcoresk Jan 12 '17 at 09:46
  • Hey, yes that was tried as well. Not much of a change in the output :/ – Karnik Ram Jan 19 '17 at 05:01
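
A minimal sketch of the flow discussed in the comments above, assuming left and right are the rectified 8-bit images and Q is the reprojection matrix from stereoRectify (the function name reproject and the SGBM parameters are placeholders, shown only to illustrate where the division by 16 goes):

    #include <opencv2/calib3d.hpp>

    // SGBM returns fixed-point disparities scaled by 16 (CV_16S),
    // so convert to float pixels before reprojecting.
    cv::Mat reproject(const cv::Mat& left, const cv::Mat& right, const cv::Mat& Q)
    {
        cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(0, 64, 9);
        cv::Mat disp16;                       // CV_16S, disparity * 16
        sgbm->compute(left, right, disp16);

        cv::Mat disp;                         // real disparity in pixels
        disp16.convertTo(disp, CV_32F, 1.0 / 16.0);

        cv::Mat xyz;                          // CV_32FC3 image of 3D points
        cv::reprojectImageTo3D(disp, xyz, Q, true, CV_32F);
        return xyz;
    }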