Mat generateDisparityMap(Mat& left, Mat& right)
{
  if (left.empty() || right.empty())
    return left;

  // Convert both inputs to 8-bit grayscale, as LIBELAS expects.
  Mat lb, rb;
  cvtColor(left, lb, COLOR_BGR2GRAY);
  cvtColor(right, rb, COLOR_BGR2GRAY);

  const Size imsize = lb.size();
  const int32_t dims[3] = { imsize.width, imsize.height, imsize.width };

  Mat leftdpf = Mat::zeros(imsize, CV_32F);
  Mat rightdpf = Mat::zeros(imsize, CV_32F);

  Elas::parameters param(Elas::MIDDLEBURY);
  param.postprocess_only_left = true;
  Elas elas(param);
  elas.process(lb.data, rb.data, leftdpf.ptr<float>(0),
               rightdpf.ptr<float>(0), dims);

  // Convert the float disparities to 8-bit for display (fixed scale
  // factor of 5, saturating at 255).
  Mat show = Mat(left.rows, left.cols, CV_8UC1, Scalar(0));
  leftdpf.convertTo(show, CV_8U, 5.);

  // Find the maximum 8-bit disparity value...
  int max_disp = -1;
  for (int i = 0; i < imsize.width; i++) {
    for (int j = 0; j < imsize.height; j++) {
      if (show.at<uchar>(j, i) > max_disp)
        max_disp = show.at<uchar>(j, i);
    }
  }

  // ...and stretch the values to the full 0-255 range.
  for (int i = 0; i < imsize.width; i++) {
    for (int j = 0; j < imsize.height; j++) {
      show.at<uchar>(j, i) =
          (int)max(255.0 * (float)show.at<uchar>(j, i) / max_disp, 0.0);
    }
  }

  return show;
}
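One aside worth double-checking (this is an assumption about the LIBELAS interface, not a confirmed cause of the distortion): Elas::process takes dims as {width, height, bytes per line}, and the last entry only equals the width when the grayscale Mats are continuous. A defensive variant:

// dims = {width, height, bytes per line}; using the Mat's step also
// covers non-continuous buffers.
const int32_t dims[3] = {
    imsize.width,
    imsize.height,
    static_cast<int32_t>(lb.step)
};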

Please have a look at the image of the disparity map generated by my code, shown in the link.

I have seen the results generated by the LIBELAS library online and they seemed to be perfect. My code runs without any errors, but I am getting vague, distorted maps. Please let me know of any modifications to my code. I am using the Visual Studio 2017 IDE with OpenCV 3.3.0 and the contrib libraries.

EDIT: I tried the disparity code given in the link https://github.com/opencv/opencv_contrib/blob/master/modules/ximgproc/samples/disparity_filtering.cpp . However, the disparity map appears to be wrong in some areas: some objects far from the camera appear brighter than closer objects. I tried to calculate the actual depth by multiplying the disparity values by the calibration matrix Q. The depths calculated are way off from the real-world measured values. I am confident that the matrix Q is correct, since my rectified images look good, and my square size for calibration (0.05 meters) was also accurate. My disparity image is at the link https://photos.app.goo.gl/YWPc6yq7XAmUpkk62 .

This is the code I added to calculate the actual depth from the filtered disparity image stored in filtered_disp_vis.

fs1["Q"] >> Q;
    Mat Image;
    Mat V = Mat(4, 1, CV_64FC1);
    Mat pos = Mat(4, 1, CV_64FC1);
    vector< Point3d > points;
    //float fMaxDistance = static_cast<float>((1. / Q.at<double>(3, 2)) * Q.at<double>(2, 3));
    //filtered_disp_vis.convertTo(filtered_disp_vis, CV_64FC1, 1.0 / 16.0, 0.0);
    //imshow("filtered disparity", filtered_disp_vis);


    // outputDisparityValue is single 16-bit value from disparityMap
    // DISP_SCALE = 16
    //float fDisparity = outputDisparityValue / (float)StereoMatcher::DISP_SCALE;
    //float fDistance = fMaxDistance / fDisparity;
    reprojectImageTo3D(filtered_disp_vis, Image, Q, false, CV_32F);
    //cout << Image;
    for (int i = 0; i < filtered_disp_vis.cols; i++)
    {
    for (int j = 0; j < filtered_disp_vis.rows; j++)
    {
    int d = filtered_disp_vis.at<uchar>(j, i);
    //filtered_disp_vis.convertTo(filtered_disp_vis, CV_32F, 1.0 / 16.0, 0.0);



    //int l = img_left.at<uchar>(j, i);
    //cout << "(" << j << "," << i << ")" << "=" << d;
    //out << endl;

    // if low disparity, then ignore
    /*if (d < 2) {
    continue;
    }*/
    // V is the vector to be multiplied to Q to get
    // the 3D homogenous coordinates of the image point
    V.at<double>(0, 0) = (double)(i);
    V.at<double>(1, 0) = (double)(j);
    V.at<double>(2, 0) = (double)d;
    V.at<double>(3, 0) = 1.;
    pos = Q * V; // 3D homogeneous coordinate
    double X = pos.at<double>(0, 0) / pos.at<double>(3, 0);
    double Y = pos.at<double>(1, 0) / pos.at<double>(3, 0);
    double Z = pos.at<double>(2, 0) / pos.at<double>(3, 0);

    if (i == 446 && j == 362)
    {
    cout << "(" << j << "," << i << ")" << " =   ";

    cout << X << " " << Y << " " << Z << " " << d;
    cout << endl;
    }

    Mat point3d_cam = Mat(3, 1, CV_64FC1);
    point3d_cam.at<double>(0, 0) = X;
    point3d_cam.at<double>(1, 0) = Y;
    point3d_cam.at<double>(2, 0) = Z;
    // transform 3D point from camera frame to robot frame
    //Mat point3d_robot = XR * point3d_cam + XT;
    points.push_back(Point3d(point3d_cam));
    }
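For reference, the per-pixel Q * V multiplication above should agree with reading straight out of the reprojectImageTo3D output, provided the input disparities are true pixel values in float. A minimal sketch, assuming filtered_disp is the raw CV_16S fixed-point map produced by the ximgproc filter (scaled by 16; the 8-bit image from getDisparityVis() is for display only):

// Convert the fixed-point disparity (16ths of a pixel) to float pixels.
Mat disp32;
filtered_disp.convertTo(disp32, CV_32F, 1.0 / 16.0);

// Reproject and read one probe pixel; true = flag invalid disparities.
Mat xyz;
reprojectImageTo3D(disp32, xyz, Q, true, CV_32F);
Vec3f p = xyz.at<Vec3f>(362, 446); // at<>(row, col)
cout << "X = " << p[0] << " Y = " << p[1] << " Z = " << p[2] << endl;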

Where am I going wrong? Any modifications to my snippet, or different recommendations for getting proper disparity maps with accurate depth values, would be appreciated.

1 Answer


I think it is not a LIBELAS problem, but rather a conversion problem. I am not sure of the range of your resulting images, but it is normally not a good idea to convert CV_32F directly to CV_8U: you will lose information, and the result depends on the range as well...

Also, you normalize the values after converting to 8U; this can cause problems, since information has already been lost and the maximum you find may be the wrong value.
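For example, a tiny sketch of the clipping with illustrative values: with the scale factor of 5 used in the question, every disparity above 51 saturates to 255 before the max search even runs:

cv::Mat f = (cv::Mat_<float>(1, 3) << 10.f, 51.f, 80.f);
cv::Mat u;
f.convertTo(u, CV_8U, 5.0);
// u is now [50, 255, 255]: 51 and 80 have become indistinguishable,
// so the later max/normalize loops only see clipped data.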

If it is only for display, you can use the normalize function from OpenCV:

cv::Mat show;
cv::normalize(leftdpf, show, 0, 255, cv::NORM_MINMAX, CV_8U);

This will put into show a CV_8U image with the values normalized to fit the range 0-255.
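For example, to display it (and keep the float map for any measurements):

cv::imshow("disparity", show);
cv::waitKey(0);
// Depth/3D computations should still read from leftdpf (CV_32F); the
// 0-255 stretch in show is per-image and only meant for viewing.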

api55
  • I tried it, but I still get the same vague, distorted map – Abhilesh Borode Nov 11 '17 at 02:27
  • @AbhileshBorode Can you post the 2 images you are using as input? Have you tried changing the params? – api55 Nov 11 '17 at 18:58
  • My 2 input images: https://photos.app.goo.gl/RYq09qMKDMI22NaH2 . I tried changing the subsampling parameter to 1, but it made things worse. I experimented with the other parameters by trial and error, since I don't understand them properly; it didn't make much difference. – Abhilesh Borode Nov 12 '17 at 04:59
  • @AbhileshBorode maybe try the other parameter set (ROBOTICS) instead of MIDDLEBURY – api55 Nov 12 '17 at 11:06
  • Switching to ROBOTICS gives a very dark, incomplete disparity map. – Abhilesh Borode Nov 12 '17 at 20:52
  • Could you recommend any other library for real-time generation of disparity maps? My final aim is to calculate distance for obstacle avoidance. – Abhilesh Borode Nov 14 '17 at 07:24
  • @AbhileshBorode I have only done disparity maps once. I used [OpenCV functions](https://docs.opencv.org/trunk/d2/d6e/classcv_1_1StereoMatcher.html); you have 3 classes, and one of them is in CUDA, which is probably real-time. Here you can [find a tutorial](https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_calib3d/py_depthmap/py_depthmap.html) (for Python OpenCV, but it can easily be ported to C++) for one of the classes. The contrib module has some filters as well; [here is another tutorial](https://docs.opencv.org/3.3.1/d3/d14/tutorial_ximgproc_disparity_filtering.html), this one in C++ – api55 Nov 14 '17 at 08:21
  • I tried the contrib filter module; however, there are some problems. Could you please look into my edited post? – Abhilesh Borode Nov 16 '17 at 18:32
  • What I see is that you are using the NORMALIZED image for your calculations... that is a big no. The normalized image is good for visualization purposes only. You have to use the return value, the one that comes with float numbers. Try it out, and you may get more consistent values. – api55 Nov 17 '17 at 08:34
  • I am sorry, but could you be more specific? I am not performing any normalizing in the edited code posted. I tried changing the data type of the d variable to float, but the values are still off by 15 cm. – Abhilesh Borode Nov 17 '17 at 23:58
  • `filtered_disp_vis.at<uchar>(j, i);` you access it as uchar, so either that line is wrong and should be float, or you are using the show image from the first code. – api55 Nov 18 '17 at 09:43
  • Just to clarify, the 1st and 2nd codes are completely separate project files. When I write `filtered_disp_vis.at<float>(j, i);` it gives me an exception-handling error; any data type except uchar gives me the same error. The second code is the part added to the code mentioned in the EDIT GitHub link. – Abhilesh Borode Nov 19 '17 at 06:28
  • Are you using getDisparityVis() somewhere? That also creates an uchar image, for visualization only. – api55 Nov 19 '17 at 10:21
  • Oh yes, I am using getDisparityVis(). What should I replace it with, then? This function returns a filtered output without any noise. – Abhilesh Borode Nov 19 '17 at 19:59
  • I think the input to that function is the correct Mat; it should be the 16-bit one, I think... I have never used the filters in contrib; I have only used this once, more for testing than for real use. – api55 Nov 19 '17 at 20:17
  • When you used disparity maps, you didn't use any filters? Every map-generation method I am using has a lot of noise in it, so filters are pretty much mandatory for me. – Abhilesh Borode Nov 19 '17 at 21:26
  • As I told you, I used it for testing only, on freely available data. Most of the time I work with point clouds directly, but I noticed possible mistakes like the one with uchar... in these cases uchar images are used only for visualization, and they introduce clamping or normalization for display. – api55 Nov 20 '17 at 09:17
  • Do you have any idea how much accuracy is possible for the depth Z after the 3D reconstruction? Is it supposed to be exact, or will there always be some inaccuracy? – Abhilesh Borode Nov 21 '17 at 00:16
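As a rough guide to the accuracy question in the last comment: stereo depth follows Z = f*B/d, so the depth error caused by a disparity error dd grows quadratically with distance, dZ ~ Z^2 * dd / (f*B). A minimal sketch of this error model, with hypothetical focal-length, baseline, and matching-error values (substitute your own from the calibration):

#include <iostream>

int main() {
    // Hypothetical values, replace with your own calibration:
    // in the OpenCV Q matrix, Q(2,3) holds the focal length f (pixels)
    // and 1/Q(3,2) is (up to sign) the baseline (meters).
    const double f  = 700.0; // focal length in pixels (assumed)
    const double B  = 0.12;  // baseline in meters (assumed)
    const double dd = 0.5;   // assumed matcher disparity error, in pixels

    for (double Z = 0.5; Z <= 4.0; Z += 0.5) {
        double d  = f * B / Z;              // disparity of a point at depth Z
        double dZ = (Z * Z) / (f * B) * dd; // first-order depth error
        std::cout << "Z = " << Z << " m -> d = " << d
                  << " px, expected depth error ~ " << dZ << " m\n";
    }
    return 0;
}

So some inaccuracy is inherent, and it gets worse with distance; sub-pixel matching and a longer baseline both reduce it.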