
I cannot see what I am doing wrong, even after checking the code a thousand times. The algorithm is very simple: I have a CV_16U image called disp containing the disparity values, and I am trying to build the u-disparity and v-disparity histograms from it in order to detect obstacles.

Mat v_disparity, u_disparity;
v_disparity=Mat::zeros(disp.rows,numberOfDisparities*16, CV_16U);
u_disparity=Mat::zeros(numberOfDisparities*16,disp.cols, CV_16U);
for(int i = 0; i < disp.rows; i++)
{
    const ushort* d = disp.ptr<ushort>(i);    // d[j] is the disparity value at (i, j)
    for (int j = 0; j < disp.cols; ++j)
    {
        v_disparity.at<uchar>(i,(d[j]))++;
        u_disparity.at<uchar>((d[j]),j)++;
    }
}

The problem appears when I use imshow to display both disparities after converting them to 8-bit unsigned: the u-disparity is wrong. It has the shape it should, but compressed into the left half of the image, and the right half of the pixels is black.
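
For completeness, the conversion/display step looks roughly like this (a sketch; the exact scaling factors are my assumption, since that code is not shown above):

    // Sketch of the display step (assumed, not shown in the question):
    // scale the 16-bit vote counts down to 8 bits before imshow.
    Mat v_disp8, u_disp8;
    v_disparity.convertTo(v_disp8, CV_8U, 255.0 / disp.cols);  // a v-disparity bin holds at most disp.cols votes
    u_disparity.convertTo(u_disp8, CV_8U, 255.0 / disp.rows);  // a u-disparity bin holds at most disp.rows votes
    imshow("v-disparity", v_disp8);
    imshow("u-disparity", u_disp8);
    waitKey(0);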

  • After some debugging, I saw that the value of d[j] is always equal to disp.at<ushort>(i,j). I guess the fault is on the line u_disparity.at<uchar>((d[j]),j)++; but I can't see it – agregorio Nov 11 '15 at 09:42

1 Answer


I finally figured it out. I had simply used the wrong template type when accessing the pixel values of the u- and v-disparity images. In the v-disparity I didn't notice it, since I assumed there were no pixels in disp with high disparity values anyway. To sum up, the following lines:

    v_disparity.at<uchar>(i,(d[j]))++;
    u_disparity.at<uchar>((d[j]),j)++;

must be replaced by:

    v_disparity.at<ushort>(i,(d[j]))++;
    u_disparity.at<ushort>((d[j]),j)++;

since both images are CV_16U, and the type uchar is 8-bit, not 16-bit. With at<uchar> the column index is interpreted as a byte offset within the row, so on a 16-bit image only the first half of each row's elements can ever be reached, which is exactly why the right half of the u-disparity stayed black.
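
Putting the fix back into the original loop, the complete corrected version would look like this (a sketch; declaring d as a const ushort pointer is my addition, since the declaration was not shown in the question):

    Mat v_disparity = Mat::zeros(disp.rows, numberOfDisparities*16, CV_16U);
    Mat u_disparity = Mat::zeros(numberOfDisparities*16, disp.cols, CV_16U);

    for (int i = 0; i < disp.rows; i++)
    {
        const ushort* d = disp.ptr<ushort>(i);     // d[j] is the disparity at (i, j)
        for (int j = 0; j < disp.cols; ++j)
        {
            v_disparity.at<ushort>(i, d[j])++;     // per-row disparity histogram
            u_disparity.at<ushort>(d[j], j)++;     // per-column disparity histogram
        }
    }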

agregorio