
I have problems understanding whether I get a Euclidean reconstruction result or just a projective one. So first let me tell you what I've done:

I have two stereo images. They are SEM images and are eucentrically tilted; the tilt difference is 5°. Using SURF correspondences and RANSAC, I calculate the fundamental matrix with the normalized 8-point algorithm. Then the images are rectified and I do a dense stereo matching:

import cv2
import numpy as np

minDisp = -16
numDisp = 16 - minDisp  # number of disparities, must be divisible by 16
stereo = cv2.StereoSGBM_create(minDisparity=minDisp,
                               numDisparities=numDisp)
# SGBM returns fixed-point disparities scaled by 16
disp = stereo.compute(imgL, imgR).astype(np.float32) / 16.0

That gives me a disparity map, e.g. this 5×4 example matrix (the values range from -16 to 16). I mask out the bad pixels (marked with the minimum value, -17) and compute the z-component of my images from the flattened disparity array.

                -0.1875 -0.1250 -0.1250  0
                -0.1250 -0.1250 -0.1250 -17
    disp =      -0.0625 -0.0625 -0.1250 -17
                -0.0625 -0.0625  0       0.0625
                 0       0       0.0625  0.1250

# create mask that eliminates the bad pixel values (= minimum value, -17)
mask = disp != disp.min()
dispMasked = disp[mask]

# compute z-component (parallel projection, eucentric tilt)
zWorld = np.float32(dispMasked * p / (2 * np.sin(tilt)))

It's a simplified form of a real triangulation that assumes parallel projection and uses trigonometric relations. The pixel constant p was determined with a calibration object, so I get the height in mm; the disparity was calculated in pixels. The resulting point cloud looks quite good, but all points share a small constant tilt, i.e. the reconstructed point-cloud plane is tilted by some angle.
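For illustration, the masking and height computation above can be wrapped into one small function that also builds the lateral coordinates of the point cloud. This is a minimal sketch: `p` (mm per pixel), `tilt` (tilt difference in radians) and the bad-pixel value -17 come from the description above, while the function name and the use of the same pixel constant for x and y are my assumptions:

```python
# Minimal sketch (assumed names): masked disparity -> (x, y, z) point cloud
# under the parallel-projection model described above.
import numpy as np

def point_cloud(disp, p, tilt, bad_val=-17.0):
    mask = disp != bad_val                       # drop invalid disparities
    ys, xs = np.nonzero(mask)                    # pixel coordinates of valid points
    z = disp[mask] * p / (2.0 * np.sin(tilt))    # height from disparity (mm)
    x = xs * p                                   # lateral coordinates in mm,
    y = ys * p                                   # assuming the same pixel constant
    return np.column_stack((x, y, z)).astype(np.float32)
```

Applied to the 5×4 example matrix above (with its two -17 entries masked out), this returns an 18×3 array of points.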

My question is now: is this point cloud in real Euclidean coordinates, or do I have a projective reconstruction (is that the same as an affine reconstruction?) that still differs from the Euclidean result by an unknown transformation? I ask because I don't have a real calibration matrix, and I didn't use a real triangulation method with central projection, i.e. with camera-center coordinates, focal length and image-point coordinates.

Any suggestions or literature are appreciated. :)

Best regards and thanks in advance!

Miau
  • Nice Q, but post some code here and add the programming language to keywords. – Dalen Nov 09 '17 at 20:38
  • Hey, I used Matlab and Python. I didn't post the code because I think the post will get too long and people will hesitate to read it. – Miau Nov 09 '17 at 23:24
  •
    You are on SO, which means, words without code are little more than nothing. Of course you won't post irrelevant pieces. Just ones that show how your point cloud is calculated. You may post it with abbreviated examples e.g. use two 8x8 matrices. How on Earth should I guess whether your result is projected or not. It most probably is. So, without seeing the code, I can just tell you that and be done. – Dalen Nov 09 '17 at 23:56
  • Ok, I added some example code which I think should be sufficient. If not tell me what you need to know! I think that the important things are that rectified images are used and that with these rectified images a disparity map is calculated. The point cloud is generated using the disparity and the equation I posted above. – Miau Nov 10 '17 at 10:34

0 Answers