
We've been asked to do 3D reconstruction (a master's module taken as part of a PhD), and I'm pulling my hair out. I'm not sure if I'm missing any steps or if I've done them wrong. I've tried googling for code and replacing its functions with mine, just to see if I can get correct results that way, which I can't.

I'll just go through the steps of what I'm doing so far, and I hope one of you can tell me I'm missing something obvious:

Images I'm using: https://i.stack.imgur.com/44vdI.jpg

  • Load the left and right calibration images and click on corresponding points to get P1 and P2.

  • Use RQ decomposition to get K1 and K2 (and R1, R2, t1, t2, though I don't seem to use them anywhere; originally I tried R = R1*R2', t = t2 - t1 to create my new P2 after setting P1 to canonical (I|0), but that didn't work either).

  • Set P1 to be canonical (I | 0).

  • Calculate the fundamental matrix F, and the corresponding points im1, im2, using RANSAC.

  • Get the colour of the pixels at those points.

  • Get the essential matrix E by doing K2' * F * K1.

  • Get the 4 candidate projection matrices from E, then select the right one.

  • Triangulate the matches using P1, P2, im1, im2 to get 3D points.

  • Use a scatter plot to plot the 3D points, giving each the RGB value of the pixel at that point.

  • My unsatisfactory result:

    http://imgur.com/OZXXBEC

At the moment, since I'm not getting ANYWHERE, I'd like to go for the simplest option and work my way up. FYI, I'm using MATLAB. If anyone's got any tips at all, I'd really love to hear them.
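For concreteness, here is a minimal MATLAB sketch of the essential-matrix steps above (a sketch only, assuming `F`, `K1`, and `K2` already exist and using the textbook column-vector convention); the four-candidate decomposition follows Hartley & Zisserman:

```matlab
% Sketch of the essential-matrix steps above (textbook column-vector
% convention). Assumes F, K1, K2 have already been computed.
E = K2' * F * K1;                   % essential matrix from F and intrinsics

[U, ~, V] = svd(E);
W = [0 -1 0; 1 0 0; 0 0 1];
Ra = U * W  * V';                   % two candidate rotations
Rb = U * W' * V';
if det(Ra) < 0, Ra = -Ra; end      % force proper rotations (det = +1)
if det(Rb) < 0, Rb = -Rb; end
t = U(:, 3);                        % translation, up to sign and scale

% Four candidate second cameras; the right one is the one that puts
% triangulated points in front of BOTH cameras (positive depth).
P1  = K1 * [eye(3), zeros(3, 1)];
P2s = {K2 * [Ra, t], K2 * [Ra, -t], K2 * [Rb, t], K2 * [Rb, -t]};
```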

  • It feels like even if the points were right, you would notice: those are really few points. – Ander Biguri Jan 15 '16 at 16:21
  • Hey Ander, it's me, Naval! What do you mean? The image I put up before was wrong (I had tried running on the calibration images instead, to no avail); I've replaced it with the one using the scene images. – Gentatsu Jan 15 '16 at 16:34
  • I mean that in that result image, the number of points is quite small for a 3D model (or so it seems!). How are you choosing the points that you triangulate? How is that last figure plotted? I might be understanding it wrong, but isn't that supposed to be a 3D plot? – Ander Biguri Jan 15 '16 at 16:37
  • I use `detectSURFFeatures, extractFeatures, matchFeatures` to get the points, run them through RANSAC to get F, and get rid of outliers. Then I use `scatter3(X(:,1),X(:,2),X(:,3))` to plot them. It is, I rotated it now and changed the image. – Gentatsu Jan 15 '16 at 16:44
  • SURF features are quite good because they are "consistent" in the imaging. They are good points to match between the images and get a good fundamental matrix. Once you have the fundamental matrix, you know the transform from one image to the other, so you can then choose any number of points to transform from one image to the other! Try to get a dense point cloud after this step, so you can see better whether the whole model has been properly reconstructed. I still believe there are too few points to draw a conclusion about the correctness of that result. – Ander Biguri Jan 15 '16 at 16:49
  • To add a bit, I believe that often people get **at least** a point per pixel in the model. – Ander Biguri Jan 15 '16 at 16:56
  • When you say get a dense point cloud, do you mean I should perhaps do a bounding box of my matches and then use all of the points inside it? I thought it should still give me a sparse lookalike, you know? – Gentatsu Jan 15 '16 at 17:02
  • I mean something like: detect the object in image 1, and transform (using the fundamental matrix) all of its points to image 2, then triangulate these to get 3D points. You should be able to plot >10k points in that 3D scatter. Example: http://markmckellar.com/blog/wp-content/uploads/2014/02/radiohead01.png You want something as dense as this! – Ander Biguri Jan 15 '16 at 17:06
  • Wow! Is there something I can use in MATLAB to detect the object easily enough? I ran the code using my original left and right calibration images (with manually input points), and it gave me the right result, but not if I read the file in and use SURF to get the features, etc. – Gentatsu Jan 15 '16 at 17:12
  • If you want something quick just to test I suggest you segment the gray table and the white background using something easy (such as region growing), and just select the rest of points. http://uk.mathworks.com/matlabcentral/fileexchange/19084-region-growing – Ander Biguri Jan 15 '16 at 17:16
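A minimal sketch of the matching pipeline described in these comments, using the Computer Vision System Toolbox functions named above (`I1` and `I2` are assumed to be the grayscale left and right images):

```matlab
% Sketch of the SURF matching + RANSAC pipeline from the comments.
% Assumes grayscale images I1, I2 are already loaded.
pts1 = detectSURFFeatures(I1);
pts2 = detectSURFFeatures(I2);
[f1, vpts1] = extractFeatures(I1, pts1);
[f2, vpts2] = extractFeatures(I2, pts2);
pairs = matchFeatures(f1, f2);
m1 = vpts1(pairs(:, 1));
m2 = vpts2(pairs(:, 2));
[F, inliers] = estimateFundamentalMatrix(m1, m2, 'Method', 'RANSAC');
m1 = m1(inliers);                   % keep only the RANSAC inliers
m2 = m2(inliers);
```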

2 Answers


It turns out it wasn't working for a weird reason: I was using MATLAB's `detectSURFFeatures`, which was giving me inaccurate matching pairs. I never assumed it could be wrong, but one of my coursemates had the same issue. I changed it to `detectMinEigenFeatures` instead and it works fine! Here's my result now; it's not perfect, but it's much, much better:

[image: the improved reconstruction result]
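As a sketch, the fix described above amounts to a one-line detector swap, with everything downstream of the detector left unchanged (`I1`, `I2` are the input images as before):

```matlab
% The fix is a detector swap; the rest of the pipeline stays the same.
% pts1 = detectSURFFeatures(I1);      % before: SURF blob detector
pts1 = detectMinEigenFeatures(I1);    % after: minimum-eigenvalue corners
pts2 = detectMinEigenFeatures(I2);
```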

Gentatsu
  • SIFT features are better, but they are patented :(. Still, transform the whole image; it will look way cooler! ;) – Ander Biguri Jan 19 '16 at 09:07
  • How would I go about this? I get the min and max of my sparse corresponding points, and then try to do `im2Coords = F' * im1Coords`, but that doesn't work. Do I need to normalise/denormalise, or am I doing it all wrong? – Gentatsu Jan 22 '16 at 15:20
  • I guess it gives me the epipolar line as opposed to the point, but how would I get more points, then? – Gentatsu Jan 22 '16 at 15:23
  • Ah, it's ok, it's due in today, so I won't have time to make many more changes and write about them! – Gentatsu Jan 22 '16 at 15:58

If you already have P1 and P2, then you can simply triangulate matching pairs of points from the two images. There is no need to estimate the fundamental matrix.

If you only have the intrinsics (K for a single camera, or K1 and K2 for two different cameras), then your approach is valid; a sketch of steps 4 and 5 follows the list:

  1. Estimate the fundamental matrix
  2. Get the essential matrix
  3. Decompose E into R and t
  4. Set P1 to canonical, and compute P2 from K, R, and t.
  5. Triangulate matching points using P1 and P2.
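In textbook (column-vector) notation, steps 4 and 5 amount to something like this sketch, assuming `K1`, `K2`, and the `R`, `t` recovered from `E`:

```matlab
% Sketch of steps 4 and 5 in textbook (column-vector) convention.
P1 = K1 * [eye(3), zeros(3, 1)];    % canonical first camera
P2 = K2 * [R, t];                   % second camera from the decomposed E
```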

This approach is illustrated in an example in the Computer Vision System Toolbox.

In either case, you should check your code carefully, and make sure all the matrices make sense. MATLAB's convention is to multiply a row vector by a matrix, while many textbooks multiply a matrix by a column vector. So matrices may need to be transposed.
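Concretely, the two conventions differ by a transpose, so a textbook 3-by-4 projection matrix corresponds to a 4-by-3 MATLAB camera matrix (a sketch, assuming the `P1`, `P2`, and matched points `m1`, `m2` from above):

```matlab
% Textbook:  x (3x1) ~ P * X,       P is 3-by-4, X is 4-by-1 homogeneous.
% MATLAB:    x (1x3) ~ X * camMat,  camMat is 4-by-3, X is 1-by-4.
% The two are related by a transpose:
camMatrix1 = P1';
camMatrix2 = P2';
worldPoints = triangulate(m1, m2, camMatrix1, camMatrix2);  % N-by-3 points
```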

And before that, plot your point matches using `showMatchedFeatures` to make sure they make sense.
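For example, with the matched point sets `m1`, `m2` from earlier:

```matlab
% Bad matches show up as crossing or wildly inconsistent lines.
showMatchedFeatures(I1, I2, m1, m2, 'montage');
```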

Dima
  • I calculate P1 and P2 from my calibration images, and then K1 and K2 from P1 and P2. That's pretty much exactly what I'm doing. Yeah, that's why I've been trying to use other people's code in places where I might've screwed up, but it seems to be all OK. My features are only points on the object in the scene, so that's fine. For the calibration image, they're a bit more noisy. – Gentatsu Jan 15 '16 at 18:16
  • When I set P1 to [R t], do I multiply it by K1 (K1 * P1)? And likewise, with the P2 retrieved from the essential matrix, do I do P2 = K2*P2? – Gentatsu Jan 15 '16 at 20:38