
I know that for a 3D reconstruction you can recover everything except the scale factor from two images.

But can you calculate where a point from the first image sits in the second image? The scale factor shouldn't matter here, should it?

// Back-project the pixel into normalized camera coordinates (on the z = 1 plane)
sensorCheckPoint.x = (pixelCheckPoint.x - principalPoint.x) / focal;
sensorCheckPoint.y = (pixelCheckPoint.y - principalPoint.y) / focal;
sensorCheckPoint.z = 1;

// Transform into the coordinate frame of the second camera
sensorCheckPointNew = R * sensorCheckPoint + t;

I got R and t by decomposing the essential matrix with recoverPose(), but the new point doesn't even land inside the image.
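
For reference, this is roughly how I obtain them (a minimal sketch; pts1/pts2 are my matched pixel coordinates, focal and pp are placeholder intrinsics):

#include <opencv2/calib3d.hpp>

// Matched pixel coordinates from image 1 and image 2
std::vector<cv::Point2f> pts1, pts2;
double focal = 1000.0;     // focal length in pixels (placeholder value)
cv::Point2d pp(640, 360);  // principal point (placeholder value)

cv::Mat mask;
cv::Mat E = cv::findEssentialMat(pts1, pts2, focal, pp, cv::RANSAC, 0.999, 1.0, mask);

cv::Mat R, t;  // note: t comes back with unit length -- its scale is unknown
cv::recoverPose(E, pts1, pts2, R, t, focal, pp, mask);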

Could someone tell me where my thinking goes wrong? Thanks

EDIT

I only know the pixel coordinates of the checkPoint, not its real 3D coordinates.

EDIT2

Suppose you know R and t, but not the length of t. It should then be possible to assume a depth z1 for a point M that is known in both images and derive the resulting t, right? Then it should be possible to calculate, for every point in the first image, where it sits in the second.

z2 then depends on t. But what exactly is the dependency between z2 and t?
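
Writing out what I mean (my understanding of the standard two-view relation, with m1 and m2 the normalized homogeneous image coordinates of M):

$$ z_2\,\mathbf{m}_2 \;=\; z_1\,R\,\mathbf{m}_1 \;+\; s\,\hat{\mathbf{t}} $$

where $\hat{\mathbf{t}}$ is the unit-length translation from recoverPose and s is its unknown scale. Once z1 is fixed, this gives three scalar equations in the two unknowns z2 and s.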


EDIT3

If I just assume z = 1 for M1 and calculate R and t from the two images, then everything in the relation above is known except z2 and s. Therefore I need to solve these linear equations to get s, and with it the real t.

I use the first two of the three scalar equations to solve for the two variables, but the outcome doesn't seem right.

Is the equation not correct?
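
To make it concrete, here is a sketch of the system I'm solving (the values of m1 and m2 are placeholders; R and t come from recoverPose, and z1 = 1 is the assumption from above):

#include <opencv2/core.hpp>

// Normalized homogeneous coordinates of M in both images (placeholder values)
cv::Matx31d m1(0.10, -0.05, 1.0);
cv::Matx31d m2(0.12, -0.04, 1.0);
cv::Matx33d R;  // rotation from recoverPose
cv::Matx31d t;  // unit-length translation from recoverPose

cv::Matx31d Rm1 = R * m1;  // z1 = 1, so z1 * R * m1 reduces to R * m1

// Unknowns z2 and s in: z2 * m2 = R * m1 + s * t
// Taking the first two of the three scalar equations:
cv::Matx22d A(m2(0), -t(0),
              m2(1), -t(1));
cv::Matx21d b(Rm1(0), Rm1(1));
cv::Matx21d x = A.solve(b);  // x(0) = z2, x(1) = s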

user3077796
  • Is the content in https://en.wikipedia.org/wiki/Epipolar_geometry already familiar to you? – Rethunk Nov 11 '15 at 03:55
  • Yes, I know that from a point in the first image you can only get an epipolar line in the second. But I thought that if you don't care about the 3D coordinates and just want to know where the new pixel coordinate sits, there might be a possibility – user3077796 Nov 11 '15 at 10:12

2 Answers


I think you are close in your understanding of the geometry.

I don't care about the real 3D system. It only has to be relatively correct, so that I can recalculate the position of any point from the first image to the second image.

The position in the second image will depend on the actual 3D position of the point in the first image, so you really have to materialize the pixel into an actual 3D point to do what you want. Without depth information, the pixel in image 1 can be anywhere along a line (its epipolar line) in image 2.
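
As an aside, if you want to see that line, OpenCV can compute it for you from the fundamental matrix (a sketch; F is assumed to be available, e.g. from findFundamentalMat, and pixelCheckPoint is the pixel from your code, taken here as a cv::Point2f):

#include <opencv2/calib3d.hpp>

std::vector<cv::Point2f> pts = { pixelCheckPoint };  // the pixel in image 1
std::vector<cv::Vec3f> lines;                        // each line as (a, b, c): a*x + b*y + c = 0
cv::computeCorrespondEpilines(pts, 1, F, lines);     // its epipolar line in image 2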

You compute homogeneous coordinates for sensorCheckPoint. In this setting it can be insightful to view these coordinates as actual 3D coordinates in the camera's coordinate system (at z = 1). Imagine yourself at that camera, looking down Z, and consider the ray going from the camera center through this point. Since you know R and t, you can express this ray in world space as well (with a bit of mental gymnastics to find the correct transformation). This ray is just a vector in 3D: you can normalize it and multiply it by a factor to get 3D points anywhere along the ray, at a known distance from the camera center.
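
In code it could look like this (a sketch; I take camera 1 as the world frame, so the ray is already in world space, and the chosen distance is arbitrary since the scale is free):

#include <opencv2/core.hpp>

// The normalized image point is itself the ray direction in camera-1 coordinates
cv::Point3d ray = sensorCheckPoint;
ray *= 1.0 / cv::norm(ray);  // normalize to unit length

// Materialize a 3D point at a chosen distance along the ray.
// With camera 1 taken as the world frame, this is already a world-space point.
double distance = 5.0;       // arbitrary choice -- the scale is unknown anyway
cv::Point3d X = distance * ray;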

Once you have an actual 3D point in world space, you can use projectPoints() to project it onto the image plane of the other camera.
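
Continuing the sketch: with camera 1 as the world frame, the pose of the second camera is exactly the (R, t) you got from recoverPose, so (cameraMatrix being your intrinsics; pass your distortion coefficients instead of noArray() if your pixels are not undistorted):

#include <opencv2/calib3d.hpp>

cv::Mat rvec;
cv::Rodrigues(R, rvec);  // projectPoints() wants the rotation as a Rodrigues vector

std::vector<cv::Point3d> objectPoints = { X };
std::vector<cv::Point2d> imagePoints;
cv::projectPoints(objectPoints, rvec, t, cameraMatrix, cv::noArray(), imagePoints);
// imagePoints[0] is where the materialized point lands in image 2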

Joan Charmant
  • I'm not completely sure I understood what you mean, so I wrote down the formula as I think I got it. But I still have a problem. – user3077796 Nov 16 '15 at 14:05
  • Thanks, it works now!! The equation in EDIT 3 is correct, but I had forgotten the camera matrix on one side of the equation, which is why the result couldn't be right. – user3077796 Nov 17 '15 at 09:16

And what makes you think that a point at z=1 should necessarily be visible in both cameras?

Francesco Callari
  • I tried to express the sensor coordinates as normalized homogeneous coordinates. Isn't z = 1 there? – user3077796 Nov 11 '15 at 10:05
  • I suggest you visualize (draw) what your R and t are telling you. – Francesco Callari Nov 11 '15 at 14:17
  • I think my calculated R is correct: I checked against a known rotation of 30° (measured with a tripod) and got around 28°. Therefore I think my t vector is correct as well. But yes, I don't know the length of the vector. How do I choose the correct one? I don't care about the real 3D system. It only has to be relatively correct, so that I can recalculate the position of any point from the first image to the second image. – user3077796 Nov 12 '15 at 10:29