
I created a simple test application to perform translation (T) and rotation (R) estimation from the essential matrix.

  1. Generate 50 random points.
  2. Project them to obtain pointSet1.
  3. Transform the points with the matrix (R|T).
  4. Project again to obtain pointSet2.
  5. Calculate the fundamental matrix F.
  6. Extract the essential matrix as E = K2^T F K1 (K1, K2 are the internal camera matrices).
  7. Use SVD to get E = UDV^T.
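The steps above can be sketched in numpy; the intrinsics, pose, and the plain (unnormalized) 8-point solve are my own assumptions for a noiseless synthetic test, not taken from the question:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical intrinsics and pose; any reasonable values work for this test.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
T = np.array([0.5, 0.2, 0.1])

def project(K, R, T, X):
    """Project 3-D points X (N x 3) through camera K(R|T); returns N x 2 pixels."""
    x = (K @ (R @ X.T + T[:, None])).T
    return x[:, :2] / x[:, 2:]

# Steps 1-4: random points in front of both cameras, projected twice.
X = rng.uniform(-1.0, 1.0, (50, 3)) + np.array([0.0, 0.0, 5.0])
pointSet1 = project(K, np.eye(3), np.zeros(3), X)
pointSet2 = project(K, R, T, X)

# Step 5: fundamental matrix via the 8-point algorithm (no Hartley
# normalization; acceptable only because the data is noiseless).
h1 = np.hstack([pointSet1, np.ones((50, 1))])
h2 = np.hstack([pointSet2, np.ones((50, 1))])
A = np.einsum('ni,nj->nij', h2, h1).reshape(50, 9)  # rows encode h2^T F h1 = 0
F = np.linalg.svd(A)[2][-1].reshape(3, 3)           # null vector of A
Uf, Sf, Vtf = np.linalg.svd(F)
F = Uf @ np.diag([Sf[0], Sf[1], 0.0]) @ Vtf         # enforce rank 2

# Step 6: essential matrix (same K for both cameras here, so K1 = K2 = K).
E = K.T @ F @ K

# Step 7: SVD of E.
U, D, Vt = np.linalg.svd(E)
```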

Then I calculate restoredR1 = UWV^T and restoredR2 = UW^TV^T, and I see that one of them equals the initial R.
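In numpy this check might look like the following sketch (the ground-truth pose is hypothetical; the sign fix on U and V is needed because E is only defined up to sign):

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix [v]_x."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Hypothetical ground-truth pose, used only to build a noiseless E.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
T = np.array([0.5, -0.2, 1.0])

E = skew(T) @ R
U, D, Vt = np.linalg.svd(E)
# Force U and V into SO(3); flipping a sign only replaces E with -E,
# which corresponds to the same pair of rotation candidates.
if np.linalg.det(U) < 0:
    U = -U
if np.linalg.det(Vt) < 0:
    Vt = -Vt

W = np.array([[0.0, -1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
restoredR1 = U @ W @ Vt
restoredR2 = U @ W.T @ Vt
# Exactly one of the two candidates equals the original R.
```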

But when I calculate the translation vector from restoredT = UZU^T (reading the vector off the skew-symmetric matrix), I only get T up to scale:

restoredT * max(T.x, T.y, T.z) = T
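This up-to-scale behaviour can be reproduced directly; the pose below is hypothetical, and Z is the usual [e3]_x matrix:

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix [v]_x."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Hypothetical pose; note the true translation does NOT have unit length.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
T = np.array([0.5, -0.2, 1.0])

E = skew(T) @ R
U, D, Vt = np.linalg.svd(E)

Z = np.array([[0.0, -1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
Tx = U @ Z @ U.T                  # skew matrix of the recovered translation
restoredT = np.array([Tx[2, 1], Tx[0, 2], Tx[1, 0]])
# restoredT is parallel to T (up to sign) but always has unit length,
# so the metric scale of the translation is lost.
```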

How to restore correct translation vector?


1 Answer


I understand! I don't need a real length estimate at this step. When I get the first image, I must set a metric transformation (scale factor) or estimate it via calibration from a known object. Later, when I receive the second frame, I calculate the normalized T and use the known 3D coordinates from the first frame to solve the equation (s*x2, s*y2, s)^T = K(R|lambda*T)(X, Y, Z, 1)^T for lambda; then lambda*T is the correct metric translation.
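A minimal numpy sketch of this lambda recovery (all values are hypothetical; one known 3D point suffices in the noise-free case, and more points would just enlarge the least-squares system):

```python
import numpy as np

# Hypothetical setup: known 3D point X in metric units (from the first
# frame), its pixel in the second frame, and the normalized translation.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
T_true = np.array([0.6, 0.3, 0.15])
t_hat = T_true / np.linalg.norm(T_true)   # what the essential matrix gives us

X = np.array([0.4, -0.3, 5.0])
p = K @ (R @ X + T_true)
m = p / p[2]                              # observed pixel (homogeneous)

# s * m = K R X + lambda * K t_hat  ->  least squares in (s, lambda).
a = K @ (R @ X)
b = K @ t_hat
A = np.column_stack([m, -b])
(s, lam), *_ = np.linalg.lstsq(A, a, rcond=None)
T_metric = lam * t_hat                    # translation with correct metric scale
```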

I checked it, and it is true. So... maybe someone knows a simpler solution?

  • This sounds like the solution proposed by Bae et al. in *Computational Re-Photography* (2010); I have googled around and found no different approach. – oarfish May 30 '15 at 07:31