
I have 9 pictures taken with an uncalibrated natural camera (which should mean f_x = f_y = f). The intrinsic parameters are the same for all the pictures. The pictures show the same 3D scene, taken from different 3D positions and orientations. On each image I can spot the 2D projections of several well-known 3D points (in total, 10 points per image).

I am asked to find the intrinsic parameters and the extrinsic parameters of all the pictures (i.e., the 3D position and 3D rotation of the camera for each one).

Is there a standard algorithm for this? I am also asked to use normalized image coordinates in order to reduce numerical errors (I'm not sure what this means: that the coordinates in projective geometry should have w = 1? Or norm = 1?). Moreover, I should use more than the minimal number of pictures, in order to reduce the effect of noise...
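To make the normalization question concrete: my current guess is that it means the Hartley-style normalization (translate the centroid of the image points to the origin and scale them so their mean distance from it is sqrt(2)), not w = 1 or unit norm. A NumPy sketch under that assumption (the function name `normalize_points` is my own):

```python
import numpy as np

def normalize_points(pts):
    """Hartley normalization: translate the centroid of the 2D points to
    the origin and scale so the mean distance from it is sqrt(2).
    pts: (N, 2) array of pixel coordinates.
    Returns (pts_norm, T) where T is the 3x3 similarity transform such
    that [u', v', 1]^T = T @ [u, v, 1]^T."""
    centroid = pts.mean(axis=0)
    mean_dist = np.linalg.norm(pts - centroid, axis=1).mean()
    s = np.sqrt(2) / mean_dist
    T = np.array([[s, 0.0, -s * centroid[0]],
                  [0.0, s, -s * centroid[1]],
                  [0.0, 0.0, 1.0]])
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    pts_norm = (T @ pts_h.T).T
    return pts_norm[:, :2], T
```

The idea (as I understand it) is to run any linear estimation on the normalized coordinates and then undo T at the end, which keeps the linear systems well-conditioned.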

P.S. I don't know if it is an advantage to mention, but all the 3D points that I know and can spot on the images belong to the same plane, and I can also spot 4 different lines that in 3D space are orthogonal two by two (making a rectangle).
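In case the planarity matters: as far as I understand, with coplanar points one can estimate a plane-to-image homography per picture with the DLT, which seems to be the first step of Zhang-style calibration. A sketch in NumPy (the name `dlt_homography` is mine, not from any library):

```python
import numpy as np

def dlt_homography(world_xy, image_uv):
    """Estimate the 3x3 homography H mapping coplanar world points
    (z = 0, given as (x, y)) to image points (u, v) with the DLT.
    Both inputs are (N, 2) arrays with N >= 4 points.
    Each correspondence gives two linear equations in the 9 entries
    of H; the solution is the null vector of the stacked system."""
    A = []
    for (x, y), (u, v) in zip(world_xy, image_uv):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # Smallest right singular vector minimizes ||A h|| with ||h|| = 1.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

With one homography per picture, Zhang's method (if I got it right) then turns each H into two linear constraints on the intrinsics, which is presumably why using more than the minimal number of pictures reduces the noise.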

Thank you for your time

Sam
  • So what exactly do you want to find? Camera positions (and orientations) for all 9 pictures? It looks like point set registration problem to me. – Archie Dec 28 '14 at 19:30
  • Yes... How can I do that? And what if I have several images, with different positions and rotations, of the same 3D scene without knowing the 3D points, but only exploiting some points that are the same across the different pictures? – Sam Dec 29 '14 at 17:57
  • There are various point set registration algorithms out there. Your problem can be reduced to a regression analysis: vary cameras transformations and re-project the points (from 3D space to camera view plane) till you hit the lowest distance error between projected points and those on your images. – Archie Dec 29 '14 at 23:40
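If I understand Archie's suggestion correctly, the quantity to minimize per camera would be something like the following re-projection error (a NumPy sketch; `K` is the intrinsics matrix, `R` and `t` the rotation and translation being varied by the optimizer):

```python
import numpy as np

def reprojection_error(K, R, t, world_pts, image_pts):
    """Sum of squared pixel distances between the observed 2D points and
    the projections of the 3D points under the camera (K, R, t).
    world_pts: (N, 3) 3D points; image_pts: (N, 2) observed pixels."""
    P = K @ np.hstack([R, t.reshape(3, 1)])          # 3x4 projection matrix
    X_h = np.hstack([world_pts, np.ones((len(world_pts), 1))])
    proj = (P @ X_h.T).T
    proj = proj[:, :2] / proj[:, 2:3]                # perspective division
    return np.sum((proj - image_pts) ** 2)
```

A generic nonlinear least-squares solver could then drive this to a minimum over all 9 cameras simultaneously (sharing one K, since the intrinsics are the same for every picture).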

0 Answers