
I use OpenCV 3.0.0 beta.

I calibrated my camera and I would like to project image points (2D) to points in 3D.

I saw the function void fisheye::projectPoints(InputArray objectPoints, OutputArray imagePoints, InputArray rvec, InputArray tvec, InputArray K, InputArray D, double alpha=0, OutputArray jacobian=noArray()), but I would like the opposite projection.

I know:

  • the camera matrix,
  • the distortion coefficients,
  • the translation vector,
  • the rotation vector, and
  • the distance between my 3D object point and the 3D origin.

How can I do this?

artoon
  • Isn't possible... How would you do it? The only way is to create a ray through the pixel and say that the object point is "somewhere" on that ray. If you know the depth range between camera and object, you can say "it is at THIS position on the ray"... so if you don't have some additional (e.g. depth) information, or multiple intersecting rays (= multiple cameras) for that pixel, you CAN'T "project" from image to object!! – Micka Mar 04 '15 at 11:50
  • Because projection from object to image reduces information (the distance), which can't be reconstructed ;) – Micka Mar 04 '15 at 11:51
  • Ok, I did not think of that. And if the object must be on a circular surface (represented by a mesh), would it be possible? – artoon Mar 04 '15 at 13:12
  • Ah well, if you know the intrinsic + extrinsic parameters of the camera and you know the mesh (position), you can shoot the same kind of ray and compute its intersection with the mesh(es) as the object point of that pixel! – Micka Mar 04 '15 at 13:34
  • Sorry, I edited my question. In fact, I don't have the mesh but only the distance between my 3D object and the 3D origin. So, could OpenCV give me the ray direction? – artoon Mar 04 '15 at 16:13
  • Let me guess, you don't have a mesh but some point cloud? If you have the normal vectors too, it might be enough. With "mesh" I meant some 3D object that the ray could hit, so if you know where your circular surface is placed, you can hit it with a ray, right? After all, OpenCV doesn't compute all that for you; you would have to implement the ray shooting and intersection computation yourself, I guess. – Micka Mar 04 '15 at 16:30
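
Putting these comments together, here is a minimal sketch (not from the thread; the function name pixelRay is hypothetical) of how the viewing ray of one pixel could be recovered with OpenCV's fisheye model, assuming K, D, rvec and tvec are CV_64F calibration results like the ones described in the question:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Returns the origin and direction (in world coordinates) of the viewing
    // ray that passes through a given pixel.
    void pixelRay(const cv::Point2f& pixel,
                  const cv::Mat& K, const cv::Mat& D,       // fisheye intrinsics from calibration
                  const cv::Mat& rvec, const cv::Mat& tvec, // extrinsics (world -> camera), CV_64F
                  cv::Vec3d& origin, cv::Vec3d& direction)
    {
        // Undo the fisheye distortion: 'out' holds normalized camera coordinates
        // (x, y), so the ray direction in the camera frame is (x, y, 1).
        std::vector<cv::Point2f> in{pixel}, out;
        cv::fisheye::undistortPoints(in, out, K, D);
        cv::Mat dirCam = (cv::Mat_<double>(3, 1) << out[0].x, out[0].y, 1.0);

        // X_cam = R * X_world + t, so the camera centre in world coordinates is
        // -R^T * t, and directions map back to the world frame with R^T.
        cv::Mat R;
        cv::Rodrigues(rvec, R);
        cv::Mat C = -R.t() * tvec;
        cv::Mat dirWorld = R.t() * dirCam;

        origin = cv::Vec3d(C.at<double>(0), C.at<double>(1), C.at<double>(2));
        direction = cv::normalize(cv::Vec3d(dirWorld.at<double>(0),
                                            dirWorld.at<double>(1),
                                            dirWorld.at<double>(2)));
    }

As the comments note, the ray alone is not enough; you still need extra information (a depth, a surface, or a second camera) to pin down a single 3D point on it.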

3 Answers


If you know the 3D geometry of the object and the corresponding 2D image points, then you can find the object pose from the 3D-2D point correspondences.

You need to know:

  • objectPoints: array of object points in the 3D object coordinate space,
  • imagePoints: array of corresponding 2D image points,
  • the camera matrix and distortion coefficients.

Then solvePnP() will estimate rvec together with tvec, which bring points from the model coordinate system into the camera coordinate system.
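
For illustration, a minimal sketch of such a call (the point values and camera matrix below are made up; only cv::solvePnP itself is the real API). Note that solvePnP expects the standard distortion model, so with a fisheye calibration one would typically undistort the image points first and pass empty distortion coefficients:

    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>

    int main()
    {
        // 3D points in the object (model) coordinate system (made-up values).
        std::vector<cv::Point3f> objectPoints = {
            {0.f, 0.f, 0.f}, {1.f, 0.f, 0.f}, {1.f, 1.f, 0.f}, {0.f, 1.f, 0.f}
        };
        // The corresponding 2D detections in the image, in pixels (made-up values).
        std::vector<cv::Point2f> imagePoints = {
            {320.f, 240.f}, {400.f, 238.f}, {402.f, 320.f}, {318.f, 322.f}
        };

        // Calibration data (placeholders; use your own calibration results).
        cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320,
                                                 0, 800, 240,
                                                 0,   0,   1);
        cv::Mat distCoeffs = cv::Mat::zeros(5, 1, CV_64F);

        // Estimates the rotation (rvec) and translation (tvec) that bring points
        // from the model coordinate system into the camera coordinate system.
        cv::Mat rvec, tvec;
        cv::solvePnP(objectPoints, imagePoints, K, distCoeffs, rvec, tvec);

        std::cout << "rvec: " << rvec.t() << std::endl
                  << "tvec: " << tvec.t() << std::endl;
        return 0;
    }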

Kornel
  • 1
    How to find out objectPoints and imagePoints after homography calculation with ORB matching ? – Nuibb Nov 04 '18 at 06:47

"Image points" are the coordinates of the grid corners in the image. Those are given in pixel scale.

"Object points" are the coordinates of the grid corners in "object space", that is, their position relative to each other.

So for example, you could have the top-left corner at image coordinates (127, 265), and its object coordinates would be (0, 0), since the top-left corner is the first across both axes. The next corner to its right could have coordinates (145, 263), and its object coordinates would be (1, 0) (that is, the corner on the second column, first row), etc.
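To make this concrete, here is a sketch of how the object points of a chessboard-style grid are commonly generated, with the image points coming from a corner detector (board size, square size and file name below are made-up example values):

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main()
    {
        cv::Size boardSize(9, 6);   // inner corners per row / column (example values)
        float squareSize = 25.0f;   // e.g. millimetres per square (example value)

        // Object points: grid positions in "object space", one per corner,
        // laid out row by row. The grid is planar, so z = 0.
        std::vector<cv::Point3f> objectPoints;
        for (int row = 0; row < boardSize.height; ++row)
            for (int col = 0; col < boardSize.width; ++col)
                objectPoints.emplace_back(col * squareSize, row * squareSize, 0.0f);

        // Image points: pixel coordinates of the same corners, found in the image.
        cv::Mat image = cv::imread("board.jpg", cv::IMREAD_GRAYSCALE);
        if (image.empty())
            return 1;
        std::vector<cv::Point2f> imagePoints;
        bool found = cv::findChessboardCorners(image, boardSize, imagePoints);
        // If found, objectPoints[i] and imagePoints[i] describe the same corner.
        return found ? 0 : 1;
    }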

JonathanMitchell

To resolve my problem, I implemented my own reprojection function. This function is the inverse of fisheye::projectPoints. It is specific to my problem because the distance between my 3D point and the origin is known.
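
The actual implementation isn't shown here, but one possible way to build such an inverse, assuming the known distance is measured from the world origin, is to shoot the pixel's viewing ray (as discussed in the comments above) and intersect it with a sphere of that radius centred at the origin. This is only a sketch of that idea, not the author's function; all names are hypothetical:

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    // Back-projects one pixel to a 3D world point, given that the point lies at a
    // known distance from the world origin. Returns false if the viewing ray never
    // reaches that distance. rvec/tvec are assumed to map world -> camera (CV_64F).
    bool backProject(const cv::Point2f& pixel, double distanceFromOrigin,
                     const cv::Mat& K, const cv::Mat& D,
                     const cv::Mat& rvec, const cv::Mat& tvec,
                     cv::Vec3d& worldPoint)
    {
        // 1. Pixel -> normalized camera coordinates, undoing the fisheye distortion.
        std::vector<cv::Point2f> in{pixel}, out;
        cv::fisheye::undistortPoints(in, out, K, D);
        cv::Mat dirCam = (cv::Mat_<double>(3, 1) << out[0].x, out[0].y, 1.0);

        // 2. Viewing ray in world coordinates: origin is the camera centre -R^T * t,
        //    direction is R^T * (x, y, 1).
        cv::Mat R;
        cv::Rodrigues(rvec, R);
        cv::Mat C = -R.t() * tvec;
        cv::Mat dirWorld = R.t() * dirCam;

        cv::Vec3d o(C.at<double>(0), C.at<double>(1), C.at<double>(2));
        cv::Vec3d d = cv::normalize(cv::Vec3d(dirWorld.at<double>(0),
                                              dirWorld.at<double>(1),
                                              dirWorld.at<double>(2)));

        // 3. Solve ||o + s*d|| = distanceFromOrigin. With d normalized this is the
        //    quadratic s^2 + 2(o.d)s + (|o|^2 - r^2) = 0.
        double b = 2.0 * o.dot(d);
        double c = o.dot(o) - distanceFromOrigin * distanceFromOrigin;
        double disc = b * b - 4.0 * c;
        if (disc < 0.0)
            return false;
        // Larger root: when the camera is inside the sphere this is the single
        // intersection in front of it; otherwise choose the root that fits your setup.
        double s = (-b + std::sqrt(disc)) / 2.0;
        worldPoint = o + s * d;
        return true;
    }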

Thanks Micka for your comments.

artoon