
I am working on estimating the pose of an object with apriltags attached to it.

I have initially done this successfully for an apriltag board.

The 3D points were found using half the tag size (tag_size/2), as shown in the code:

import numpy as np

# Tag corners in the tag's own frame, offset by half the tag size; Z = 0 on the tag plane
ob_pt1 = [-tag_size/2, -tag_size/2, 0.0]
ob_pt2 = [ tag_size/2, -tag_size/2, 0.0]
ob_pt3 = [ tag_size/2,  tag_size/2, 0.0]
ob_pt4 = [-tag_size/2,  tag_size/2, 0.0]
ob_pts = ob_pt1 + ob_pt2 + ob_pt3 + ob_pt4   # concatenate into one flat list
object_pts = np.array(ob_pts).reshape(4, 3)

Now I have to estimate the pose of an object with apriltags attached to it. I have the known initial poses (rotation and translation vectors) of the apriltags stuck on the object, relative to each other.

I have used cv2.Rodrigues() on the rotation vectors to get the rotation matrices.

I know I also have to combine the translation vector with the rotation matrix, giving the 3x4 [R|t] matrix that is the pose. And I know that Z is always 0 in each tag's own plane, but I feel lost on how to go about using this for the 3D points.

My question is:

  1. How do I use these known extrinsics to get the 3D points for this object? Do I solve for X and Y and just apply Z as 0? How can I go about doing this?

Any help would be greatly appreciated!

Zoe
    If you detect the apriltag, you get the 6dof pose of apriltag in camera frame. If you know the extrinsic parameters of camera you can convert pose in camera frame to world frame. What is your exact question? – nayab Jun 11 '21 at 07:40
  • @nayab Thank you so much for your response! I know the extrinsic parameters for the object itself, so I have the rotation and translation vectors of each of the apriltags in relation to the other apriltags on the object. I'm very new to OpenCV and I'm not sure how to go about using those extrinsic parameters to get the object points. This would be before passing them to solvePnP() to get the pose. Do I also convert the pose like you said for this instance as well? Also, please bear with me, when you say to convert the pose, what exactly do you mean? – Zoe Jun 11 '21 at 08:36
  • You need to detect the centers of the apriltags in the image and use the coordinates of the apriltag centers as image points in solvePnP(). Make sure that they match the order of the 3D object points; you can use the apriltag IDs. The output from solvePnP() is the pose of the object center. – nayab Jun 11 '21 at 14:39
  • @nayab Thank you so much! I got through with getting the pose of the apriltags! Is there a way to get the pose of the object with multiple apriltags attached to it instead of the individual apriltags? Right now, I call solvePnP() on the tags it detects, then, taking the pose from that, I pass it to projectPoints() along with the object points I find for all apriltags (in order to display the entire object onto the openCV window). – Zoe Jun 12 '21 at 13:35
  • @nayab So sorry for the long comment, this is the rest of it-> This results in it correctly showing the pose of the apriltags it detects as expected, and in the form of the object, but what I would like it to do, is to show the form of the object overlaid onto the image, and the pose of that object. – Zoe Jun 12 '21 at 13:37
  • @nayab It's working great now, I just needed to re-calibrate my camera and get the proper distortion values to pass to solvePnP() and projectPoints() :D The overlay drawing is still a little wonky, as it changes when a different apriltag is detected and changes the pose, but it's so much better now :D – Zoe Jun 12 '21 at 14:32
  • It's good that you are able to find the object pose. You can find the 3D pose of an apriltag directly. – nayab Jun 12 '21 at 20:49
  • @nayab Thank you!!! :D When you say directly, do you mean just passing the image and object points to solvePnP() and the returned rvecs and tvecs to projectPoints() based on each apriltag detection? Also, do you know if there is a way to get the pose velocity? I am currently finding the difference between the previous and current frames' poses for the translation vector to get the translation velocity, but I'm not sure how to get it for the rotation matrices. – Zoe Jun 16 '21 at 15:47
  • @nayab I am thinking that maybe I would have to get the angle between the two rotation matrices, and use that to get the angular velocity? – Zoe Jun 16 '21 at 15:54
  • Get tag pose directly: I mean you get the 3D pose of the tag in an image directly if your camera is calibrated. Rotation velocity can also be calculated the same way as the linear velocity. – nayab Jun 16 '21 at 16:42
  • @nayab Ah yes, that's what I'm doing (in relation to getting the tag pose directly). And oh! Is it really as simple as subtracting the rotation vectors for the current and previous frames obtained from solvePnP()? (Just making sure! I was going to obtain the angle theta from the rotation matrices obtained from Rodrigues(), and get the difference as the rate of change in terms of the angle.) Thank you so much for all your help! I truly appreciate it! – Zoe Jun 16 '21 at 17:03

0 Answers