
I am currently getting the pose of an AprilTag via solvePnP() and projecting its points with projectPoints().

This runs on a video stream, so to optimise solvePnP() I am taking the pose of the object from the previous frame and passing it to solvePnP() as the initial guess for the current frame.

Here is the code:

# This function takes the image frame and the previous rotation and translation vectors as params: img, prvecs, ptvecs

# If 1 or more apriltags are detected
if num_detections > 0:
    # If the camera was calibrated and the matrix is supplied
    if mtx is not None:
        # Image points are the corners of the apriltag
        imagePoints = detection_results[0].corners.reshape(1, 4, 2)
        
        # objectPoints are obtained within another function

        # If no previous pose is available, call solvePnP() without an extrinsic guess
        if prvecs is None or ptvecs is None:
            success, prvecs, ptvecs = cv2.solvePnP(objectPoints, imagePoints, mtx, dist, flags=cv2.SOLVEPNP_ITERATIVE)
        # Otherwise call solvePnP() with the previous rvecs and tvecs as the guess
        else:
            print("Got prvecs and tvecs")
            success, prvecs, ptvecs = cv2.solvePnP(objectPoints, imagePoints, mtx, dist, prvecs, ptvecs, True, flags=cv2.SOLVEPNP_ITERATIVE)

        # If the pose is obtained successfully, then project the 3D points
        if success:
            imgpts, jac = cv2.projectPoints(opointsArr, prvecs, ptvecs, mtx, dist)
      
            # Draw the 3D points onto image plane
            draw_contours(img, dimg, imgpts)

Within the video streaming function:

# Create a cv2 window to show images
window = 'Camera'
cv2.namedWindow(window)

# Open the first camera to get the video stream and the first frame
cap = cv2.VideoCapture(0)
success, frame = cap.read()

if success and dist is not None:
    frame = undistort_frame(frame)

prvecs = None
ptvecs = None
# Obtain previous translation and rotation vectors (pose)
img, dimg, prvecs, ptvecs = apriltag_real_time_pose_estimation(frame, prvecs, ptvecs)

while True:

    success, frame = cap.read()
    if not success:
        break

    if dist is not None:
        frame = undistort_frame(frame)
    
    # Keep on passing the pose obtained from the previous frame
    img, dimg, prvecs, ptvecs = apriltag_real_time_pose_estimation(frame, prvecs, ptvecs)

I would now like to get the velocity and acceleration of the pose and pass those into solvePnP() as well.

For pose velocity, I know that I would just have to subtract the previous translation vector from the current one, but I am not sure how to go about it for the rotation matrix (obtained via Rodrigues()), nor how to obtain the acceleration. I am thinking that maybe I would have to get the angle between the two rotation matrices, and that difference would give the change and thus the angular velocity? Or is it as simple as finding the difference between the rotation vectors from solvePnP()?
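To make this concrete, here is a rough sketch of the velocity computation I have in mind (pose_velocity is a hypothetical helper; dt is the frame interval, which I could measure with time.time() between iterations; rvecs/tvecs are the solvePnP() outputs for the current frame, prvecs/ptvecs for the previous one):

import numpy as np
import cv2

def pose_velocity(prvecs, ptvecs, rvecs, tvecs, dt):
    # Linear velocity: straight difference of the translation vectors over dt
    lin_vel = (np.asarray(tvecs) - np.asarray(ptvecs)) / dt

    # The difference of two rotation vectors is not the relative rotation,
    # so compose the rotation matrices instead: R_rel takes the previous
    # orientation to the current one
    R_prev, _ = cv2.Rodrigues(prvecs)
    R_curr, _ = cv2.Rodrigues(rvecs)
    R_rel = R_curr @ R_prev.T

    # Back to axis-angle: the direction is the rotation axis and the norm
    # is the angle in radians, so dividing by dt gives rad/s
    rvec_rel, _ = cv2.Rodrigues(R_rel)
    ang_vel = rvec_rel / dt

    return lin_vel, ang_vel

If I understand correctly, the angle between the two rotation matrices would then just be np.linalg.norm(rvec_rel).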

My questions are:

  1. How do I obtain the velocity between the poses in terms of the rotation matrix, and is the method of just subtracting the previous translation vector from the current one correct in terms of getting translation velocity?
  2. How do I get the acceleration for the translation vectors and rotation matrices?
  3. Is the method I am using to obtain the previous pose the best way?
  4. For the acceleration, since it is just the change in velocity, would it be wise to track the velocity obtained between frames and take the difference to get the acceleration (see the sketch after this list)?
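For question 4, this is the finite-difference scheme I am considering (again only a sketch; pose_acceleration and the prev_* velocity state are hypothetical, building on the pose_velocity helper above):

def pose_acceleration(lin_vel, ang_vel, prev_lin_vel, prev_ang_vel, dt):
    # Acceleration needs two velocity samples, i.e. three consecutive poses
    if prev_lin_vel is None or prev_ang_vel is None:
        return None, None
    lin_acc = (lin_vel - prev_lin_vel) / dt
    ang_acc = (ang_vel - prev_ang_vel) / dt
    return lin_acc, ang_acc

# Inside the frame loop, after computing lin_vel and ang_vel:
# lin_acc, ang_acc = pose_acceleration(lin_vel, ang_vel, prev_lin_vel, prev_ang_vel, dt)
# prev_lin_vel, prev_ang_vel = lin_vel, ang_vel

The predicted pose for the next frame would then be something like ptvecs + lin_vel * dt for translation and prvecs + ang_vel * dt for rotation, fed to solvePnP() as the extrinsic guess (I realise adding rotation vectors like that is only a small-angle approximation).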

Any help would be greatly appreciated!

Zoe
