
I'm having some trouble understanding how to use aruco.estimatePoseSingleMarkers. I have an ArUco tag modeled in a Gazebo sim, and a camera mounted on a drone that is flying toward the tag. The tag is detected and I estimate the pose of the marker. My assumption was that tvecs returns the X, Y, Z of the marker relative to my camera. From that I convert between coordinate frames and use the result to command my drone. However, I found that the drone, driven by this error, moves itself so that the ArUco tag ends up at (0, 0) of the image rather than directly below the camera.

Am I confusing the functionality of the estimatePoseSingleMarkers function? What would I need to do to make this work?

Here is a snippet of the code I'm using. There's a bit of code to take the camera coordinates and convert them into a world-coordinate offset from the drone based on the drone's yaw, pitch, and roll.

    (corners, ids, rejected) = cv2.aruco.detectMarkers(img, self.alg.ARUCO_DICT, parameters=self.alg.ARUCO_PARAMS)

    if ids is not None:
        for id in ids:
            # Set the marker length for the detected ID.
            if id[0] == 20:
                aruco_len = 0.25
                # Rotation and translation vectors of the marker relative to
                # the camera, I believe. Can use this to control the quad.
                rvecs, tvecs = cv2.aruco.estimatePoseSingleMarkers(corners[0], aruco_len, self.alg.camera_matrix, self.alg.camera_dist)

                print(tvecs)

                # Reorder into [y, x, z, 1] homogeneous form for cam2Local.
                cam_pts = np.array([[-tvecs[0][0][1]],
                                    [ tvecs[0][0][0]],
                                    [ tvecs[0][0][2]],
                                    [1]])

                print(cam_pts)

                error = self.cam2Local(cam_pts)

    def cam2Local(self, cam_pts):
        # Build the rotation matrix of the quadcopter and apply it to the
        # camera points (in [y, x, z, 1] homogeneous form); rot_mat @ cam_pts
        # gives the ArUco position in world coordinates.
        roll  = self.roll
        pitch = self.pitch
        yaw   = self.yaw

        cr, sr = math.cos(roll),  math.sin(roll)
        cp, sp = math.cos(pitch), math.sin(pitch)
        cy, sy = math.cos(yaw),   math.sin(yaw)

        rot_mat = np.array([[cr*cp, cr*sp*sy - sr*cy, cr*sp*cy + sr*sy, 0],
                            [sr*cp, sr*sp*sy + cr*cy, sr*sp*cy - cr*sy, 0],
                            [  -sp,            cp*sy,            cp*cy, 0],
                            [    0,                0,                0, 1]])

        Local_pts = np.matmul(rot_mat, cam_pts)

        return Local_pts
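As an aside, a big hand-written Euler matrix like the one above is hard to audit. An alternative is to write utility functions that each rotate about one axis and compose them by matrix multiplication. A minimal numpy sketch, assuming a ZYX (yaw about z, then pitch about y, then roll about x) convention; the function names are illustrative, and the composition order should be adjusted to match whatever convention your sim actually uses:

```python
import numpy as np

def rot_x(a):
    """Rotation about the x axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

def rot_y(a):
    """Rotation about the y axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])

def rot_z(a):
    """Rotation about the z axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[  c,  -s, 0.0],
                     [  s,   c, 0.0],
                     [0.0, 0.0, 1.0]])

def body_to_world(roll, pitch, yaw):
    """Compose single-axis rotations in ZYX order."""
    return rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)
```

Each helper is trivially checkable on its own (e.g. rot_z(pi/2) should map the x axis onto the y axis), and the composed matrix can be embedded in the top-left 3x3 block of a 4x4 homogeneous transform if homogeneous points are needed.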
  • You understand `tvec` correctly... I hope. The coordinate system is right-handed, and the camera's x is right, y down, z far. On the marker, x is right, y far, z up, with the origin at the center. I can't comment on that code, except to say that I find it awful, especially those matrices containing impossible-to-check expressions. Take it apart: write utility functions that do _ONE_ thing, such as rotating around _one_ axis, then compose transformations (multiply matrices). In fact, just use `cv.Rodrigues` and some numpy to convert rvec and tvec into a 4x4 pose matrix. – Christoph Rackwitz Nov 21 '21 at 21:08
  • So I'm not confusing anything? If my ArUco marker is centered in the middle of my camera frame but 2 m away, tvecs should output [0, 0, 2], correct? What confuses me is that I would expect tvecs to be roughly [0, 0, 2], since the camera is 2 m directly below the marker, but I get something like [0.5, 1, 2]. I'm not as concerned with the orientation as with the XYZ location relative to the camera center. As for the rotation matrix, I have MATLAB scripts that I ran with the same inputs to verify it, but yeah, I should split it up a bit. Thanks. – Wilson Lysford Nov 22 '21 at 00:43
  • Yes, [0, 0, 2], assuming you give ArUco your marker size in meters (if millimeters, then 2000). If you get an offset, the marker is probably not centered in the picture, or your camera calibration is bad. Override cx, cy with (width-1)/2 and (height-1)/2. – Christoph Rackwitz Nov 22 '21 at 00:59
  • Thank you sir, I solved my issue. It turns out that in my camera settings for the sim the width and height were 320x240, while Cx, Cy were 320.5 and 240.5 respectively. After replacing Cx and Cy with (W-1)/2 and (H-1)/2 I now get my expected values. I appreciate you taking the time to try and help and pointing me in the right direction, as I had not thought to look there. Much appreciated @ChristophRackwitz – Wilson Lysford Nov 23 '21 at 00:06
  • please remember that experience for next time. there was no indication that the camera matrix would be an issue. I had to make a wild guess because there was no information to *spot* an issue in. you should not keep things a secret. present *everything*, concretely, without prompting. no prose, prose isn't debuggable. most people have some compulsion to remain vague and abstract. that's bad. I am unhappy because this is a constant issue with almost everyone and I keep getting proven right. – Christoph Rackwitz Nov 23 '21 at 09:59
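The principal-point override discussed in the comments can be sketched as follows. The focal lengths here are hypothetical placeholders; the point of the exercise is forcing cx, cy onto the exact image center:

```python
import numpy as np

width, height = 320, 240  # image size from the sim's camera settings
fx = fy = 277.0           # hypothetical focal lengths in pixels

# Force the principal point onto the exact image center,
# i.e. (width-1)/2 and (height-1)/2.
cx = (width - 1) / 2.0    # 159.5
cy = (height - 1) / 2.0   # 119.5

camera_matrix = np.array([[fx, 0.0, cx],
                          [0.0, fy, cy],
                          [0.0, 0.0, 1.0]])
```

This `camera_matrix` would then be passed to `cv2.aruco.estimatePoseSingleMarkers` in place of a calibration whose principal point is off-center.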

0 Answers