
I am new here and very thankful to be part of this awesome community. I am currently working on an object detection and planar localization project with the 6-DOF robot UR10e. I have already detected the object using a Mask R-CNN approach, extracted the segmented part, and obtained all the image features I need using OpenCV. From these I calculated the object center (x, y) and its angle with respect to the z-axis. An RGB-D camera (Azure Kinect from Microsoft) will be mounted at the robot TCP for detection and tracking. The Azure Kinect already has a very useful ROS driver that publishes the calibration parameters via ROS topics: https://github.com/microsoft/Azure_Kinect_ROS_Driver/blob/melodic/docs/usage.md

Here is an image frame of the object:

My question is: how can I transform the center coordinates (x, y) and the object orientation (angle) from the image frame (see the picture above) into the Azure Kinect camera coordinates so the robot can pick the object? I assume the height between the object and the camera is known.
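Not an answer from the camera vendor, just a sketch of the standard approach: with known depth you can back-project the pixel through the pinhole model, X = (u - cx) * Z / fx and Y = (v - cy) * Z / fy, and the in-plane angle from the image becomes a rotation about the camera's optical (z) axis. The intrinsics below are hypothetical placeholders; the real fx, fy, cx, cy come from the driver's camera_info topic (entries K[0], K[4], K[2], K[5] of the `sensor_msgs/CameraInfo` message).

```python
import numpy as np

def pixel_to_camera(u, v, z, fx, fy, cx, cy):
    """Back-project pixel (u, v) at known depth z (metres) into the
    camera frame using the pinhole model:
    X = (u - cx) * z / fx,  Y = (v - cy) * z / fy,  Z = z."""
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def angle_to_rotation(theta):
    """The in-plane object angle theta (radians, measured in the image)
    maps to a rotation about the camera z-axis, since the optical axis
    is perpendicular to the image plane."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Hypothetical intrinsics -- replace with the values published on the
# Azure Kinect driver's camera_info topic.
fx, fy, cx, cy = 1000.0, 1000.0, 640.0, 360.0

# Object centre (pixels) and angle from the segmentation; z is the
# known camera-to-object height in metres.
p_cam = pixel_to_camera(740.0, 560.0, 0.5, fx, fy, cx, cy)
R_cam = angle_to_rotation(np.deg2rad(30.0))
print(p_cam)  # 3D point in the camera frame, metres
```

Note this only gets you into the *camera* frame; to command the UR10e you still need the camera-to-TCP (hand-eye) transform, e.g. from a hand-eye calibration, applied on top of `p_cam` and `R_cam`.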

Amin
