
I am trying to align a set of points forming a box to an OpenCV 9x6 calibration chessboard with an Intel RealSense D435 depth camera. I'm trying to modify one of the examples of the Python SDK to do so, which already includes a Kabsch algorithm transformation function. The example is the one that adds bounding boxes to objects.

My problem is that whenever I align my points (which are calculated by taking an existing corner of the chessboard as a starting point and adding the length, width and height of the box) using this function, there is always an offset from where the box should be.
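
To make the setup clearer, the idea is that the box corners are expressed in the chessboard coordinate system before the transformation is applied. A minimal sketch of that idea (the dimensions and corner layout below are placeholders, not my exact code):

import numpy as np

# Sketch: the eight corners of the box in the chessboard coordinate system,
# starting from one known corner of the board. length, width, height are the
# box dimensions in meters (placeholder values).
length, width, height = 0.20, 0.15, 0.10

corners_board = np.array([
    [0.0,   0.0,    0.0],
    [width, 0.0,    0.0],
    [width, length, 0.0],
    [0.0,   length, 0.0],
    [0.0,   0.0,    -height],
    [width, 0.0,    -height],
    [width, length, -height],
    [0.0,   length, -height],
])  # shape (8, 3), one row per corner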

My code to transform the coordinates of my points is basically:

# Compute the corner position (starting corner plus box width) and transform it back
# into camera coordinates; apply_transformation() is the built-in transformation
# function of the example
point_b = transformation_devices[device].inverse().apply_transformation(np.asarray([[x + width], [y], [z]]))

# Return to 2D pixel coordinates so the box can be displayed with OpenCV
b_x, b_y = rs.rs2_project_point_to_pixel(intrinsics, point_b.flatten())
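
For completeness, the drawing step afterwards looks roughly like this (again a sketch rather than my exact code; corner_pixels is assumed to already hold the eight projected pixel coordinates and color_image the frame from the D435):

import cv2
import numpy as np

pts = np.array(corner_pixels, dtype=np.int32)  # eight projected (x, y) corners

# Bottom face, top face, then the four vertical edges, all in purple
cv2.polylines(color_image, [pts[:4]], isClosed=True, color=(255, 0, 255), thickness=2)
cv2.polylines(color_image, [pts[4:]], isClosed=True, color=(255, 0, 255), thickness=2)
for i in range(4):
    p_low = (int(pts[i][0]), int(pts[i][1]))
    p_high = (int(pts[i + 4][0]), int(pts[i + 4][1]))
    cv2.line(color_image, p_low, p_high, (255, 0, 255), 2)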

Do you see any blatantly obvious error that I missed in my method? Is there a better way to align 3D points to a chessboard?
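
One alternative I have been considering, sketched below but not something I have verified, is to skip the Kabsch transformation entirely: estimate the board pose with OpenCV's solvePnP and project the box corners with projectPoints. The camera matrix and distortion coefficients are built from the RealSense color intrinsics, and corners_board would be the same board-frame corners as in the sketch above:

import cv2
import numpy as np

pattern_size = (9, 6)   # inner corners of the calibration chessboard
square_size = 0.025     # placeholder square size in meters

# 3D positions of the chessboard corners in the board coordinate system
obj_points = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
obj_points[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

# Camera matrix and distortion coefficients from the RealSense color intrinsics
camera_matrix = np.array([[intrinsics.fx, 0, intrinsics.ppx],
                          [0, intrinsics.fy, intrinsics.ppy],
                          [0, 0, 1]], dtype=np.float32)
dist_coeffs = np.asarray(intrinsics.coeffs, dtype=np.float32)

gray = cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY)
found, img_points = cv2.findChessboardCorners(gray, pattern_size)
if found:
    # Pose of the chessboard relative to the camera
    ok, rvec, tvec = cv2.solvePnP(obj_points, img_points, camera_matrix, dist_coeffs)
    # Project the box corners (defined in board coordinates) into the image
    box_2d, _ = cv2.projectPoints(corners_board.astype(np.float32), rvec, tvec, camera_matrix, dist_coeffs)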

Edit: What I'm trying to achieve is to align the purple box in the image to the chessboard, a bit like the green one.

Image

Ozuhan
  • not exactly sure what you are doing, but lens distortion could be the reason why the computed positions differ from the real position – Micka Dec 10 '18 at 08:16
  • What I'm trying to do is just display a box aligned to a chessboard; the problem here is aligning the box. I think this might not be the problem, since the example I'm basing my code on can quite accurately determine the position and size of an object. But I'll check it anyway just in case. Thanks for the tip. – Ozuhan Dec 10 '18 at 08:23

0 Answers