The main goal is to display a grasp received from a UR10 robot in a point cloud using Open3D. Open3D is used to render the object as a point cloud, and we want to draw a grasp skeleton around it (the idea of the grasp skeleton was taken from the S4G repository). Currently, Open3D shows the point cloud of the object correctly, but the grasp is drawn at the wrong orientation.
The UR10 reports the following pose:
position= [0.015238827715672282, 0.026318999049784485, 0.231598307617698]
quats = [0.7125063395828215, -0.678258641135711, -0.17649390437631773, 0.03390919487441209]
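One thing worth checking before the conversion: quaternion element order differs between libraries. `transforms3d.quaternions.quat2mat` expects `(w, x, y, z)`, while ROS-style poses (as many UR drivers publish) use `(x, y, z, w)`. A minimal numpy-only sketch to compare the two interpretations of `quats` (the `quat_to_mat` helper here is hand-rolled for illustration, not part of the code below):

```python
import numpy as np

def quat_to_mat(w, x, y, z):
    """Rotation matrix from a quaternion given in (w, x, y, z) order."""
    n = np.sqrt(w*w + x*x + y*y + z*z)
    w, x, y, z = w/n, x/n, y/n, z/n
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

quats = [0.7125063395828215, -0.678258641135711,
         -0.17649390437631773, 0.03390919487441209]

# Interpretation A: quats is already (w, x, y, z) — transforms3d convention.
R_wxyz = quat_to_mat(*quats)

# Interpretation B: quats is (x, y, z, w) — ROS geometry_msgs convention.
x, y, z, w = quats
R_xyzw = quat_to_mat(w, x, y, z)

# The two interpretations give different rotations; only one matches
# the robot's actual pose.
print(np.allclose(R_wxyz, R_xyzw))  # False
```

If the robot's driver publishes `(x, y, z, w)`, passing `quats` straight into `quat2mat` would produce exactly the kind of consistently wrong gripper orientation described above.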
So far, we have tried to convert 'quats' into an orientation matrix with orientation = quat2mat(quats). We then load the point cloud and the gripper skeleton, but the gripper always ends up in the wrong place; there seems to be something wrong in the rotation between the frames. The rest of the code is as follows:
import copy

import numpy as np
import open3d as o3d
from transforms3d.quaternions import quat2mat

# Pose reported by the UR10 (values listed above)
position = [0.015238827715672282, 0.026318999049784485, 0.231598307617698]
quats = [0.7125063395828215, -0.678258641135711, -0.17649390437631773, 0.03390919487441209]
orientation = quat2mat(quats)

pc = o3d.io.read_point_cloud("/home/max_j/Pictures/full_pcd/obj1.pcd")
vis = o3d.visualization.VisualizerWithEditing()
vis.create_window()
vis.add_geometry(pc)
vis.run()
vis.destroy_window()
vis_list = [pc]
# Gripper Configuration
BACK_COLLISION_MARGIN = 0.0 # points that collide with back hand within this range will not be detected
HALF_BOTTOM_WIDTH = 0.10
BOTTOM_LENGTH = 0.0025
FINGER_WIDTH = 0.0003
HALF_HAND_THICKNESS = 0.0003
FINGER_LENGTH = 0.20
HAND_LENGTH = BOTTOM_LENGTH + FINGER_LENGTH
HALF_BOTTOM_SPACE = HALF_BOTTOM_WIDTH - FINGER_WIDTH
create_box = o3d.geometry.TriangleMesh.create_box
back_hand = create_box(height=2 * HALF_BOTTOM_WIDTH,
                       depth=HALF_HAND_THICKNESS * 2,
                       width=BOTTOM_LENGTH - BACK_COLLISION_MARGIN)
temp_trans = np.eye(4)
temp_trans[0, 3] = -BOTTOM_LENGTH
temp_trans[1, 3] = -HALF_BOTTOM_WIDTH
temp_trans[2, 3] = -HALF_HAND_THICKNESS
back_hand.transform(temp_trans)
finger = create_box((FINGER_LENGTH + BACK_COLLISION_MARGIN),
                    FINGER_WIDTH,
                    HALF_HAND_THICKNESS * 2)
color = (0.1, 0.6, 0.3)
finger.paint_uniform_color(color)
back_hand.paint_uniform_color(color)
left_finger = copy.deepcopy(finger)
temp_trans = np.eye(4)
temp_trans[1, 3] = HALF_BOTTOM_SPACE
temp_trans[2, 3] = -HALF_HAND_THICKNESS
temp_trans[0, 3] = -BACK_COLLISION_MARGIN
left_finger.transform(temp_trans)
temp_trans[1, 3] = -HALF_BOTTOM_WIDTH  # reuse temp_trans, flip the y offset for the right finger
finger.transform(temp_trans)
hmt = np.eye(4) #if we use R to make hmt
hmt[:3,:3] = orientation
hmt[:3,3] = position
T_global_to_local = hmt
print("the global to local before correction is: \n", T_global_to_local)
#T_local_to_global = np.linalg.inv(T_global_to_local)
coord_frame_hand = o3d.geometry.TriangleMesh.create_coordinate_frame(size=0.1)
coord_frame_hand = coord_frame_hand.transform(T_global_to_local)
# o3d.visualization.draw_geometries([coord_frame_hand_, coord_frame_hand_])
back_hand.transform(T_global_to_local)
finger.transform(T_global_to_local)
left_finger.transform(T_global_to_local)
vis_list_ = [back_hand, left_finger, finger, coord_frame_hand]
hand = vis_list_
o3d.visualization.draw_geometries(hand)
vis_list = vis_list + hand
o3d.visualization.draw_geometries(vis_list)
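The commented-out np.linalg.inv line hints at the other likely culprit: whether the reported pose should be applied as-is or inverted. A quick numpy-only sanity check of that idea, using a simple example rotation in place of 'orientation' and the (rounded) position from the question:

```python
import numpy as np

# Homogeneous transform built the same way as 'hmt' in the code above.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])          # example: 90-degree rotation about z
t = np.array([0.015, 0.026, 0.232])       # position from the question, rounded

T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = t

# If geometry.transform(T) places the gripper on the wrong side of the
# object, the pose may describe the opposite mapping (local-to-global
# instead of global-to-local); try the inverse instead:
T_inv = np.linalg.inv(T)

# Sanity check: composing T with its inverse recovers the identity.
print(np.allclose(T @ T_inv, np.eye(4)))  # True
```

Transforming the gripper meshes with `T_inv` instead of `T_global_to_local` (while leaving the point cloud fixed) is a cheap experiment to rule this direction-of-transform issue in or out.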