I'm trying to draw a 3D torso object using the landmark information extracted by Google's ML pose detection API.
Google's pose detection API recognizes a person's body in still images and video streams and extracts information about predefined landmarks in real time. The extracted landmark information consists of an (x, y) value in the pixel coordinate system of the input image and a z value predicted by the pose detector. I can't use these directly as 3D coordinates, because (x, y) and z are expressed in different coordinate systems.
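For context, this is roughly how I read the landmarks. I'm assuming ML Kit's Pose Detection API in Kotlin here, and `bitmap` is just a placeholder for whatever frame I'm processing:

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.pose.PoseDetection
import com.google.mlkit.vision.pose.defaults.PoseDetectorOptions

fun detect(bitmap: Bitmap) {
    // Single-image mode; STREAM_MODE would be used for a video stream.
    val options = PoseDetectorOptions.Builder()
        .setDetectorMode(PoseDetectorOptions.SINGLE_IMAGE_MODE)
        .build()
    val detector = PoseDetection.getClient(options)

    detector.process(InputImage.fromBitmap(bitmap, 0))
        .addOnSuccessListener { pose ->
            for (landmark in pose.allPoseLandmarks) {
                val p = landmark.position3D
                // p.x, p.y are pixel coordinates in the input image;
                // p.z is the depth estimate relative to the hip midpoint.
                println("${landmark.landmarkType}: x=${p.x}, y=${p.y}, z=${p.z}")
            }
        }
}
```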
So I'd like to understand the process of converting the (x, y) and z values into (x', y', z') coordinates that can be used in an ordinary 3D space.
FYI, in the extracted landmarks the z value is measured relative to the midpoint of the line connecting the left and right hips: landmarks closer to the camera than that midpoint have a negative z value, and landmarks farther from the camera have a positive z value.
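To illustrate that convention, here is a small hypothetical helper (not my actual code) that reads the hip and nose depths from a detected `Pose`:

```kotlin
import com.google.mlkit.vision.pose.Pose
import com.google.mlkit.vision.pose.PoseLandmark

// Hypothetical helper just to illustrate the sign convention of z.
fun describeDepth(pose: Pose) {
    val leftHipZ = pose.getPoseLandmark(PoseLandmark.LEFT_HIP)?.position3D?.z
    val rightHipZ = pose.getPoseLandmark(PoseLandmark.RIGHT_HIP)?.position3D?.z
    // The hip midpoint is the depth origin, so the two hip z values
    // average out to roughly zero.
    val hipMidZ = listOfNotNull(leftHipZ, rightHipZ).average()
    println("hip midpoint z ≈ $hipMidZ")

    val noseZ = pose.getPoseLandmark(PoseLandmark.NOSE)?.position3D?.z ?: return
    if (noseZ < 0) {
        println("nose z = $noseZ -> in front of the hips (toward the camera)")
    } else {
        println("nose z = $noseZ -> behind the hips (away from the camera)")
    }
}
```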
I have no idea how to approach this, so any help would be appreciated. Thanks.