My current process goes like this:
I used the k4arecorder tool that comes with the Azure Kinect SDK v1.4.1 to record two MKV videos.
In the first, the person holds a T-pose; in the second, they perform some motion.
I then used the offline_processor from Microsoft/Azure-Kinect-Samples/body-tracking-samples to convert these two videos into JSON objects. For each frame, the JSON contains the x, y, z position of every joint (where z is measured relative to the camera and y+ points downwards) as well as a quaternion orientation for each joint.
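A minimal sketch of this parsing step, for reference. The key names ("frames", "bodies", "joint_positions", "joint_orientations") are my reading of the offline_processor output and may differ between versions, and the file names are placeholders:

```python
import json

def load_body_frames(path):
    """Parse the offline_processor output into a list of per-frame dicts.
    Key names here are assumptions about the sample's JSON layout."""
    with open(path) as f:
        data = json.load(f)
    frames = []
    for frame in data["frames"]:
        if not frame["bodies"]:        # tracking can drop the body for a frame
            continue
        body = frame["bodies"][0]      # assume a single tracked person
        frames.append({
            "positions": body["joint_positions"],        # [x, y, z] per joint, mm, camera space
            "orientations": body["joint_orientations"],  # [w, x, y, z] quaternion per joint
        })
    return data, frames

tpose_data, tpose_frames = load_body_frames("tpose.json")
motion_data, motion_frames = load_body_frames("motion.json")
```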
From the T-pose JSON object, I extracted one frame of positions and rotations where the T-pose was perfect. I then parsed that frame into two pandas dataframes, one of positions and one of orientations, and converted the orientations into Euler angles.
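The quaternion-to-Euler step looks something like this (continuing from the sketch above, assuming the Kinect's (w, x, y, z) quaternion order and an intrinsic Z-Y-X Euler convention; whichever convention is used here has to match the channel order written into the BVH file later):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Pick the frame where the T-pose looked perfect (index 0 as a placeholder).
tpose = tpose_frames[0]

# Kinect quaternions are (w, x, y, z); SciPy expects scalar-last (x, y, z, w).
quat_wxyz = np.asarray(tpose["orientations"])     # shape (n_joints, 4)
quat_xyzw = quat_wxyz[:, [1, 2, 3, 0]]

# Intrinsic Z, then Y, then X rotations, in degrees.
euler_deg = R.from_quat(quat_xyzw).as_euler("ZYX", degrees=True)
```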
I parsed the second, 'motion' JSON object into two more pandas dataframes. In the positions dataframe, each row is a frame and the columns take the form
joint_1.x, joint_1.y, joint_1.z ... joint_n.x, joint_n.y, joint_n.z
In the orientations dataframe, each row is also a frame and the columns take the form
joint_1.z, joint_1.y, joint_1.x ... joint_n.z, joint_n.y, joint_n.x
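Building those two flattened dataframes from the parsed frames looks roughly like this (again continuing from the sketches above; "joint_names" as a top-level JSON key is an assumption about the layout):

```python
import numpy as np
import pandas as pd
from scipy.spatial.transform import Rotation as R

joint_names = motion_data["joint_names"]  # assumed top-level key listing the joints

# Positions: one row per frame, columns joint.x, joint.y, joint.z.
pos_rows = [np.asarray(f["positions"]).ravel() for f in motion_frames]
pos_cols = [f"{j}.{a}" for j in joint_names for a in ("x", "y", "z")]
df_pos = pd.DataFrame(pos_rows, columns=pos_cols)

# Orientations: same layout, but z, y, x columns to match the ZYX Euler order.
def to_euler_zyx(quats_wxyz):
    q = np.asarray(quats_wxyz)[:, [1, 2, 3, 0]]   # reorder to scalar-last
    return R.from_quat(q).as_euler("ZYX", degrees=True)

rot_rows = [to_euler_zyx(f["orientations"]).ravel() for f in motion_frames]
rot_cols = [f"{j}.{a}" for j in joint_names for a in ("z", "y", "x")]
df_rot = pd.DataFrame(rot_rows, columns=rot_cols)
```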
What I want to know is this:
How can I go from these four dataframes, where all of the coordinates are in global space, to a BVH file? I've tried a number of solutions, but all of them have failed.
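From my attempts so far, my understanding of the core conversion is: (1) write the joint hierarchy into the BVH header, (2) compute each joint's OFFSET from its parent using the T-pose frame, and (3) convert each frame's global rotations into parent-relative (local) rotations, since that is what BVH rotation channels store, plus a flip from the Kinect's y-down, z-forward camera frame to BVH's usual y-up frame, which I've omitted here. A sketch of steps 2 and 3 (the PARENTS table is truncated for brevity, and the quaternion handling again assumes (w, x, y, z) order):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Parent of each joint index in the Azure Kinect skeleton; truncated here
# (the full 32-joint hierarchy is in the body-tracking docs).
# Joint 0 (PELVIS) is the root.
PARENTS = {1: 0, 2: 1, 3: 2, 26: 3}   # SPINE_NAVEL, SPINE_CHEST, NECK, HEAD

def to_rot(q_wxyz):
    """Kinect gives (w, x, y, z); SciPy wants (x, y, z, w)."""
    w, x, y, z = q_wxyz
    return R.from_quat([x, y, z, w])

def offsets_from_tpose(tpose_positions):
    """BVH OFFSET per joint = T-pose position relative to its parent.
    Using raw global deltas assumes the rest pose is axis-aligned; strictly
    the delta should be rotated into the parent's rest frame."""
    return {c: np.asarray(tpose_positions[c]) - np.asarray(tpose_positions[p])
            for c, p in PARENTS.items()}

def local_rotations(frame_quats):
    """BVH rotation channels are parent-relative, so undo the parent's
    global rotation: q_local = q_parent^-1 * q_child."""
    local = {0: to_rot(frame_quats[0])}   # the root keeps its global rotation
    for child, parent in PARENTS.items():
        local[child] = to_rot(frame_quats[parent]).inv() * to_rot(frame_quats[child])
    return {j: r.as_euler("ZYX", degrees=True) for j, r in local.items()}
```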
Even so, I'm clearly missing some fundamental logic in this process, and if anybody can help, I would really appreciate it. Code solutions in any language are also appreciated.