I'm using the Kinect and the OpenNI library to track a user's hands.
As far as I can see, there are two ways to do this: either using the HandsGenerator and tracking each hand separately, or using the UserGenerator and asking for the hand positions with GetSkeletonJoint and XN_SKEL_LEFT_HAND/XN_SKEL_RIGHT_HAND.
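
For reference, this is roughly how I read the hand joints in the UserGenerator case (a simplified sketch, not my exact code; the function name and surrounding setup are just for illustration):

    #include <XnCppWrapper.h>
    #include <cstdio>

    // Simplified sketch of the UserGenerator path: read both hand joints of a
    // user whose skeleton is already calibrated and being tracked.
    // (Context setup, calibration callbacks and error checks are omitted.)
    void PrintHandJoints(xn::UserGenerator& userGen, XnUserID user)
    {
        if (!userGen.GetSkeletonCap().IsTracking(user))
            return;

        XnSkeletonJointTransformation leftHand, rightHand;
        userGen.GetSkeletonCap().GetSkeletonJoint(user, XN_SKEL_LEFT_HAND, leftHand);
        userGen.GetSkeletonCap().GetSkeletonJoint(user, XN_SKEL_RIGHT_HAND, rightHand);

        // Positions are real-world coordinates in millimeters; fConfidence is 0..1.
        const XnVector3D& l = leftHand.position.position;
        const XnVector3D& r = rightHand.position.position;
        printf("left  (%.0f, %.0f, %.0f) conf %.2f\n", l.X, l.Y, l.Z, leftHand.position.fConfidence);
        printf("right (%.0f, %.0f, %.0f) conf %.2f\n", r.X, r.Y, r.Z, rightHand.position.fConfidence);
    }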
For various reasons, it'd be much more convenient if I could just use the UserGenerator, but the coordinates it gives me for the two hands are extremely jittery, to the point of being unusable, even with a high smoothing value. In comparison, the coordinates given by the HandsGenerator are very precise and stable.
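
The smoothing I mentioned is applied like this in the two paths (the variable names and factor values are only examples, not my exact settings):

    // Skeleton smoothing for the UserGenerator path (factor is 0..1, higher = smoother);
    // even with a high factor the hand joints stay jittery for me.
    userGen.GetSkeletonCap().SetSmoothing(0.9f);

    // Smoothing on the HandsGenerator, for comparison.
    handsGen.SetSmoothing(0.1f);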
Why is the precision of the two methods so different, and is there anything I can do to improve the precision of the coordinates given by the UserGenerator method?