This library seems to get your job done:
https://github.com/yinguobing/head-pose-estimation
It seems to work in three steps:

- Face Detection
- Landmark Detection
- Pose Estimation

After getting the 68 facial landmarks, the pose can be calculated by a PnP (Perspective-n-Point) algorithm.
So, you should look through the codebase for this PnP solver, which takes the 68 landmarks as input.
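As a rough sketch of what each step hands to the next (shapes only; the detector, the landmark model, and every value below are placeholders, not the repo's code):

```python
import numpy as np

# Step 1: face detection produces a bounding box per face,
# e.g. (x, y, width, height). Placeholder value:
face_box = (120, 80, 200, 200)

# Step 2: landmark detection runs inside that box and produces
# 68 (x, y) image points. Placeholder array:
landmarks = np.zeros((68, 2), dtype=np.float64)

# Step 3: pose estimation consumes those 68 2D points together with
# 68 corresponding 3D model points and the camera matrix, and returns
# rotation and translation vectors (via a PnP solver).
model_points_68 = np.zeros((68, 3), dtype=np.float64)
```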
It looks like all the pose estimation code you will need is inside pose_estimator.py!
To briefly summarize the whole pose estimation file for you:

- The `PoseEstimator` class contains all the relevant functions.
- The `__init__` function establishes `self.model_points` as the 3D coordinates of the nose tip, chin, mouth, and so on. It's also handling the camera internals and the rotation and translation vectors. All of these are essential inputs to the PnP algorithm that does the pose estimation.
- `self._get_full_model_points()` reads the 68 landmark model points from a file and stores them in `self.model_points_68`.
- This class has the important function `solve_pose_by_68_points`, which does what you need. It expects your landmarks as input.
- The `solve_pose_by_68_points()` function is already used elsewhere in the repo, so you can see how it's called.
- The input to that function is a variable called `marks`.
You need to figure out a way to feed your own landmark points into this `solve_pose_by_68_points` function, using the `marks` variable as a guide. Or, just recreate the code from the pose_estimator.py file and the `solve_pose_by_68_points` function within it.
Side note: that function is simply calling `cv2.solvePnP`, OpenCV's implementation of the PnP algorithm, I'm guessing.