I have a fairly simple task: get the Euler angles of a planar triangle. The pattern looks like this
So, the algorithm:
1) get an image from the webcam -- done
2) convert to grayscale, filter, etc. -- done
3) get the mass centers of all connected components and filter them -- done. It looks like this; the red circles mark the mass centers of the triangle vertices.
The code is quite simple, but here it is:
QMap<int, QVector<double> > massCenters(const cv::Mat& image)
{
    // Label the connected components. Note that cv::Mat takes (rows, cols),
    // i.e. (IMAGE_HEIGHT, IMAGE_WIDTH).
    cv::Mat output(IMAGE_HEIGHT, IMAGE_WIDTH, CV_32S);
    cv::connectedComponents(image, output, 8);

    // For each label, accumulate the sum of x, the sum of y and the pixel count,
    // so the mass center is (vec[0] / vec[2], vec[1] / vec[2]).
    QMap<int, QVector<double> > result;
    for (int y = 0; y < IMAGE_HEIGHT; ++y)
    {
        for (int x = 0; x < IMAGE_WIDTH; ++x)
        {
            int label = output.at<int>(y, x);
            if (label) // label 0 is the background
            {
                QVector<double> vec = result.value(label, QVector<double>());
                if (vec.isEmpty())
                {
                    vec.resize(3);
                    vec.fill(0);
                }
                vec[0] += x;
                vec[1] += y;
                vec[2] += 1;
                result[label] = vec;
            }
        }
    }
    return result;
}
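(As an aside, cv::connectedComponentsWithStats can return the centroids directly, so the manual accumulation loop is not strictly needed. A minimal sketch; the minArea threshold used for filtering small noise blobs is just an assumption:)

#include <opencv2/imgproc.hpp>
#include <vector>

// Sketch: centroids via connectedComponentsWithStats instead of the manual loop.
std::vector<cv::Point2f> centroidsOf(const cv::Mat& image, int minArea = 20)
{
    cv::Mat labels, stats, centroids;
    int nLabels = cv::connectedComponentsWithStats(image, labels, stats, centroids, 8, CV_32S);

    std::vector<cv::Point2f> result;
    for (int i = 1; i < nLabels; ++i) // label 0 is the background
    {
        if (stats.at<int>(i, cv::CC_STAT_AREA) >= minArea)
        {
            result.emplace_back(static_cast<float>(centroids.at<double>(i, 0)),
                                static_cast<float>(centroids.at<double>(i, 1)));
        }
    }
    return result;
}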
4) then I call solvePnP to get the rotation & translation vectors
cv::solvePnP(m_origin, m_imagePoints, m_cameraMatrix, m_distMatrix, m_rvec, m_tvec);
// This code draws rvec & tvec on the screen: project a small axis triad
// (30 mm long, same mm units as m_origin) and draw its three arms.
std::vector<cv::Point3f> axis;
std::vector<cv::Point2f> axis2D;
axis.push_back(cv::Point3f(0.0f, 0.0f, 0.0f));   // origin
axis.push_back(cv::Point3f(30.0f, 0.0f, 0.0f));  // X axis
axis.push_back(cv::Point3f(0.0f, 30.0f, 0.0f));  // Y axis
axis.push_back(cv::Point3f(0.0f, 0.0f, 30.0f));  // Z axis
cv::projectPoints(axis, m_rvec, m_tvec, m_cameraMatrix, m_distMatrix, axis2D);
cv::line(m_orig, axis2D[0], axis2D[1], cv::Scalar(255, 0, 0), 2); // X in blue (BGR)
cv::line(m_orig, axis2D[0], axis2D[2], cv::Scalar(0, 255, 0), 2); // Y in green
cv::line(m_orig, axis2D[0], axis2D[3], cv::Scalar(0, 0, 255), 2); // Z in red
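Since the end goal is the Euler angles, here is a sketch of how I would extract them from rvec after solvePnP. The X-Y-Z (roll, pitch, yaw) convention is an assumption, and the matrix is converted to CV_64F in case m_rvec is stored as float:

// Sketch: convert the Rodrigues rotation vector to Euler angles (radians).
// Needs <cmath> and <opencv2/calib3d.hpp>.
cv::Mat R;
cv::Rodrigues(m_rvec, R);   // 3x3 rotation matrix
R.convertTo(R, CV_64F);     // make sure we can read it as double

double sy = std::sqrt(R.at<double>(0, 0) * R.at<double>(0, 0) +
                      R.at<double>(1, 0) * R.at<double>(1, 0));
double roll, pitch, yaw;
if (sy > 1e-6)
{
    roll  = std::atan2(R.at<double>(2, 1), R.at<double>(2, 2));
    pitch = std::atan2(-R.at<double>(2, 0), sy);
    yaw   = std::atan2(R.at<double>(1, 0), R.at<double>(0, 0));
}
else // gimbal lock
{
    roll  = std::atan2(-R.at<double>(1, 2), R.at<double>(1, 1));
    pitch = std::atan2(-R.at<double>(2, 0), sy);
    yaw   = 0.0;
}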
m_origin is declared as std::vector<cv::Point3f> m_origin;
and is filled with these values (measured in mm):
m_origin.push_back(cv::Point3f( 0.0f, 51.0f, 0.0f));
m_origin.push_back(cv::Point3f(-56.0f, -26.0f, 0.0f));
m_origin.push_back(cv::Point3f( 56.0f, -26.0f, 0.0f));
m_imagePoints is declared as std::vector<cv::Point2f> m_imagePoints;
and contains the pixel coordinates of the mass centers (the red circles in the second screenshot).
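For completeness, a minimal sketch of how the massCenters() sums could be turned into m_imagePoints. The name binaryImage is a placeholder for the filtered binary frame, and sorting the three centroids by their image y coordinate to match the order of m_origin is an assumption, not necessarily the right correspondence:

// Sketch: build m_imagePoints from the accumulated sums; needs <algorithm>.
const QMap<int, QVector<double> > sums = massCenters(binaryImage);

m_imagePoints.clear();
for (const QVector<double>& v : sums)
    m_imagePoints.push_back(cv::Point2f(static_cast<float>(v[0] / v[2]),
                                        static_cast<float>(v[1] / v[2])));

// Establish a fixed vertex order (here: topmost point first).
std::sort(m_imagePoints.begin(), m_imagePoints.end(),
          [](const cv::Point2f& a, const cv::Point2f& b) { return a.y < b.y; });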
And I get quite strange results: the pose jumps from this to that.
What I have tried that didn't help:
1) used both double and float types for m_cameraMatrix, m_distMatrix, m_rvec, m_tvec
2) rearranged the points in m_origin
3) played with solvePnPRansac and its input parameters
4) played with the PnP methods: iterative and EPnP
5) set useExtrinsicGuess=true -- it helps, but sometimes the solution gets "stuck" and returns completely wrong values (thousands of degrees in the rotation vector); see the sketch after this list
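A sketch of the call from item 5. With useExtrinsicGuess = true, solvePnP starts from whatever is already stored in m_rvec / m_tvec (e.g. the previous frame's pose); making the SOLVEPNP_ITERATIVE flag explicit here is just for clarity:

cv::solvePnP(m_origin, m_imagePoints, m_cameraMatrix, m_distMatrix,
             m_rvec, m_tvec,
             true,                     // useExtrinsicGuess: refine from previous pose
             cv::SOLVEPNP_ITERATIVE);  // default iterative refinement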
And I have several questions:
1) Does the order of the object (origin) and image points matter? As mentioned here, sometimes it does, but that was a year ago.
2) Can my task be solved in some other way than using solvePnP?
Thanks, I would appreciate any help!