I have the following code, which I use to detect and match keypoints between two images. I detect the keypoints and store their descriptors in these:
cv::Mat descCurrent;
cv::Mat descCurrentR;
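For context, these are filled in by a feature detector beforehand. A rough sketch of that step (assuming cv::ORB and placeholder image names imgLeft / imgRight; my actual detector setup may differ):

// Rough sketch of the detection step (assumes cv::ORB; imgLeft / imgRight
// are placeholder names for the two input images).
cv::Ptr<cv::ORB> orb = cv::ORB::create();
std::vector<cv::KeyPoint> keyPntsCurrent, keyPntsCurrentR;
orb->detectAndCompute(imgLeft,  cv::noArray(), keyPntsCurrent,  descCurrent);
orb->detectAndCompute(imgRight, cv::noArray(), keyPntsCurrentR, descCurrentR);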
Then pass them to this function:
std::vector<cv::DMatch> PointMatching::matchPointsOG2(cv::Mat descriptors1Cpu, cv::Mat descriptors2Cpu)
{
    // descriptors1GPU / descriptors2GPU are cv::cuda::GpuMat members of PointMatching
    descriptors1GPU.upload(descriptors1Cpu);
    descriptors2GPU.upload(descriptors2Cpu);

    // Match descriptors on the GPU with a brute-force Hamming matcher
    cv::Ptr<cv::cuda::DescriptorMatcher> matcher = cv::cuda::DescriptorMatcher::createBFMatcher(cv::NORM_HAMMING);

    std::vector<cv::DMatch> matches;
    std::vector<std::vector<cv::DMatch>> knn_matches;
    matcher->knnMatch(descriptors1GPU, descriptors2GPU, knn_matches, 2);

    // Filter the matches using Lowe's ratio test
    for (std::vector<std::vector<cv::DMatch>>::const_iterator it = knn_matches.begin(); it != knn_matches.end(); ++it)
    {
        if (it->size() > 1 && (*it)[0].distance / (*it)[1].distance < 0.8)
        {
            matches.push_back((*it)[0]);
        }
    }
    return matches;
}
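I call it like this (matchR is simply the name I use for the returned matches; pointMatcher stands in for my PointMatching instance):

// pointMatcher is my PointMatching instance (name shortened here for the example)
std::vector<cv::DMatch> matchR = pointMatcher.matchPointsOG2(descCurrent, descCurrentR);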
Then, once I have the matches, I do this:
std::vector<cv::KeyPoint> keyPntsGoodL;
std::vector<cv::KeyPoint> keyPntsGoodR;

// Keep only the keypoints that survived the ratio test
for (size_t i = 0; i < matchR.size(); i++)
{
    keyPntsGoodL.push_back(keyPntsCurrent[matchR[i].queryIdx]);
    keyPntsGoodR.push_back(keyPntsCurrentR[matchR[i].trainIdx]);
}
This is all working as expected: I have the keypoints that have been matched and filtered. My question is:
How can I filter the descriptors in the same way? That is, if I now want only the subset of the initial descriptors (descCurrent) that corresponds to keyPntsGoodL, how can I do this?
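In other words, I picture something like the following, building a new descriptor matrix row by row from the matched query indices (this is only a sketch of what I am after, with descGoodL as a placeholder name; I have not verified it is the right way):

cv::Mat descGoodL; // placeholder name for the filtered left descriptors
for (size_t i = 0; i < matchR.size(); i++)
{
    // queryIdx indexes into descCurrent / keyPntsCurrent
    descGoodL.push_back(descCurrent.row(matchR[i].queryIdx));
}

Is an approach like this correct, or is there a cleaner way?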
Thank you.