
I'm trying to build a simple Augmented Reality application using OpenCV 4.1.1 and Aruco. The goal is to overlay an image on top of a marker but have the image go beyond the edges of the marker.

I have calibrated my camera and obtained the camera matrix and distortion coefficients. Using OpenCV's warpPerspective I can draw an image on top of a marker, but I can only tie it to the marker's corners, so the overlay stays within the marker's border.

    std::vector<int> ids;
    std::vector<std::vector<Point2f>> corners;

    // detect markers
    aruco::detectMarkers(image, dictionary, corners, ids);

    if (ids.size() > 0) {
        // file with image to draw
        auto file = "square.png";
        // image to draw on the marker
        Mat im_src = imread(file);

        if (im_src.empty()) {
            std::cout << file << ": File not found" << std::endl;
            continue;
        }

        // flip(im_src, im_src, 1);

        // points of the corners of the source image
        std::vector<Point2f> pts_src;
        pts_src.push_back(Point2f(0, 0));
        pts_src.push_back(Point2f(im_src.cols - 1, 0));
        pts_src.push_back(Point2f(im_src.cols - 1, im_src.rows - 1));
        pts_src.push_back(Point2f(0, im_src.rows - 1));

        // use aruco marker
        for (int i = 0; i < ids.size(); i++) {
            if (ids[i] == 69) {
                aruco::drawDetectedMarkers(imageCopy, corners, ids);

                std::vector<Point> pts_dst;
                pts_dst.push_back(corners[i][0]);
                pts_dst.push_back(corners[i][1]);
                pts_dst.push_back(corners[i][2]);
                pts_dst.push_back(corners[i][3]);

                // homography from the source image corners to the marker corners
                Mat h = findHomography(pts_src, pts_dst);

                // warp the source image, black out the marker area, then
                // add the warped image on top
                Mat im_out;
                warpPerspective(im_src, im_out, h, imageCopy.size());
                fillConvexPoly(imageCopy, pts_dst, 0, 16);
                imageCopy = imageCopy + im_out;
            }
        }
    }

Here are images of what I have and what I want. I think I need to use 3D points to draw the image, but I'm not sure how to do that. Any help would be appreciated.

[Image 1: current result, overlay confined to the marker]

[Image 2: desired result, overlay extending past the marker]

momentuuum
  • Is the actual length of the marker available beforehand? – Kani Oct 18 '19 at 12:21
  • I’m not sure. When I calibrated my camera the marker size was set to the default size of 50. I haven’t used the marker size before because I didn’t think it was needed. But I believe I could get the actual length. – momentuuum Oct 18 '19 at 18:38

1 Answer


As you said in the comments, if the marker length is available, say `l0`, you can define the side length of the desired (larger) square as `l = l0 * 1.05`, or whatever scale you need.

    // Estimate the pose of every detected marker once, outside the loop
    // (l0 is the real marker side length used during calibration)
    std::vector<cv::Vec3d> rvecs, tvecs;
    cv::aruco::estimatePoseSingleMarkers(
        corners, l0, camera_matrix, dist_coeffs, rvecs, tvecs
    );

    for (int i = 0; i < ids.size(); i++) {
        if (ids[i] == 69) {
            aruco::drawDetectedMarkers(imageCopy, corners, ids);

            drawSquare(
                imageCopy, camera_matrix, dist_coeffs, rvecs[i], tvecs[i], l0
            );
        }
    }


    void drawSquare(
        cv::InputOutputArray image, cv::InputArray cameraMatrix,
        cv::InputArray distCoeffs, cv::InputArray rvec, cv::InputArray tvec,
        float l0
    )
    {
        float l = l0 * 1.05f;  // new square is 5% larger than the aruco marker
        float half_l = l / 2.0f;

        // Define the square in the marker's coordinate frame (3D points on
        // the marker plane, z = 0; rvec/tvec take them into the camera frame).
        std::vector<cv::Point3f> squarePoints;
        squarePoints.push_back(cv::Point3f(half_l, half_l, 0));
        squarePoints.push_back(cv::Point3f(half_l, -half_l, 0));
        squarePoints.push_back(cv::Point3f(-half_l, -half_l, 0));
        squarePoints.push_back(cv::Point3f(-half_l, half_l, 0));

        // Project the square onto the image.
        std::vector<cv::Point2f> imagePoints;
        cv::projectPoints(
            squarePoints, rvec, tvec, cameraMatrix, distCoeffs, imagePoints
        );

        // Draw the square on the image.
        cv::line(image, imagePoints[0], imagePoints[1], cv::Scalar(255, 0, 0), 3);
        cv::line(image, imagePoints[1], imagePoints[2], cv::Scalar(255, 0, 0), 3);
        cv::line(image, imagePoints[2], imagePoints[3], cv::Scalar(255, 0, 0), 3);
        cv::line(image, imagePoints[3], imagePoints[0], cv::Scalar(255, 0, 0), 3);
    }
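
To place the actual overlay instead of just an outline, the projected points can take the place of the marker corners in your original warpPerspective code. Here is a minimal, untested sketch, assuming `im_src`, `pts_src`, and `imageCopy` are set up as in the question and that the projected `imagePoints` have been made available (e.g. by returning them from `drawSquare` instead of drawing lines). Note that the corner order of `squarePoints` must correspond to the order of `pts_src` (top-left, top-right, bottom-right, bottom-left), so one of them may need reordering:

    // Use the projected (larger) square as the destination quad instead of
    // the raw marker corners.
    cv::Mat h = cv::findHomography(pts_src, imagePoints);

    cv::Mat im_out;
    cv::warpPerspective(im_src, im_out, h, imageCopy.size());

    // Black out the destination quad, then add the warped image on top,
    // just as in the marker-only version from the question.
    std::vector<cv::Point> poly(imagePoints.begin(), imagePoints.end());
    cv::fillConvexPoly(imageCopy, poly, cv::Scalar::all(0), cv::LINE_AA);
    imageCopy = imageCopy + im_out;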

I did not test this, but I used similar code for a different project. If you run into any issues, please let me know and I will update the code above.

Kani
  • `l0` (marker edge length) is already required by `estimatePoseSingleMarkers`, so it's not an actual precondition of _the additional step_ of calculating a larger square around the marker. In any case, the true marker length is immaterial to the result; it merely introduces a scale factor into `tvec` that has no bearing on the _projected_ points. – Christoph Rackwitz Aug 08 '22 at 10:48
  • 1
    OP was not using `estimatePoseSingleMarker`, so `l0` *is* a precondition to the method I suggested. Also, what you are saying may be true if you are just doing a single image. However, if you are processing multiple images/videos, the knowledge of `l0` avoids having to guess the scaling factor on each image. – Kani Aug 09 '22 at 13:52
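
For what it's worth, the scale-invariance point from the comments is easy to check: scaling the 3D points and `tvec` by the same factor leaves the projected pixels unchanged, because the projection divides out the common scale. A quick standalone sketch with made-up camera values (all numbers hypothetical):

    #include <opencv2/calib3d.hpp>
    #include <iostream>

    int main() {
        cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320, 0, 800, 240, 0, 0, 1);
        cv::Vec3d rvec(0.1, -0.2, 0.05), tvec(0.02, -0.01, 0.5);

        // A corner of a 5 cm marker, and the same corner if we pretend the
        // marker (and therefore tvec) is 10x larger.
        std::vector<cv::Point3f> pts       = { {0.025f, 0.025f, 0.0f} };
        std::vector<cv::Point3f> ptsScaled = { {0.25f, 0.25f, 0.0f} };

        std::vector<cv::Point2f> p1, p2;
        cv::projectPoints(pts, rvec, tvec, K, cv::noArray(), p1);
        cv::projectPoints(ptsScaled, rvec, 10.0 * tvec, K, cv::noArray(), p2);

        std::cout << p1[0] << " == " << p2[0] << std::endl;  // same pixel
        return 0;
    }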