
I am trying to find triangles (blue contours) and trapezoids (yellow contours) in real time. In general it works okay.

But there are some problems. First, false positives: triangles become trapezoids and vice versa, and I don't know how to solve this. Second, "noise": I tried to check the area of each figure, but the noise can have the same area as a real shape, so that did not help much. The noise depends on the thresholding parameters. cv::adaptiveThreshold does not help at all; it adds even more noise (and it is SLOW), and erode and dilate can't fix it in a proper way (see the attached screenshots).

And here is my code.

cv::Mat detect(cv::Mat imageRGB)
{
    //BGR -> GRAY (OpenCV images are BGR by default)
    cv::Mat imageGray;
    cv::cvtColor(imageRGB, imageGray, CV_BGR2GRAY);
    //Blurring it
    cv::Mat image;
    cv::GaussianBlur(imageGray, image, cv::Size(5,5), 2);
    //Thresholding
    cv::threshold(image, image, 100, 255, CV_THRESH_BINARY_INV);

    //SLOW and NOISY
    //cv::adaptiveThreshold(image, image, 255.0, CV_ADAPTIVE_THRESH_GAUSSIAN_C, CV_THRESH_BINARY, 21, 0);

    //Calculating Canny thresholds from the image mean and standard deviation.
    cv::Scalar mu;
    cv::Scalar sigma;
    cv::meanStdDev(image, mu, sigma);

    cv::Mat imageCanny;

    cv::Canny(image,
              imageCanny,
              mu.val[0] + sigma.val[0],
              mu.val[0] - sigma.val[0]);

    //Detecting contours.
    std::vector<std::vector<cv::Point> > contours;
    std::vector<cv::Vec4i> hierarchy;
    cv::findContours(imageCanny, contours, hierarchy,CV_RETR_TREE, CV_CHAIN_APPROX_NONE);

    //Hierarchy is not needed here so clear it.
    hierarchy.clear();

    for (std::size_t i = 0; i < contours.size(); i++)
    {
        //fitEllipse needs at least 5 points.
        if (contours.at(i).size() < 5)
        {
            continue;
        }
        //Skip small contours.
        if (std::fabs(cv::contourArea(contours.at(i))) < 800.0)
        {
            continue;
        }
        //Calculating the RotatedRect from the contour, NOT from the hull,
        //because fitEllipse needs at least 5 points.

        cv::RotatedRect bEllipse = cv::fitEllipse(contours.at(i));

        //Finds the convex hull of a point set.
        std::vector<cv::Point> hull;
        cv::convexHull(contours.at(i), hull, true);
        //Approximate it, so we get 3 points for triangles
        //and 4 points for trapezoids.
        cv::approxPolyDP(hull, hull, 15, true);
        //The contour must be convex.
        if (!cv::isContourConvex(hull))
        {
            continue;
        }
        //Triangle
        if (hull.size() == 3)
        {
            cv::drawContours(imageRGB, contours, i, cv::Scalar(255, 0, 0), 2);
            cv::circle(imageRGB, bEllipse.center, 3, cv::Scalar(0, 255, 0), 2);
        }
        //Trapezoid
        if (hull.size() == 4)
        {
            cv::drawContours(imageRGB, contours, i, cv::Scalar(0, 255, 255), 2);
            cv::circle(imageRGB, bEllipse.center, 3, cv::Scalar(0, 0, 255), 2);
        }
    }
    return imageRGB;
}

So... In general, all of these problems are caused by wrong thresholding parameters. How can I calculate them in a proper way (automatically, of course)? And how can I prevent the false positives (sorry for my English)?

  • Your last image shows only that the parameters for adaptive threshold are completely wrong. If you increase the value of the last parameter you will get better results. – Elmue Jan 10 '18 at 14:41
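A minimal sketch of what that comment might mean in code; the block size (31) and the constant C (10) are assumed starting values, not tested parameters:

    //Hypothetical adaptiveThreshold tuning: a larger block size and a
    //non-zero C (the "last parameter") suppress much of the speckle noise
    //that C = 0 lets through. Inverted to match the fixed threshold above.
    cv::Mat binary;
    cv::adaptiveThreshold(image, binary, 255.0,
                          CV_ADAPTIVE_THRESH_GAUSSIAN_C, CV_THRESH_BINARY_INV,
                          31,    //block size (assumed, must be odd)
                          10);   //C, subtracted from the local weighted mean (assumed)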

1 Answer


Thresholding - I think you should try Otsu binarization - here is some theory and a nice picture, and here is the documentation. This kind of thresholding generally tries to separate the two dominant intensity peaks in the image histogram and picks a threshold value between them.
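A minimal sketch of how Otsu could slot into the existing pipeline; when the CV_THRESH_OTSU flag is set, the explicit threshold value (0 here) is ignored:

    //Otsu picks the threshold automatically from the image histogram,
    //so the hard-coded 100 from the question is no longer needed.
    cv::Mat binary;
    cv::threshold(image, binary, 0, 255, CV_THRESH_BINARY_INV | CV_THRESH_OTSU);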

Alternatively, consider using the HSV color space; it might be easier to distinguish black and white regions from the rest of the image there. Another idea is to use the inRange function (in RGB or in HSV color space - it should work in both situations) - you need to find 2 ranges (one for the black regions and one for the white ones) and search only inside those regions (using the inRange function) - look at this post.
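A rough sketch of that idea; the concrete range values below are assumptions and would have to be tuned to the actual camera and lighting:

    //Hypothetical ranges: low-value pixels are treated as "black",
    //low-saturation/high-value pixels as "white".
    cv::Mat hsv, blackMask, whiteMask, mask;
    cv::cvtColor(imageRGB, hsv, CV_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(0, 0, 0),   cv::Scalar(180, 255, 60), blackMask);
    cv::inRange(hsv, cv::Scalar(0, 0, 200), cv::Scalar(180, 40, 255), whiteMask);
    cv::bitwise_or(blackMask, whiteMask, mask); //keep only black and white regions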

Another way to accomplish this task might be to use some library for blob extraction like this one, or the blob extractor which is part of OpenCV.
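If the built-in blob extractor means cv::SimpleBlobDetector, here is a minimal sketch using the OpenCV 2.x API (to match the constants in the question); the filter values are assumptions:

    //Reuses the 800-pixel area limit from the question; the convexity
    //filter (assumed value) helps reject ragged noise blobs.
    cv::SimpleBlobDetector::Params params;
    params.filterByArea = true;
    params.minArea = 800.0f;
    params.filterByConvexity = true;
    params.minConvexity = 0.9f;
    cv::SimpleBlobDetector detector(params);
    std::vector<cv::KeyPoint> keypoints;
    detector.detect(image, keypoints); //centers and sizes of detected blobs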

Distinguishing a triangle from a trapezoid - I see 2 basic ways to improve your solution here:

  • in the line cv::approxPolyDP(hull, hull, 15, true); make the third parameter (15 in this situation) not a constant value, but some fraction of the contour area or length. It should definitely adapt to the contour size; it can't just be a constant value. It's hard to say how to calculate it without some testing - try starting with 1-5% of the contour area or length (I would start with the length, but this is just my guess), see whether that value is fine/too big/too small, and check other values if needed (see the first sketch after this list). Unfortunately there is no other way, but finding this relation manually shouldn't take very long.
  • when you have 4 or 5 points, calculate the equations of the lines which join consecutive points (point 1 with point 2, point 2 with point 3, etc.; don't forget the line between the first and the last point), then check whether any 2 of those lines are parallel (or at least close to parallel - the angle between them is close to 0 degrees). If you find any parallel lines, the contour is a trapezoid, otherwise it's a triangle; the second sketch below shows one way to do this.
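A minimal sketch of the adaptive epsilon idea; the 2% factor is only an assumed starting point, not a tested value:

    //Scale the approximation tolerance with the contour length instead of
    //using a fixed 15 pixels; 0.02 (2% of the perimeter) is an assumption.
    double epsilon = 0.02 * cv::arcLength(hull, true);
    cv::approxPolyDP(hull, hull, epsilon, true);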
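And a rough sketch of the parallel-sides check; the 10-degree tolerance is an assumption that needs tuning:

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <algorithm>

    //Returns true if any two sides of the closed polygon are nearly parallel.
    //angleTolDeg is the allowed deviation from parallel (assumed: about 10).
    bool hasParallelSides(const std::vector<cv::Point>& poly, double angleTolDeg)
    {
        for (std::size_t i = 0; i < poly.size(); i++)
        {
            cv::Point a = poly[(i + 1) % poly.size()] - poly[i];
            for (std::size_t j = i + 1; j < poly.size(); j++)
            {
                cv::Point b = poly[(j + 1) % poly.size()] - poly[j];
                //Angle between the two side directions, folded into [0, 90] degrees.
                double angle = std::fabs(std::atan2(a.cross(b), (double)a.dot(b))) * 180.0 / CV_PI;
                angle = std::min(angle, 180.0 - angle);
                if (angle < angleTolDeg)
                {
                    return true; //an (almost) parallel pair exists -> trapezoid
                }
            }
        }
        return false;
    }

With the approximated hull this could be used as hull.size() == 4 && hasParallelSides(hull, 10.0) for the trapezoid branch.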
cyriel
  • Thank you very much for your advice. I've tried Otsu before, but there was still some noise. HSV is unacceptable here because the triangles can be white on blue, white on black, blue on white and so on (sorry, it's my mistake, I forgot to mention that). Also, I've already tried to find parallel lines, calculate angles (180 for triangles, 360 for trapezoids), and compare sides. There can still be noise with the same parameters (I don't know how that's possible :( ). So... can you give me some more information about how to calculate the parameter for approxPolyDP? (15 was just the best value.) And thank you again. –  Mar 21 '15 at 09:25
  • See the edited answer. "Also, I've already tried to find parallel lines, calculate angles (180 for triangles, 360 for trapezoids), and compare sides. There can still be noise with the same parameters (I don't know how that's possible :( )" - noise is something absolutely normal; you just need to learn how to find the boundary between noise and useful information :) Try to measure angles with some tolerance - for example, if an angle is in the range (165, 195) degrees, treat it as 180 degrees, etc. – cyriel Mar 21 '15 at 10:54
  • OTSU is definitely the wrong answer. OTSU takes an average of the ENTIRE image to calculate ONE threshold for the ENTIRE image. This will NOT work for images that come from a camera where the lighting may be different in different parts of the image. YouDoltWrong was on the right track with using an adaptive threshold (but with wrong parameters), and now you are pointing him in the wrong direction. – Elmue Jan 10 '18 at 14:38
  • @Elmue OTSU does not take an average; the value is taken from between the 2 highest peaks of the histogram (sort of). OTSU will work for high-contrast scenarios even if the lighting conditions are not great. For real-time applications requiring many samplings per second, adaptiveThreshold is definitely not the way to go, though. – Quest Mar 20 '22 at 20:26
  • @Quest I have tested both fixed and adaptive thresholds. If you just have a black and white image, for example a chessboard with some symbols on white paper, and hold that into the camera, you will get unusable results with a fixed threshold, while with an adaptive threshold you get very useful results. – Elmue Mar 21 '22 at 23:25
  • @Elmue I think we can both agree that this is the expected result. I was referring to the point that OTSU will work for scenarios where the foreground is in high contrast to the background, so that the thresholding value can easily and accurately be calculated from the histogram. Take a look at the following example, where the lighting conditions are not ideal due to the reflective background on the left, but the high contrast makes it possible to distinguish between the two: https://ibb.co/1dR10Jh – Quest Mar 23 '22 at 22:01