
What I'm trying to do is measure the thickness of eyeglasses frames. My idea was to measure the thickness of the frame's contours (there may be a better way?). So far I have outlined the frame of the glasses, but there are gaps where the lines don't meet. I thought about using HoughLinesP, but I'm not sure whether it's what I need.

So far I have conducted the following steps:

  • Convert the image to grayscale
  • Create an ROI around the eye/glasses area
  • Blur the image
  • Dilate the image (I did this to handle thin-framed glasses)
  • Conduct Canny edge detection
  • Find contours

These are the results:

This is my code so far:

//convert to grayscale
cv::Mat grayscaleImg;
cv::cvtColor( img, grayscaleImg, CV_BGR2GRAY );

//create ROI
cv::Mat eyeAreaROI(grayscaleImg, centreEyesRect);
cv::imshow("roi", eyeAreaROI);

//blur
cv::Mat blurredROI;
cv::blur(eyeAreaROI, blurredROI, cv::Size(3,3));
cv::imshow("blurred", blurredROI);

//dilate thin lines
cv::Mat dilated_dst;
int dilate_elem = 0;
int dilate_size = 1;
int dilate_type = MORPH_RECT;

cv::Mat element = getStructuringElement(dilate_type, 
    cv::Size(2*dilate_size + 1, 2*dilate_size+1), 
    cv::Point(dilate_size, dilate_size));

cv::dilate(blurredROI, dilated_dst, element);
cv::imshow("dilate", dilated_dst);

//edge detection
int lowThreshold = 100;
int ratio = 3;
int kernel_size = 3;    

cv::Canny(dilated_dst, dilated_dst, lowThreshold, lowThreshold*ratio, kernel_size);

//create matrix of the same type and size as ROI
cv::Mat dst;
dst.create(eyeAreaROI.size(), dilated_dst.type());
dst = cv::Scalar::all(0);

dilated_dst.copyTo(dst, dilated_dst);
cv::imshow("edges", dst);

//join the lines and fill in
std::vector<cv::Vec4i> hierarchy;
std::vector<std::vector<cv::Point>> contours;

cv::findContours(dilated_dst, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
cv::imshow("contours", dilated_dst);

I'm not entirely sure what the next steps should be, or, as I said above, whether I should use HoughLinesP and how to implement it. Any help is very much appreciated!

LKB
  • Have you considered segmentation? By any means necessary, separate your pixels into two groups: (1) pixels belonging to the glasses, (2) pixels not belonging to the glasses. Use the superpixel notion: each pixel has various characteristics: color, position, whether it belongs to any contour you've already found, whether it lies on an edge, etc. – LovaBill Oct 17 '14 at 09:18
  • I think your contours aren't good because there are some gaps. Try to dilate your Canny results before contour extraction and verify your contours by drawing them filled on a new image. If the contours are extracted correctly, you could compute the distance transform of the inverted filled contour; the frame thickness could then be approximated by twice the maximum distance found. – Micka Oct 17 '14 at 09:27
  • Hi @William, thanks for the reply! I did think about performing skin detection and segmenting from there. I've also been looking into probable positioning and the like. I'm not sure how to go about detecting which pixels belong to what, but I'll look into it. – LKB Oct 17 '14 at 09:31
  • Hi @Micka, thank you, too, for your reply! Very helpful suggestion! I'll include another dilation after Canny and proceed from there. Cheers! – LKB Oct 17 '14 at 09:31
  • @Micka - just added in another dilation after Canny and results have already improved: http://i.imgur.com/G4DwAA6.png – LKB Oct 17 '14 at 09:37
  • @LBran: can you provide sample images without window bars (e.g. use `cv::imwrite("edges.png", dilated_dst)` directly after your Canny)? – Micka Oct 17 '14 at 09:48
  • @Micka - Absolutely, please see the imgur album: http://imgur.com/a/SosQz – LKB Oct 17 '14 at 10:16
  • Great, thank you. If I can find the time I'll play with the problem :) – Micka Oct 17 '14 at 10:26
  • Oh wow, thank you @Micka! I'll be messing around with this for the next week or two, so if I make any progress, I'll post it back here. – LKB Oct 17 '14 at 10:30
  • In the given example image, dilating will make the frame thinner, because dilation takes the maximum and your frame pixels are darker than the background. That may be why your edges are breaking. If you erode instead, you might get a better result for this example. – dhanushka Oct 17 '14 at 10:44
  • Thanks for the advice, @dhanushka - I'll switch dilate to erode and see what happens! – LKB Oct 17 '14 at 10:46

2 Answers


I think there are two main problems:

  1. Segment the glasses frame

  2. Find the thickness of the segmented frame

I'll now post a way to segment the glasses frame in your sample image. Maybe this method will work for different images too, but you'll probably have to adjust the parameters, or at least you may be able to reuse the main ideas.

The main idea is: first, find the biggest contour in the image, which should be the glasses frame. Second, find the two biggest contours within that biggest contour, which should be the glasses (lenses) within the frame!

I use this image as input (which should be your blurred but not dilated image):

[image: blurred grayscale input]

// this function finds the biggest <amount> contours. There are probably faster ways, but it should work...
std::vector<std::vector<cv::Point>> findBiggestContours(std::vector<std::vector<cv::Point>> contours, int amount)
{
    std::vector<std::vector<cv::Point>> sortedContours;

    if(amount <= 0 || amount > (int)contours.size()) amount = (int)contours.size();

    for(int chosen = 0; chosen < amount; )
    {
        double biggestContourArea = 0;
        int biggestContourID = -1;
        for(unsigned int i=0; i<contours.size(); ++i)
        {
            double tmpArea = cv::contourArea(contours[i]);
            if(tmpArea > biggestContourArea)
            {
                biggestContourArea = tmpArea;
                biggestContourID = i;
            }
        }

        if(biggestContourID >= 0)
        {
            //std::cout << "found area: " << biggestContourArea << std::endl;
            // found biggest contour
            // add contour to sorted contours vector:
            sortedContours.push_back(contours[biggestContourID]);
            chosen++;
            // remove biggest contour from original vector:
            contours[biggestContourID] = contours.back();
            contours.pop_back();
        }
        else
        {
            // should only happen if all remaining contours have zero area
            return sortedContours;
        }

    }

    return sortedContours;
}

int main()
{
    cv::Mat input = cv::imread("../Data/glass2.png", CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat inputColors = cv::imread("../Data/glass2.png"); // used for displaying later
    cv::imshow("input", input);

    //edge detection
    int lowThreshold = 100;
    int ratio = 3;
    int kernel_size = 3;    

    cv::Mat canny;
    cv::Canny(input, canny, lowThreshold, lowThreshold*ratio, kernel_size);
    cv::imshow("canny", canny);

    // close gaps with a "close" operator (three dilations followed by three
    // erosions; cv::morphologyEx with MORPH_CLOSE and iterations=3 is equivalent)
    cv::Mat mask = canny.clone();
    cv::dilate(mask,mask,cv::Mat());
    cv::dilate(mask,mask,cv::Mat());
    cv::dilate(mask,mask,cv::Mat());
    cv::erode(mask,mask,cv::Mat());
    cv::erode(mask,mask,cv::Mat());
    cv::erode(mask,mask,cv::Mat());

    cv::imshow("closed mask",mask);

    // extract outermost contour
    std::vector<cv::Vec4i> hierarchy;
    std::vector<std::vector<cv::Point>> contours;
    //cv::findContours(mask, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
    cv::findContours(mask, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);


    // find biggest contour which should be the outer contour of the frame
    std::vector<std::vector<cv::Point>> biggestContour;
    biggestContour = findBiggestContours(contours,1); // find the one biggest contour
    if(biggestContour.size() < 1)
    {
        std::cout << "Error: no outer frame of glasses found" << std::endl;
        return 1;
    }

    // draw contour on an empty image
    cv::Mat outerFrame = cv::Mat::zeros(mask.rows, mask.cols, CV_8UC1);
    cv::drawContours(outerFrame,biggestContour,0,cv::Scalar(255),-1);
    cv::imshow("outer frame border", outerFrame);

    // now find the glasses, which should be the outermost contours within the frame. Therefore, erode the outer border ;)
    cv::Mat glassesMask = outerFrame.clone();
    cv::erode(glassesMask,glassesMask, cv::Mat());
    cv::imshow("eroded outer",glassesMask);

    // dilating after the erosion makes the pair an open operator, which can be used to clean the image.
    cv::Mat cleanedOuter;
    cv::dilate(glassesMask,cleanedOuter, cv::Mat());
    cv::imshow("cleaned outer",cleanedOuter);


    // use the outer frame mask as a mask for copying the Canny edges. The result should be only the inner edges inside the frame
    cv::Mat glassesInner;
    canny.copyTo(glassesInner, glassesMask);

    // there is a small gap in the contour which unfortunately can't be closed with a closing operator...
    cv::dilate(glassesInner, glassesInner, cv::Mat());
    //cv::erode(glassesInner, glassesInner, cv::Mat());
    // this part is a bit of a cheat... ideally we would erode directly after the dilation, closing small gaps without modifying the thickness.
    cv::imshow("innerCanny", glassesInner);


    // extract contours from within the frame
    std::vector<cv::Vec4i> hierarchyInner;
    std::vector<std::vector<cv::Point>> contoursInner;
    //cv::findContours(glassesInner, contoursInner, hierarchyInner, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
    cv::findContours(glassesInner, contoursInner, hierarchyInner, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    // find the two biggest contours which should be the glasses within the frame
    std::vector<std::vector<cv::Point>> biggestInnerContours;
    biggestInnerContours = findBiggestContours(contoursInner,2); // find the two biggest contours
    if(biggestInnerContours.size() < 1)
    {
        std::cout << "Error: no inner frames of glasses found" << std::endl;
        return 1;
    }

    // draw the 2 biggest contours which should be the inner glasses
    cv::Mat innerGlasses = cv::Mat::zeros(mask.rows, mask.cols, CV_8UC1);
    for(unsigned int i=0; i<biggestInnerContours.size(); ++i)
        cv::drawContours(innerGlasses,biggestInnerContours,i,cv::Scalar(255),-1);

    cv::imshow("inner frame border", innerGlasses);

    // since we dilated earlier and didn't erode right afterwards, we have to erode here... this is a bit of a cheat :-(
    cv::erode(innerGlasses,innerGlasses,cv::Mat() );

    // remove the inner glasses from the frame mask
    cv::Mat fullGlassesMask = cleanedOuter - innerGlasses;
    cv::imshow("complete glasses mask", fullGlassesMask);

    // color code the result to get an impression of segmentation quality
    cv::Mat outputColors1 = inputColors.clone();
    cv::Mat outputColors2 = inputColors.clone();
    for(int y=0; y<fullGlassesMask.rows; ++y)
        for(int x=0; x<fullGlassesMask.cols; ++x)
        {
            if(!fullGlassesMask.at<unsigned char>(y,x))
                outputColors1.at<cv::Vec3b>(y,x)[1] = 255;
            else
                outputColors2.at<cv::Vec3b>(y,x)[1] = 255;

        }

    cv::imshow("output", outputColors1);

    /*
    cv::imwrite("../Data/Output/face_colored.png", outputColors1);
    cv::imwrite("../Data/Output/glasses_colored.png", outputColors2);
    cv::imwrite("../Data/Output/glasses_fullMask.png", fullGlassesMask);
    */

    cv::waitKey(-1);
    return 0;
}

I get this result for segmentation:

[image: segmentation mask]

The overlay on the original image will give you an impression of the quality:

[image: mask overlaid on the original image]

and the inverse:

[image: inverse overlay]

There are some tricky parts in the code and it isn't tidied up yet. I hope it's understandable.

The next step would be to compute the thickness of the segmented frame. My suggestion is to compute the distance transform of the inverted mask. From this you will want to run a ridge detection, or skeletonize the mask to find the ridge. After that, use the median of the ridge distance values.

Anyway, I hope this post can help you a little, although it's not a full solution yet.

Micka
  • Hi Micka, thank you an unbelievable amount for taking the time to help me. I've run your code and have the following output: http://i.imgur.com/aNnXOlq.png It's a little different to yours (how would that happen?), i.e. one of the inner glass contours hasn't closed. Any idea how I'd close this up? I'll have a look around the web and play with the code in the meantime, see if I can fix it up. – LKB Oct 20 '14 at 03:42
  • Oops, forgot to blur the image first. :) – LKB Oct 20 '14 at 09:24
  • Beware that you might have similar problems for different images! – Micka Oct 20 '14 at 10:28
  • I'll write a 2nd answer for extracting the thickness if the segmentation is given, when I find the time! – Micka Oct 20 '14 at 10:29
  • And keep in mind that you could add some heuristic for testing whether the segmentation is correct. The outer contour should cover some big part of the upper-face image, the inner glasses should cover a big part of the frame contour, and both glasses should have very similar size! – Micka Oct 20 '14 at 10:33
  • Yeah, I've begun testing with other images with thick frames; I'll adjust some things and see where that gets me. :) Absolutely @ segmentation heuristic. Thank you again for all your help, Micka! – LKB Oct 20 '14 at 10:44
  • Hey Micka, I've conducted a distance transform on the mask (http://i.imgur.com/JlVvHQo.jpg), but I'm not sure what you mean by computing a ridge detection of the skeletonized mask. Any advice? Thanks! – LKB Oct 30 '14 at 04:55
  • @LBran sorry, I didn't see your comment in October... there are two methods: either skeletonize the mask OR use ridge detection on the distance transform; both should give similar results. I had a typo in the text, I meant 'or', not 'of' =) – Micka Dec 29 '14 at 07:39

Depending on lighting, frame color, etc., this may or may not work, but how about simple color detection to separate the frame? The frame color will usually be a lot darker than human skin. You'll end up with a binary image (just black and white), and by counting the black pixels you get the area of the frame.

Another possible way is to get better edge detection by adjusting the parameters and dilating/eroding until you get better contours. You will also need to differentiate the frame contour from the lenses, and then you can apply cvContourArea.

  • Thanks for your reply, Sonny! I'm not so sure about the colour detection, but I can give it a go! I think your latter suggestion of fine-tuning the contour detection may work out better, so I'll see how that goes, too. – LKB Oct 17 '14 at 08:21