
I am trying to implement something similar to this using OpenCV:

https://mathematica.stackexchange.com/questions/19546/image-processing-floor-plan-detecting-rooms-borders-area-and-room-names-t

However, I am running into some walls (probably due to my own ignorance in working with OpenCV).

When I try to perform a distance transform on my image, I am not getting the expected result at all.

This is the original image I am working with:

[image: original floor plan]

This is the image I get after doing some cleanup with OpenCV:

[image: floor plan after cleanup]

This is the weirdness I get after trying to run a distance transform on the above image. My understanding is that it should look more like a blurry heatmap. If I follow the OpenCV example past this point and try to run a threshold to find the distance peaks, I get nothing but a black image.

[image: distance transform output]
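For reference, this is roughly what the relevant steps of that OpenCV example look like (a minimal sketch, not my exact code; bw stands in for the cleaned-up binary image and the file name is a placeholder):

// Sketch of the distance-transform steps from the OpenCV
// image-segmentation tutorial (variable names and path are placeholders).
cv::Mat bw = cv::imread("cleaned_floorplan.png", cv::IMREAD_GRAYSCALE); // rooms white, walls/background black
cv::Mat dist;
cv::distanceTransform(bw, dist, cv::DIST_L2, 3);

// dist comes back as CV_32F; normalize to [0, 1] so it can be viewed
// as the "blurry heatmap" and thresholded
cv::normalize(dist, dist, 0, 1.0, cv::NORM_MINMAX);

// keep only the peaks of the distance map, then thicken them slightly
cv::threshold(dist, dist, 0.4, 1.0, cv::THRESH_BINARY);
cv::Mat kernel1 = cv::Mat::ones(3, 3, CV_8U);
cv::dilate(dist, dist, kernel1);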

This is the code I have cobbled together so far from a few different OpenCV examples:

cv::Mat outerBox = cv::Mat(matImage.size(), CV_8UC1);
cv::Mat kernel = (cv::Mat_<uchar>(3,3) << 0,1,0, 1,1,1, 0,1,0);

for(int x = 0; x < 3; x++) {
    cv::GaussianBlur(matImage, matImage, cv::Size(11,11), 0);
    cv::adaptiveThreshold(matImage, outerBox, 255, cv::ADAPTIVE_THRESH_MEAN_C, cv::THRESH_BINARY, 5, 2);
    cv::bitwise_not(outerBox, outerBox);

    cv::dilate(outerBox, outerBox, kernel);
    cv::dilate(outerBox, outerBox, kernel);

    removeBlobs(outerBox, 1);

    erode(outerBox, outerBox, kernel);
}

cv::Mat dist;
cv::bitwise_not(outerBox, outerBox);
distanceTransform(outerBox, dist, cv::DIST_L2, 5);
// Normalize the distance image for range = {0.0, 1.0}
// so we can visualize and threshold it
normalize(dist, dist, 0, 1., cv::NORM_MINMAX);

// Using a threshold at this point like the OpenCV example shows to find peaks
// just returns a black image right now:
//threshold(dist, dist, .4, 1., CV_THRESH_BINARY);
//cv::Mat kernel1 = cv::Mat::ones(3, 3, CV_8UC1);
//dilate(dist, dist, kernel1);

self.mainImage.image = [UIImage fromCVMat:outerBox];

void removeBlobs(cv::Mat &outerBox, int iterations) {
    int count = 0;
    int max = -1;

    cv::Point maxPt;

    for(int iteration = 0; iteration < iterations; iteration++) {
        // Flood fill every white region with 64 and remember the largest one
        for(int y = 0; y < outerBox.size().height; y++) {
            uchar *row = outerBox.ptr(y);
            for(int x = 0; x < outerBox.size().width; x++) {
                if(row[x] >= 128) {
                    int area = floodFill(outerBox, cv::Point(x,y), CV_RGB(0,0,64));

                    if(area > max) {
                        maxPt = cv::Point(x,y);
                        max = area;
                    }
                }
            }
        }

        // Restore the largest blob to white and erase everything else
        floodFill(outerBox, maxPt, CV_RGB(255,255,255));

        for(int y = 0; y < outerBox.size().height; y++) {
            uchar *row = outerBox.ptr(y);
            for(int x = 0; x < outerBox.size().width; x++) {
                if(row[x] == 64 && x != maxPt.x && y != maxPt.y) {
                    int area = floodFill(outerBox, cv::Point(x,y), CV_RGB(0,0,0));
                }
            }
        }
    }
}

I've been banging my head on this for a few hours and I am totally stuck in the mud on it, so any help would be greatly appreciated. This is a little bit out of my depth, and I feel like I am probably just making some basic mistake somewhere without realizing it.

EDIT: Using the same code as above with OpenCV for Mac (not iOS), I get the expected results:

[image: expected results on Mac]

This seems to indicate that the issue is with the Mat -> UIImage bridging that OpenCV suggests using. I am going to push forward using the Mac library to test my code, but it would sure be nice to be able to get consistent results from the UIImage bridging as well.

+ (UIImage*)fromCVMat:(const cv::Mat&)cvMat
{
    // (1) Construct the correct color space
    CGColorSpaceRef colorSpace;
    if ( cvMat.channels() == 1 ) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    // (2) Create image data reference
    CFDataRef data = CFDataCreate(kCFAllocatorDefault, cvMat.data, (cvMat.elemSize() * cvMat.total()));

    // (3) Create CGImage from cv::Mat container
    CGDataProviderRef provider = CGDataProviderCreateWithCFData(data);
    CGImageRef imageRef = CGImageCreate(cvMat.cols,
                                        cvMat.rows,
                                        8,
                                        8 * cvMat.elemSize(),
                                        cvMat.step[0],
                                        colorSpace,
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                                        provider,
                                        NULL,
                                        false,
                                        kCGRenderingIntentDefault);

    // (4) Create UIImage from CGImage
    UIImage * finalImage = [UIImage imageWithCGImage:imageRef];

    // (5) Release the references
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CFRelease(data);
    CGColorSpaceRelease(colorSpace);

    // (6) Return the UIImage instance
    return finalImage;
}
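One detail that may be relevant here (just my assumption at this point): fromCVMat: creates the CGImage with 8 bits per component, but after normalize the distance image is CV_32F with values in [0, 1], so displaying dist through this method would reinterpret raw float bytes as pixels. A minimal sketch of converting it to 8-bit first, in case that turns out to be the problem:

// Sketch (untested on the iOS side): convert the normalized CV_32F
// distance map to CV_8UC1 before handing it to the bridging code,
// which assumes 8 bits per component. dist8u is a placeholder name.
cv::Mat dist8u;
dist.convertTo(dist8u, CV_8U, 255.0);   // scale [0, 1] floats to [0, 255] bytes
self.mainImage.image = [UIImage fromCVMat:dist8u];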
bojangles
  • Have you tried changing the background to black instead of white in the cleaned up floor plan, before doing the distance transform? – Rick M. Feb 28 '17 at 08:52
  • The images from your question have disappeared. – Michał Gacka Feb 28 '17 at 08:55
  • @m3h0w it seems imgur is having capacity issues. The images come and go for me as well. – bojangles Feb 28 '17 at 17:43
  • @RickM. Please check my updates to the problem. It seems the OpenCV code is working, but the iOS UIImage bridging isn't showing images consistent with what imshow outputs using the Mac library. – bojangles Feb 28 '17 at 17:43
  • @bojangles sorry mate, unfortunately I have no experience with UIImage bridging. But happy to know that your original code was working! Good Luck! – Rick M. Feb 28 '17 at 18:12

1 Answer


I worked out the distance transform in OpenCV using Python and was able to obtain this:

[image: distance transform result]

You stated "I get nothing but a black image". Well, I faced the same problem initially, until I converted the distance-transformed image to an unsigned 8-bit integer type using np.uint8(dist_transform).
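On the C++ side (which the question uses), the equivalent conversion would be something along these lines (just a sketch; it assumes the distance map was normalized to [0, 1] as in the question, hence the 255 scale factor):

// Rough C++ equivalent of np.uint8(dist_transform): rescale the
// normalized CV_32F distance map into an 8-bit image that can be
// displayed and thresholded like any other grayscale image.
cv::Mat dist8u;
dist.convertTo(dist8u, CV_8U, 255.0);
cv::imshow("distance transform", dist8u);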

I did something extra as well (you may or may not need it): in order to segment the rooms to a certain extent, I thresholded the distance-transformed image. I got this as a result:

[image: rooms segmented by thresholding the distance transform]
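(For completeness, that extra thresholding step amounts to something like the sketch below; the cutoff value is purely illustrative and would need tuning per image.)

// Sketch: threshold the 8-bit distance map (dist8u from above) so only
// pixels far from any wall survive, which roughly separates the rooms.
// The cutoff of 100 is an illustrative value, not the one used here.
cv::Mat rooms;
cv::threshold(dist8u, rooms, 100, 255, cv::THRESH_BINARY);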

Jeru Luke
  • Thanks for the input! I fired up the Mac version of OpenCV and copied my code over, and I get very similar results when displaying the images with imshow. However, I still get the wonky output on the iOS version when displaying a UIImage generated using OpenCV's provided bridging between Mat and UIImage. The good news is that this means my code has probably been working this whole time; the bad news is that I probably just need to work on this using the Mac version of the library for testing, which is frustrating. – bojangles Feb 28 '17 at 17:33