
I'm working on a way to detect the floor in an image. I'm trying to accomplish this by reducing the image to areas of color and then assuming that the largest area is the floor. (We get to make some pretty extensive assumptions about the environment the robot will operate in)

What I'm looking for is some recommendations on algorithms that would be suited to this problem. Any help would be greatly appreciated.

Edit: specifically I am looking for an image segmentation algorithm that can reliably extract one area. Everything I've tried (mainly PyrSegmentation) seems to work by reducing the image to N colors. This is causing false positives when the camera is looking at an empty area.

Milan
pkinsky
  • Could you elaborate on what exactly is wrong with the segmentation you are getting from `PyrSegmentation()`? – Michael Koval Jul 08 '11 at 05:31
  • it leaps around too much. When applied to the video feed from a webcam aimed at a white piece of paper with a dark object on it, it will occasionally work and occasionally split the white area into regions. I can fix this by increasing the second threshold but then when I remove the object it tries to split the blank paper up into regions. I might not be using it properly but it seems to be the wrong approach for this problem. I'm going to try a histogram segmentation based approach next. @Michael Koval – pkinsky Jul 08 '11 at 08:30
  • Over-segmentation is generally a very tricky problem to solve. In most cases, I have had better luck avoiding segmentation entirely or making the following stages robust to over-segmentation. – Michael Koval Jul 09 '11 at 07:11

2 Answers


Since floor detection is the main aim, instead of segmenting by color you could try separating by texture.

The Eigen transform paper describes a single-value descriptor of texture "roughness" using the average of eigenvalues over a grayscale window in the image/video frame. On pg. 78 of the paper they apply the mean-shift segmentation on the eigen-transform output image, effectively separating it into different textures.

Since your images are from a video feed, there can be a lot of variation in lighting, so color segmentation might pose a few problems (unless you're working with HSV or another color space, as mentioned above). The calculation of the eigenvalues is very simple and fast in OpenCV with the cvSVD() function.
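A minimal sketch of the roughness descriptor: for each window, take the mean of the smaller singular values of the grayscale patch. The window size and the number of discarded singular values are assumed for illustration, and NumPy's SVD stands in for `cvSVD()`:

```python
import numpy as np

def texture_roughness(gray, w=8, k=3):
    """Sketch of an eigen-transform-style texture score: for each
    w x w window, average the singular values after dropping the
    k largest. Smooth regions score near zero; textured regions
    score higher. w and k are assumed example values."""
    h, wd = gray.shape
    out = np.zeros((h // w, wd // w))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            win = gray[i*w:(i+1)*w, j*w:(j+1)*w].astype(np.float64)
            s = np.linalg.svd(win, compute_uv=False)
            out[i, j] = s[k:].mean()  # drop the k largest singular values
    return out
```

A flat floor patch then gives a low score while carpet or clutter gives a higher one, and the score map can be fed into mean-shift segmentation as in the paper.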

AruniRC

If you can make the assumption of colour constancy, your main issue is going to be changes in lighting that throw off your colour detection. To that end, convert your input image to HSV, HSL, CIE-Lab, YUV or some other luminance-separated colour space and segment your image based on just the colour channels (leave out the luminance channel: V, L, L and Y respectively in the examples above). This will help you overcome the obstacle of shadows and variations in lighting.

jilles de wit
  • My problem is mainly with the segmentation step. I can make it work on still images, or under one lighting condition, but as soon as that changes it starts to fail. – pkinsky Jul 07 '11 at 14:27
  • Yes, that is why you want to convert your image to a relatively lighting-independent format. – jilles de wit Jul 08 '11 at 07:29
  • I'm using a sheet of white paper as a background right now. When I cast a shadow on it I get black spots on white within the shadow (hue displayed as grayscale, OpenCV). One possible solution I can think of is using a colored background instead of a white one, since white can show up as almost any hue with very low saturation and high value. @jilles de wit – pkinsky Jul 08 '11 at 08:37
  • This could be a light colour issue. Using a distinctively coloured background rather than white could help. – jilles de wit Jul 08 '11 at 08:40