I'm working on a project where I have to automatically segment different parts of a car (e.g. door, headlight) in an image provided by a camera.

As a first step I'd like to remove the background, so the algorithm won't find anything where it isn't supposed to.

I also have an image of just the background, but the illumination is very different due to exposure time, light reflecting off the car, etc.

I tried to get rid of the background by simple subtraction; unfortunately, due to the very different lighting conditions, this didn't turn out to be very helpful.

So next I applied histogram equalization, but this also didn't help much.

How can I get rid of the background in this differently lit scene? Is there an OpenCV method I could use with these two images?

genpfault
Bubsy Bobcat

4 Answers

OpenCV has three different classes for background subtraction:

// OpenCV 2.x C++ API; each operator() call updates the model with
// `image` and writes the binary foreground mask. The third argument
// is the learning rate; -1.0 means "choose automatically".
BackgroundSubtractorGMG  bs_gmg;
BackgroundSubtractorMOG  bs_mog;
BackgroundSubtractorMOG2 bs_mog2;

Mat foreground_gmg;
bs_gmg ( image, foreground_gmg,  -1.0 );
Mat foreground_mog;
bs_mog ( image, foreground_mog,  -1.0 );
Mat foreground_mog2;
bs_mog2( image, foreground_mog2, -1.0 );

You can read about them and use the one that works best for you.

Safir
  • Hi Safir, thanks for your input. Am I right in assuming that this will only work with several images and not with only 2 (background and background+foreground) ? The examples I have found rely on video streams, i.e. multiple input images. – Bubsy Bobcat Apr 30 '13 at 07:55
  • Background has meaning in the sense of video streams; otherwise, how do you know whether something is background or not? Those algorithms learn what moves (and is therefore foreground) and what doesn't move (and is therefore background). If you have a static background, I think with subtraction you should be able to remove it from any arbitrary image. – Safir Apr 30 '13 at 09:23
1

My experience suggests that illumination conditions can vary so much that two images are simply not enough. You started with a pixel-based approach, making a simple pixel-by-pixel subtraction of the two images, but the illumination changes make the colors appear very different, even in HSV space. This is a case of the aperture problem, one of the most basic difficulties in computer vision: in simple terms, we need more context.

So you tried to get that context by estimating and correcting global illumination parameters, and discovered it is not enough, because different regions of the image may have different reflectance properties, or sit at different angles to the light source. If you continue with this approach, the next step is to segment the image into regions based on appearance and equalize the histogram in each region separately. Try watershed segmentation, for instance.

There is a whole other approach, too. The background may actually not be the most informative cue here, so why start with it? You could turn to the Viola-Jones approach instead and work your way up from there. Once you get it working, add information from the background to increase quality.

Anatoliy Kats
  • 123
  • 1
  • 1
  • 9
0

You didn't mention which color space you use; I assume RGB? There are color spaces that put brightness in a separate channel by design, so you can apply the background subtraction on the channels that are less sensitive to illumination. Take a look at http://en.wikipedia.org/wiki/HSV_color_space

Adi
  • I'm currently using RGB color space (BGR in opencv from what I gather). I'll try to convert to another space and see if this will help, thanks Adi. – Bubsy Bobcat Apr 30 '13 at 07:56

Well... just saying, I'm not sure if this will work, but you could apply the subtraction over a bigger area (like a kernel) and then look at the histogram of the result. E.g. let's say our kernel patch is 30x30 pixels and we apply it at pixel p(x,y) in the background image and q(x,y) in the test image (p and q are the same respective positions in each image). As a result we get a 30x30 subtracted patch image.

Now, try to work with this subtracted patch, e.g. its histogram. A histogram with all its mass at zero means the compared patches are identical; a histogram concentrated in a narrow band away from zero (a constant offset) might mean the same surface under different illumination. Otherwise, consider the patch foreground.
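A minimal numpy sketch of that idea on synthetic 30x30 patches; the 64-bin histogram, the band width, and the 90% mass threshold are all assumptions to tune:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 30x30 patches: a background patch, the same patch under a
# global brightness offset, and a genuinely different "foreground" patch.
bg_patch  = rng.integers(100, 120, (30, 30)).astype(np.int16)
lit_patch = bg_patch + 15                        # same surface, brighter
fg_patch  = rng.integers(0, 256, (30, 30)).astype(np.int16)

def looks_like_background(p, q, spread=5):
    """Histogram of the subtracted patch: if nearly all the mass sits in
    a narrow band (identical patches -> band at zero; a pure illumination
    offset -> band at that offset), call the patch background."""
    diff = p - q
    hist, _ = np.histogram(diff, bins=64)
    peak = int(hist.argmax())
    lo, hi = max(peak - spread, 0), min(peak + spread + 1, len(hist))
    return hist[lo:hi].sum() / diff.size > 0.9
```

Sliding this over both images (with cv2 you could vectorize the differencing and only loop over patch positions) gives a coarse foreground mask that tolerates per-region brightness offsets.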

LovaBill