
OpenCV Background subtraction with varying illumination

I'm working on a project where I have to automatically segment different parts of a car (e.g. door, headlight, etc.) in an image provided by a camera.

As a first step I'd like to remove the background, so the algorithm won't find anything where it's not supposed to.

I also have an image of just the background, but the illumination is very different due to exposure time, light reflecting off the car, etc.

I tried to get rid of the background by simple subtraction; unfortunately, due to the very different lighting conditions, this didn't turn out to be very helpful.

So next I applied a histogram equalization, but this also didn't help very much.
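
For reference, a simplified sketch of the two attempts (the file names are just placeholders):

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat bg    = cv::imread("background.png", cv::IMREAD_GRAYSCALE);
    cv::Mat scene = cv::imread("car_scene.png",  cv::IMREAD_GRAYSCALE);

    cv::equalizeHist(bg, bg);          // global equalization of the empty background
    cv::equalizeHist(scene, scene);    // and of the scene containing the car

    cv::Mat diff, mask;
    cv::absdiff(scene, bg, diff);                             // simple per-pixel difference
    cv::threshold(diff, mask, 30, 255, cv::THRESH_BINARY);    // 30 is an arbitrary cut-off

    cv::imwrite("foreground_mask.png", mask);
    return 0;
}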

How can I get rid of the background in this differently lit scene? Is there an OpenCV method that I could use with these two images?

Bubsy Bobcat asked Oct 16 '25 15:10


2 Answers

OpenCV has three different classes for background subtraction:

#include <opencv2/opencv.hpp>
using namespace cv;

// OpenCV 2.x API: each subtractor is a functor that updates its background model
// and writes a binary foreground mask; a learning rate of -1.0 means "use the default".
BackgroundSubtractorGMG  bs_gmg;
BackgroundSubtractorMOG  bs_mog;
BackgroundSubtractorMOG2 bs_mog2;

// 'image' is the current input frame (cv::Mat).
Mat foreground_gmg;
bs_gmg  ( image,  foreground_gmg,  -1.0 );
Mat foreground_mog;
bs_mog  ( image,  foreground_mog,  -1.0 );
Mat foreground_mog2;
bs_mog2 ( image,  foreground_mog2, -1.0 );

You can read about them and use the one that works best for you.
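
The snippet above uses the OpenCV 2.x API. On OpenCV 3 and later the same subtractors are created through factory functions (MOG and GMG live in the opencv_contrib bgsegm module), so the MOG2 version would look roughly like this minimal sketch, assuming a camera at index 0:

#include <opencv2/opencv.hpp>

int main() {
    // Factory function for the MOG2 subtractor (OpenCV 3+), default parameters.
    cv::Ptr<cv::BackgroundSubtractor> bs = cv::createBackgroundSubtractorMOG2();

    cv::VideoCapture cap(0);                  // camera index 0 is an assumption
    cv::Mat frame, foreground;
    while (cap.read(frame)) {
        bs->apply(frame, foreground, -1.0);   // -1 lets the model pick its learning rate
        cv::imshow("foreground", foreground);
        if (cv::waitKey(30) >= 0) break;      // stop on any key press
    }
    return 0;
}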

Safir answered Oct 19 '25 08:10


My experience suggests that the illumination conditions can vary so much that two images are simply not enough. You started with a pixel-based approach, a simple pixel-by-pixel subtraction of the two images, but the illumination changes make the colors appear very different, even in HSV space. This is a case of the aperture problem, one of the most basic difficulties in computer vision: in simple terms, we need more context.

So you tried to get that context by estimating and correcting global illumination parameters, and discovered it is not enough, because different regions of the image may have different reflectance properties, or sit at different angles to the light source. If you continue with this approach, the next step is to segment the image into regions based on appearance and equalize the histogram of each region separately. Try watershed segmentation, for instance.
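
A rough sketch of that region-based route, assuming OpenCV 3+ (watershed seeded by a distance transform, then a per-region min-max contrast stretch standing in for a true per-region equalization; the file name and thresholds are placeholders):

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img = cv::imread("car_scene.png");    // colour input, placeholder name
    cv::Mat gray, bin;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    // Distance transform of the binary image gives "sure foreground" seeds.
    cv::Mat dist;
    cv::distanceTransform(bin, dist, cv::DIST_L2, 5);
    cv::normalize(dist, dist, 0, 1.0, cv::NORM_MINMAX);
    cv::threshold(dist, dist, 0.5, 1.0, cv::THRESH_BINARY);
    dist.convertTo(dist, CV_8U, 255);

    // Connected components of the seeds become the watershed markers.
    cv::Mat markers;
    int nLabels = cv::connectedComponents(dist, markers);    // CV_32S labels
    cv::watershed(img, markers);                              // region borders are marked -1

    // Treat each region separately: here a per-region min-max stretch of the gray image.
    for (int r = 1; r < nLabels; ++r) {
        cv::Mat regionMask = (markers == r);
        cv::normalize(gray, gray, 0, 255, cv::NORM_MINMAX, -1, regionMask);
    }
    cv::imwrite("per_region_stretched.png", gray);
    return 0;
}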

There is a whole other approach: the background may not actually be the most informative cue here, so why start with it? You could turn to a Viola-Jones style detector instead and work your way up from there. Once you get that working, add information from the background to improve quality.
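
A minimal sketch of that detector-first route, assuming a cascade you would first train yourself with opencv_traincascade (the XML and image file names are placeholders):

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Placeholder: a cascade trained on the part of interest, e.g. headlights.
    cv::CascadeClassifier partDetector;
    if (!partDetector.load("headlight_cascade.xml"))
        return -1;

    cv::Mat img = cv::imread("car_scene.png");
    cv::Mat gray;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);              // cascades are usually run on equalized grayscale

    std::vector<cv::Rect> detections;
    partDetector.detectMultiScale(gray, detections, 1.1, 3);   // scale factor, min neighbours

    for (const cv::Rect& r : detections)
        cv::rectangle(img, r, cv::Scalar(0, 255, 0), 2);        // mark candidate part locations
    cv::imwrite("detections.png", img);
    return 0;
}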

Anatoliy Kats answered Oct 19 '25 09:10