
I am working on a video-processing project that takes a camera feed as input and has a static background. I don't need any dynamic background generation like OpenCV's BackgroundSubtractorMOG. I am trying to bound the foreground objects inside bounding boxes, so this is what I did:

cv::absdiff(back_frame, matProcessed, temp);   // back_frame is the static background, matProcessed is the camera frame in grayscale
cv::threshold(temp, temp, 20, 255, cv::THRESH_BINARY);

cv::findContours(temp, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0));
std::vector< std::vector<cv::Point> > contours_poly( contours.size() );
std::vector<cv::Rect> boundRect;

for( size_t i = 0; i < contours.size(); i++ )
{
    if( contourArea(contours[i]) > 100 )       // skip small noise blobs
    {
        approxPolyDP( cv::Mat(contours[i]), contours_poly[i], 10, true );
        boundRect.push_back( boundingRect( cv::Mat(contours_poly[i]) ) );
    }
}
cv::Rect r;
for( size_t i = 0; i < boundRect.size(); i++ )
{
    r = boundRect[i];

    cv::rectangle(
        frame,
        cv::Point(r.x, r.y),
        cv::Point(r.x + r.width, r.y + r.height),
        CV_RGB(0, 255, 0)
    );
}

But the problem is that I am not getting the foreground correctly. Is there any way I can improve the foreground generation and always bound the foreground object with rectangular bounding boxes, irrespective of background complexity and other factors?

hunter

1 Answer


There are various methods for this, ranging from simple to complex. A pixel-wise probabilistic approach is definitely recommended as a starting point. You can also use something like a Markov model on the appearance to refine the result. Refer to this paper, specifically the Related Work section and the final part where they refine the foreground objects.
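To illustrate the simple end of that spectrum, below is a minimal sketch of a pixel-wise, single-Gaussian background model with morphological clean-up. It is not the method from the paper; the camera index, learning rate, threshold factor, initial variance and kernel size are placeholder assumptions you would need to tune.

    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::VideoCapture cap(0);                    // camera index 0 is an assumption
        if (!cap.isOpened()) return -1;

        cv::Mat frame, gray, mean32, var32;
        const float alpha = 0.01f;                  // learning rate (placeholder value)
        const float k2    = 2.5f * 2.5f;            // squared threshold factor (placeholder value)

        for (;;)
        {
            if (!cap.read(frame)) break;
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            cv::Mat gray32;
            gray.convertTo(gray32, CV_32F);

            if (mean32.empty())                     // initialise the model from the first frame
            {
                gray32.copyTo(mean32);
                var32 = cv::Mat(gray32.size(), CV_32F, cv::Scalar(50.0f));
                continue;
            }

            // Per-pixel squared distance from the background mean
            cv::Mat diff  = gray32 - mean32;
            cv::Mat dist2 = diff.mul(diff);

            // Foreground wherever the distance exceeds k^2 * variance
            cv::Mat thr = k2 * var32;
            cv::Mat fg  = dist2 > thr;              // CV_8U mask, 255 = foreground

            // Update mean and variance only at pixels currently classified as background
            cv::Mat bgMask = ~fg;
            cv::accumulateWeighted(gray32, mean32, alpha, bgMask);
            cv::Mat varNew = (1.0f - alpha) * var32 + alpha * dist2;
            varNew.copyTo(var32, bgMask);

            // Morphological clean-up before running findContours on the mask
            cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
            cv::morphologyEx(fg, fg, cv::MORPH_OPEN, kernel);
            cv::morphologyEx(fg, fg, cv::MORPH_CLOSE, kernel);

            cv::imshow("foreground", fg);
            if (cv::waitKey(30) == 27) break;       // Esc to quit
        }
        return 0;
    }

You can then feed fg into the same findContours/boundingRect code from the question. A mixture of Gaussians per pixel, or the Markov-model refinement described in the paper, would handle gradual lighting changes and shadows better than this single-Gaussian sketch.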

Zaphod