I am working on a video-processing project that takes a camera feed as input and has a static background. I don't need any dynamic background modelling such as BackgroundSubtractorMOG in OpenCV. I am trying to enclose each foreground object in a bounding box. This is what I did:
cv::absdiff(back_frame, matProcessed, temp); // back_frame: static background, matProcessed: current camera frame, both grayscale
cv::threshold(temp, temp, 20, 255, THRESH_BINARY);

std::vector< std::vector<Point> > contours;
cv::findContours(temp, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));

std::vector<Rect> boundRect;                 // only keep boxes for sufficiently large contours
std::vector< std::vector<Point> > contours_poly( contours.size() );
for (size_t i = 0; i < contours.size(); i++)
{
    if (contourArea(contours[i]) > 100)      // skip small noise contours
    {
        approxPolyDP( Mat(contours[i]), contours_poly[i], 10, true );
        boundRect.push_back( boundingRect( Mat(contours_poly[i]) ) );
    }
}
cv::Rect r;                                  // cv::Rect, not cv::rect
for (size_t i = 0; i < boundRect.size(); i++)
{
    r = boundRect[i];
    cv::rectangle(
        frame,                               // draw on the original colour frame
        cv::Point(r.x, r.y),
        cv::Point(r.x + r.width, r.y + r.height),
        CV_RGB(0, 255, 0)
    );
}
But the problem is that I am not extracting the foreground correctly. Is there any way I can improve the foreground extraction so that the foreground object is always enclosed in a bounding box, irrespective of the background complexity and other factors (noise, lighting changes, etc.)?