I am working on a dataset (training + testing) which contains different shopping cart items (e.g. biscuits, soaps, etc.) with different backgrounds. I need to predict the product ID for all testing images (product IDs are unique for each product; let's say a Good-day 10 Rs pack has product ID 1, and so on for the other products).
My approach was to:
Extract the foreground from the image.
Apply the SIFT/SURF algorithm to find matching keypoints (a simplified sketch of this pipeline is shown below).
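For context, here is a simplified sketch of what that pipeline looks like (GrabCut is used purely as one possible foreground-extraction method, and the file paths, initial rectangle and ratio-test threshold are placeholders):

```python
import cv2
import numpy as np

# Placeholder paths: one reference image per product and one test image.
ref = cv2.imread("reference_product.jpg", cv2.IMREAD_GRAYSCALE)
test = cv2.imread("test_image.jpg")

# Step 1: rough foreground extraction with GrabCut, initialised from a
# rectangle slightly inside the image borders (placeholder initialisation).
mask = np.zeros(test.shape[:2], np.uint8)
bgd = np.zeros((1, 65), np.float64)
fgd = np.zeros((1, 65), np.float64)
rect = (10, 10, test.shape[1] - 20, test.shape[0] - 20)
cv2.grabCut(test, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
fg_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype("uint8")
fg_gray = cv2.cvtColor(cv2.bitwise_and(test, test, mask=fg_mask), cv2.COLOR_BGR2GRAY)

# Step 2: SIFT keypoints + brute-force matching with Lowe's ratio test
# (SIFT is in the main OpenCV module from version 4.4 onwards).
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(ref, None)
kp2, des2 = sift.detectAndCompute(fg_gray, None)
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [p[0] for p in matches if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]

# The reference product with the most "good" matches is taken as the predicted ID.
print(f"good matches: {len(good)}")
```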
However, the results are not satisfactory.
This is the input image:
This is the output image:
As you can see, the bounding box generated by the Haar cascade does not cover the whole biscuit packet correctly.
Can you please tell me how to get correct bounding boxes using a Haar-cascade classifier? (I have a dataset of positive images, and the negative-images folder consists of persons and different climate conditions.)
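For reference, this is roughly how the detection step is invoked (a minimal sketch assuming the standard cv2.CascadeClassifier API; the cascade XML path and the detectMultiScale parameters are placeholders that would need tuning):

```python
import cv2

# Placeholder path to the trained cascade XML file.
cascade = cv2.CascadeClassifier("biscuit_cascade.xml")

img = cv2.imread("test_image.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.equalizeHist(gray)  # contrast normalisation often helps cascades

boxes = cascade.detectMultiScale(
    gray,
    scaleFactor=1.05,   # smaller step = slower but finer scale search
    minNeighbors=5,     # higher = fewer false positives, but detections may be missed
    minSize=(60, 60),   # ignore detections smaller than the smallest real packet
)

# Draw the detections so the bounding boxes can be inspected.
for (x, y, w, h) in boxes:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", img)
```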
I know that in my dataset each biscuit packet is a distinct product and there is only one image per product. Is this the reason why my Haar cascade is not performing well?
If yes, please specify the data preprocessing steps I should take.
Also, please suggest other foreground extraction algorithms that would solve my problem.