I am trying to detect an object in a video. I am using SURF as the feature detector and descriptor extractor, and BruteForce as the matcher. I tested my work with faces: I captured a picture of myself, and when I run the camera and point it toward me, my face gets detected and a rectangle is drawn around it. I tried another test: I captured an image of my mouse and resized it, but when I run the camera, it does not get detected.
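For reference, my detection and matching setup looks roughly like the following (a simplified sketch of what I am doing, not my exact code; the class and variable names are placeholders, and I am on the OpenCV 2.4 Java bindings):

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.DescriptorExtractor;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.FeatureDetector;

public class MatcherSetup {
    // SURF detector/extractor and a brute-force matcher, as described above
    FeatureDetector detector = FeatureDetector.create(FeatureDetector.SURF);
    DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.SURF);
    DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE);

    MatOfKeyPoint keypointsObject = new MatOfKeyPoint();
    MatOfKeyPoint keypointsScene = new MatOfKeyPoint();
    Mat descriptorsObject = new Mat();
    Mat descriptorsScene = new Mat();

    void match(Mat objectImage, Mat sceneFrame, MatOfDMatch matches) {
        // Detect keypoints and compute SURF descriptors for both images
        detector.detect(objectImage, keypointsObject);
        extractor.compute(objectImage, keypointsObject, descriptorsObject);
        detector.detect(sceneFrame, keypointsScene);
        extractor.compute(sceneFrame, keypointsScene, descriptorsScene);
        // Match the object (query) descriptors against the scene (train) descriptors
        matcher.match(descriptorsObject, descriptorsScene, matches);
    }
}
```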
The problems I am facing are:
1- Does the size of the query/object image matter in such cases? I am asking because the image I captured of myself is bigger than the one of the mouse, and the face gets detected while the mouse does not.
2- Regardless of which image I use as the query/object image, how do I display the camera preview with only the train/scene image, without the query/object image? I am asking because what I am getting is something like what is shown in the images posted below, while what I want is what is shown here. I checked the code in that link; it is in C++, but I followed the same steps. The tutorial uses the 'drawMatches' method, which has a Java counterpart, Features2d.drawMatches(), and both of them return a Mat object with the query/object image on the left side and the train/scene image on the right side, as also shown in the image I posted below.
What I want is to display the camera output without the query/object image: the area designated for the camera output should show only the train/scene frame captured from the camera, with the detected object outlined on it (I have put a rough sketch of what I think I need at the end of this post).
Please let me know how to solve these issues; I want to do something like what is shown in the tutorial I linked above.
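From reading the C++ tutorial, my understanding is that instead of drawing on the composite Mat returned by drawMatches, I should compute a homography from the good matches and project the object image's corners into the scene frame, then draw the outline directly on the camera frame. Something along these lines is what I am aiming for (a rough, untested sketch; 'goodMatches', 'objectImage', and 'cameraFrame' are placeholders, and I am assuming the OpenCV 2.4 Java API, where DMatch/KeyPoint live in org.opencv.features2d and line drawing is in Core):

```java
import java.util.LinkedList;
import java.util.List;

import org.opencv.calib3d.Calib3d;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.features2d.DMatch;
import org.opencv.features2d.KeyPoint;

public class SceneOnlyDrawing {

    // Draws the detected object's outline directly on the camera frame,
    // so no side-by-side object+scene composite is ever created.
    static void drawObjectOutline(Mat objectImage, Mat cameraFrame,
                                  MatOfKeyPoint keypointsObject,
                                  MatOfKeyPoint keypointsScene,
                                  MatOfDMatch goodMatches) {
        List<KeyPoint> kpObject = keypointsObject.toList();
        List<KeyPoint> kpScene = keypointsScene.toList();

        // Collect the matched keypoint coordinates from both images
        List<Point> objPoints = new LinkedList<Point>();
        List<Point> scenePoints = new LinkedList<Point>();
        for (DMatch m : goodMatches.toList()) {
            objPoints.add(kpObject.get(m.queryIdx).pt);
            scenePoints.add(kpScene.get(m.trainIdx).pt);
        }
        MatOfPoint2f objMat = new MatOfPoint2f();
        objMat.fromList(objPoints);
        MatOfPoint2f sceneMat = new MatOfPoint2f();
        sceneMat.fromList(scenePoints);

        // Homography mapping object-image coordinates to scene-frame coordinates
        Mat homography = Calib3d.findHomography(objMat, sceneMat, Calib3d.RANSAC, 3);

        // Project the four corners of the object image into the scene frame
        Mat objCorners = new Mat(4, 1, CvType.CV_32FC2);
        Mat sceneCorners = new Mat(4, 1, CvType.CV_32FC2);
        objCorners.put(0, 0, new double[]{0, 0});
        objCorners.put(1, 0, new double[]{objectImage.cols(), 0});
        objCorners.put(2, 0, new double[]{objectImage.cols(), objectImage.rows()});
        objCorners.put(3, 0, new double[]{0, objectImage.rows()});
        Core.perspectiveTransform(objCorners, sceneCorners, homography);

        // Draw the projected quadrilateral on the camera frame only
        Scalar green = new Scalar(0, 255, 0);
        Core.line(cameraFrame, new Point(sceneCorners.get(0, 0)), new Point(sceneCorners.get(1, 0)), green, 4);
        Core.line(cameraFrame, new Point(sceneCorners.get(1, 0)), new Point(sceneCorners.get(2, 0)), green, 4);
        Core.line(cameraFrame, new Point(sceneCorners.get(2, 0)), new Point(sceneCorners.get(3, 0)), green, 4);
        Core.line(cameraFrame, new Point(sceneCorners.get(3, 0)), new Point(sceneCorners.get(0, 0)), green, 4);
    }
}
```

Is this the right direction, i.e. returning cameraFrame to the preview instead of the Mat produced by Features2d.drawMatches()? If so, what am I missing to make it work for the mouse image as well?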