I'm writing a program (for my studies) that recognises traffic signs from an IP camera. For the moment, I need to recognise traffic signs like this one:
In my code, I apply a Hough transform to isolate the traffic sign with a mask.
Then I run a SURF comparison (adapted from the SURF sample in the OpenCV documentation) between the scene image and a few reference images of speed-limit signs (30, 50, 70, 90).
Here is an example of my reference object: http://www.noelshack.com/2015-05-1422561271-object-exemple.jpg
My questions are:
Is my approach sound? Is SURF really suited to this task? It seems to use a lot of resources.
I get false positives (for example, the 30 reference matches a 50 in the scene). How can I reduce them?