I am working on an ANPR application for UAE license plates, which differ from one another in shape and, in part, in colour. To build a prototype, I would like to use transfer learning with around 1,000 images of license plates. My first step is to detect the license plate and then extract it.
About this I have two doubts:
- How should the training images be prepared? Should they contain only the license plate, or the plate together with some surrounding parts of the car?
- Is there a minimum image size in terms of bytes? Is it okay to use images of only a few kilobytes?