I've been developing an Automatic License Plate Recognition (ALPR) system. I've successfully implemented license plate detection using YOLOv8 and am now moving on to the recognition stage.
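For context, I obtain the plate crops roughly like this (a minimal sketch assuming the Ultralytics YOLO API; the weights file and image paths are just placeholders):

```python
from ultralytics import YOLO
import cv2

# Custom plate-detection weights and input path are placeholders.
model = YOLO("plate_detector.pt")
frame = cv2.imread("car.jpg")

# Run detection and crop each predicted plate region.
results = model(frame)
for i, box in enumerate(results[0].boxes.xyxy.tolist()):
    x1, y1, x2, y2 = map(int, box)
    plate_crop = frame[y1:y2, x1:x2]
    cv2.imwrite(f"plate_{i}.jpg", plate_crop)
```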
My challenge lies in preprocessing and binarizing the cropped plate images, which are captured under widely varying conditions: different lighting, weather, and day-to-night transitions.
Preprocessing: What strategies might be effective for preprocessing these images under such varying conditions before they're fed into an OCR system like Tesseract?
Binarization: Are there specific techniques or best practices for binarizing these images, given the diversity of capture conditions?
Post-processing: After OCR, what is the best way to handle potential errors or inconsistencies in the output?
Any insights, best practices, or references to tackle these issues would be greatly appreciated.
Here are some of the license plate images I am trying to preprocess:
Thanks.
What I have tried:
I attempted basic preprocessing, namely grayscale conversion followed by global thresholding for binarization, roughly as in the sketch below. However, the results are subpar, especially for images taken in poor lighting or at night.
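This is a minimal sketch of my current attempt using OpenCV (the fixed threshold of 127 and the file names are just example values):

```python
import cv2

# Load a cropped plate image produced by the detector (path is a placeholder).
plate = cv2.imread("plate_crop.jpg")

# Step 1: grayscale conversion.
gray = cv2.cvtColor(plate, cv2.COLOR_BGR2GRAY)

# Step 2: global thresholding with a single fixed threshold.
# This works on well-lit plates but fails on dark or unevenly lit images,
# since one threshold cannot fit the whole image.
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

cv2.imwrite("plate_binary.png", binary)
```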
What I expected:
I expected the OCR output to be reasonably accurate after preprocessing and binarization, but because of the varying image conditions, the OCR performance falls well short of that.