I have a sample of two-dimensional, black-and-white/binary barcodes that have been photographed.
The (colour) photographs typically suffer from all the usual suspects: blurring, distortion, contrast issues/lighting gradients, and erosion.
I am trying to reconstruct the original barcodes, which were once computer-generated pixel arrays of black/white values.
We should be able to exploit the images' spatial-frequency content to infer the dimensions of each barcode pixel (module). The hope is then to restore the original more faithfully by convolving/filtering the image with a structuring element whose size is derived from the data itself.
Although this is a very broad topic, my question here is very specific:
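To make this concrete, here is a rough sketch of the sort of frequency-based estimate I have in mind; it is not a claim about the best method. It assumes the barcode is roughly axis-aligned and fills most of the frame, and estimate_module_size is just an illustrative name:

import cv2
import numpy as np

def estimate_module_size(gray, axis=1):
    """Rough estimate of the barcode module pitch along one axis.

    Module edges fall on a regular grid, so the summed gradient
    profile is quasi-periodic; its dominant FFT peak gives the pitch.
    """
    # Edge strength along the chosen axis (x-gradient for axis=1).
    grad = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3) if axis == 1 \
        else cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    profile = np.abs(grad).sum(axis=0 if axis == 1 else 1)
    profile -= profile.mean()            # remove the DC component

    spectrum = np.abs(np.fft.rfft(profile))
    # Skip the lowest bins (slow lighting gradients), then take the peak.
    lo = max(2, len(profile) // 200)
    k = lo + int(np.argmax(spectrum[lo:]))
    return len(profile) / k              # pitch in image pixels

if __name__ == "__main__":
    img = cv2.imread("barcode_photo.jpg", cv2.IMREAD_GRAYSCALE)
    px = estimate_module_size(img, axis=1)
    py = estimate_module_size(img, axis=0)
    print("estimated module size: %.1f x %.1f px" % (px, py))

For a pseudo-random module pattern the spectral peak is not guaranteed to sit exactly at the module frequency, which is part of what I am asking about.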
What is the best way to establish a structuring element from image data in OpenCV/Python, without using prior knowledge of it?
(Assume for now that the underlying pixel scale is, to a good approximation, spatially invariant.)
Note that I am not trying to execute the whole extraction pipeline: this question is simply about inferring an optimal structuring element from the data.
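Purely for illustration (and not suggesting this is the answer), once some pitch estimate exists it could be turned into a structuring element along these lines; the 0.6 scale factor and the choice of a closing are arbitrary placeholders:

import cv2

# Suppose estimate_module_size() (sketched above) returned ~7.3 px per module.
pitch = 7.3

# A square element somewhat smaller than one module, so that morphology
# cleans up small defects without merging neighbouring modules.
k = max(3, int(round(pitch * 0.6)) | 1)          # odd size, at least 3
se = cv2.getStructuringElement(cv2.MORPH_RECT, (k, k))

img = cv2.imread("barcode_photo.jpg", cv2.IMREAD_GRAYSCALE)
bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# The closing here is only an example; the right operation depends on the defects.
cleaned = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, se)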
For example, the spatial kernel could be used as input to an unsharp mask, a la Python unsharp mask
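For instance, something like the following, with the blur scale tied to the estimated module size (the pitch value and the sigma = pitch / 2 choice are just placeholders):

import cv2

img = cv2.imread("barcode_photo.jpg", cv2.IMREAD_GRAYSCALE)
pitch = 7.3                       # from the frequency estimate above (assumed)

# Blur on roughly the scale of one module, then subtract to boost module edges.
sigma = pitch / 2.0
blurred = cv2.GaussianBlur(img, (0, 0), sigma)
sharpened = cv2.addWeighted(img, 1.5, blurred, -0.5, 0)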
References:
(1-D ideas) http://answers.opencv.org/question/174384/how-to-reconstruct-damaged-barcode, http://www.windytan.com/2016/02/barcode-recovery-using-priori.html
(Similar idea) Finding CheckerBoard Points in opencv for any random ChessBoard( pattern size not known)
(Sort of but not really, and answer-less) OpenCV find image frequencies