I'm writing software for a solar panel inspection system and need to stitch together camera images taken in an electroluminescence machine. These images have very few features and low contrast (as shown below), so the OpenCV image stitcher does not work on them by itself.
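For reference, this is roughly how I invoke the stitcher at the moment (Python shown for brevity; file names are placeholders):

```python
import cv2

# Hypothetical input files; the real images come straight from the EL camera
img_left = cv2.imread("el_left.png")
img_right = cv2.imread("el_right.png")

# SCANS mode, since the images are flat and only translated relative to each other
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
status, panorama = stitcher.stitch([img_left, img_right])

# status usually comes back as cv2.Stitcher_ERR_NEED_MORE_IMGS,
# presumably because too few features are detected in the overlap
```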
The approximate overlapping area of both images is known, but I need the result to be as accurate as possible. I tried shifting one image over the other and computing different distance measures over the ROI, but without satisfactory results. The SSD distance does not work due to vignetting and differences in pixel intensity between the images. Normalized gradients and cross-correlation were not robust either.
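A minimal sketch of the shift-and-score search I mean, assuming a purely horizontal offset and a nominal overlap width in pixels (function names and the search range are made up for illustration):

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equally sized patches."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_horizontal_shift(img_left, img_right, overlap, search=40):
    """Slide the right image over the left one around the nominal overlap
    and return the shift with the highest correlation score."""
    h = min(img_left.shape[0], img_right.shape[0])
    best_shift, best_score = 0, -np.inf
    for d in range(-search, search + 1):
        w = overlap + d                  # candidate overlap width
        if w < 8:
            continue
        patch_l = img_left[:h, -w:]      # right edge of the left image
        patch_r = img_right[:h, :w]      # left edge of the right image
        score = ncc(patch_l, patch_r)
        if score > best_score:
            best_score, best_shift = score, d
    return best_shift, best_score
```

The score surface from this is very flat, so the maximum is not reliable.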
Any idea how to preprocess the images so that the stitcher works? Or is there another way to tackle this? Dark cells are not always present, so they are not a reliable feature.
There are vertical cell edges (nearly invisible due to the poor contrast), but I have no reliable method of detecting them either. If they could be detected, I could align the images on the cell edges, roughly along the lines of the sketch below.
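What I have in mind is summing the horizontal gradient over all rows so the faint vertical edges accumulate into peaks in a column profile; the thresholds and spacing below are guesses and this has not worked reliably for me:

```python
import cv2
import numpy as np

def vertical_edge_positions(img, smooth=5, min_distance=50):
    """Estimate x positions of vertical cell edges from a column-wise
    gradient profile (idea only; parameters are guesses)."""
    img = cv2.GaussianBlur(img, (smooth, smooth), 0)   # smooth must be odd
    gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)     # horizontal gradient
    profile = np.abs(gx).sum(axis=0)                   # accumulate over rows
    profile = (profile - profile.min()) / (np.ptp(profile) + 1e-9)
    # Simple peak picking: local maxima above a threshold, spaced apart
    peaks = []
    for x in range(1, len(profile) - 1):
        if (profile[x] > 0.5
                and profile[x] >= profile[x - 1]
                and profile[x] >= profile[x + 1]):
            if not peaks or x - peaks[-1] >= min_distance:
                peaks.append(x)
    return peaks
```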
Any help is much appreciated.