Given N x-ray images taken at different exposure doses, I must combine them into a single image that condenses the information from the N source images. If my research is right, this problem falls into the HDRI category.
My first approach is a weighted average. For starters, I'll work with just two frames.
Let A be the first image — the one with the lowest exposure, which is therefore set to weigh more in order to highlight details. Let B be the second, overexposed image, C the resulting image, and M the maximum possible pixel value. Then, for each pixel i:

w[i] = A[i] / M
C[i] = w[i] * A[i] + (1 - w[i]) * B[i]
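As a sanity check, the per-pixel weighting above can be sketched in a few lines of NumPy (the function name and the 8-bit default for M are my own choices, not part of the question):

```python
import numpy as np

def blend_exposures(a, b, max_val=255.0):
    """Blend a low-exposure image `a` with an overexposed image `b`.

    w[i] = A[i] / M, so pixels that are bright in `a` keep mostly `a`,
    while dark pixels in `a` fall back to the overexposed `b`.
    """
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    w = a / max_val                   # w[i] = A[i] / M
    return w * a + (1.0 - w) * b      # C[i] = w[i]*A[i] + (1 - w[i])*B[i]
```

Because the weight is computed per pixel, the whole blend is a single vectorized expression — no explicit loop over i is needed.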
An example result of applying this idea:
Notice how the result (third image) nicely captures the information from both source images.
The problem is that the second image has discontinuities around the object edges (unavoidable in overexposed images), and those carry over into the result. Looking closer...
The best-regarded HDR software seems to be Photomatix, so I played around with it, but no matter how I tweaked the settings, the discontinuities always appeared in the result.
I think I should somehow ignore the edges of the second image, but it must be done in a "smooth" way. I tried a simple threshold, but the result looks even worse.
What do you suggest? (Only open-source libraries, please.)