
I'm writing an application that averages/combines/stacks a series of exposures. This is commonly used to reduce noise in the resultant image.

However, it seems that to optimize the average/stack, the exposures are usually first normalized. This process apparently assigns a weight to each exposure and then combines them. I am guessing that it computes the overall intensity of each image, since the purpose is to match the intensities of all the images in the stack.

My question is: how can I incorporate an algorithm that will allow me to normalize a series of images? I guess the question can be generalized by instead asking "How can I normalize a series of readings?"

The outline I have in my head is as follows:

  • Compute the average of a reference image.
  • Divide the average of each frame by the average of the reference frame.
  • The result of each division is the weight for each frame.
  • Scale/Multiply each pixel in a frame by the weight found for that particular frame (a rough code sketch of this follows below).
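
In NumPy terms, the rough sketch I have in mind would be something like the code below (untested; note that I divide each frame by its weight, i.e. multiply by ref_mean/frame_mean, so that brighter frames are scaled down to the reference level before stacking):

    import numpy as np

    def normalize_to_reference(frames, ref_index=0):
        """Scale each frame so its mean intensity matches the reference frame's."""
        ref_mean = frames[ref_index].mean()
        normalized, weights = [], []
        for frame in frames:
            weight = frame.mean() / ref_mean   # steps 2-3: frame average / reference average
            normalized.append(frame / weight)  # step 4: bring the frame to the reference level
            weights.append(weight)
        return normalized, weights

    # Stacking the normalized frames is then a plain average:
    # stacked = np.mean(normalized, axis=0)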

Does this seem to make sense to anyone? I have been googling for the past hour but haven't found anything. I also looked at the indexes of various image processing books on Amazon, but that didn't turn up anything either.

saad

1 Answer


Each integration consists of signal and assorted noise - some is time-independent (e.g. bias or CCD readout noise), some time-dependent (e.g. dark current), and some is random (shot noise). The aim is to remove the noise and leave the signal. So you would first subtract the 'fixed' sources using dark frames (which will include dark current, readout noise and bias), leaving signal plus shot noise. Signal scales as flux times exposure time, shot noise as the square root of the signal

http://en.wikipedia.org/wiki/Shot_noise

so overall your signal/noise scales as the square root of the integration time (assuming your integrations are short enough that they are not saturated). So by adding frames you are simply increasing the exposure time, and hence the signal/noise ratio. You don't need to normalize first.
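
As a very rough sketch (illustrative only, assuming your frames are already aligned NumPy arrays of equal size), the whole procedure can be as simple as dark subtraction followed by a straight sum:

    import numpy as np

    def calibrate_and_stack(lights, darks):
        """Subtract a master dark from each light frame, then co-add the results."""
        master_dark = np.median(darks, axis=0)           # estimate of the 'fixed' sources
        calibrated = [frame - master_dark for frame in lights]
        # Summing N calibrated frames behaves like one exposure N times as long:
        # signal grows as N, shot noise as sqrt(N), so S/N improves as sqrt(N).
        return np.sum(calibrated, axis=0)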

To complicate matters, transient non-Gaussian noise is also present (e.g. cosmic ray hits). There are many techniques for dealing with these, but a common one is 'sigma-clipping', where you have an extra pass to calculate the mean and standard deviation of each pixel, and then reject outliers that are many standard deviations from the mean. Real signal will show Gaussian fluctuations around the mean value, whereas transients will show a large deviation in one frame of the stack. Maybe that's what you are thinking of?
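
A minimal single-pass sketch of sigma-clipping with NumPy (illustrative only; in practice you would tune the threshold and possibly iterate):

    import numpy as np

    def sigma_clip_stack(frames, nsigma=3.0):
        """Average a stack of frames, rejecting per-pixel outliers that lie more
        than nsigma standard deviations from the per-pixel mean (one pass)."""
        cube = np.stack(frames, axis=0)          # shape (n_frames, height, width)
        mean = cube.mean(axis=0)
        std = cube.std(axis=0)
        keep = np.abs(cube - mean) <= nsigma * std
        clipped = np.where(keep, cube, np.nan)   # outliers (e.g. cosmic ray hits) -> NaN
        return np.nanmean(clipped, axis=0)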

strmqm
  • Yes, I was thinking of sigma clipping indeed. I am now using a normalization method based on least squares: I calculate the best fit between the pixels of the reference frame and the frame to be processed. This provides me with an offset and a scale factor, which I apply to each pixel (add the offset, multiply by the scaling factor). The end result is an image that matches the reference frame in background level and overall brightness. It has worked quite well so far. – saad Jul 19 '11 at 19:13
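
A hypothetical sketch of the least-squares matching described in that comment (the function name and the use of np.polyfit are illustrative assumptions, not saad's actual code):

    import numpy as np

    def match_to_reference(frame, reference):
        """Least-squares fit reference ~= scale * frame + offset over all pixels,
        then apply that scale and offset to the frame."""
        scale, offset = np.polyfit(frame.ravel(), reference.ravel(), 1)
        return scale * frame + offset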