If you want the "physically correct" approach, then:
- create a FIFO of N images
- inside each scene redraw (assuming constant fps):
  - if the FIFO is already full, throw out the oldest image
  - put the raw rendered scene image into the FIFO
  - blend all the images in the FIFO together
    If N is big, then to speed things up you can also store the cumulative blend image of all the images inside the FIFO: add each newly inserted image to it and subtract each removed image from it. The cumulative image must have enough color bits per channel to hold the sum without overflow. In that case you render the cumulative image divided by N (see the sketch after this list).
- render the blended image to screen
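
Here is a minimal CPU-side sketch of the scheme above, including the running-sum speedup. The `Frame` type, the packed 8-bit RGB layout, and the class name are placeholders I made up for illustration; a real renderer would typically keep the FIFO and the accumulation buffer in GPU textures instead.

```cpp
#include <cstdint>
#include <deque>
#include <vector>

// Hypothetical frame type: tightly packed 8-bit RGB pixels.
struct Frame {
    std::vector<uint8_t> rgb;   // width * height * 3 bytes
};

// Motion-blur accumulator holding the last N frames plus a running sum,
// so each redraw costs one add and one subtract per pixel instead of N blends.
class BlurFifo {
public:
    BlurFifo(size_t width, size_t height, size_t n)
        : N(n), pixelCount(width * height * 3), sum(pixelCount, 0) {}

    // Call once per scene redraw with the freshly rendered raw frame.
    void push(const Frame& frame) {
        if (fifo.size() == N) {                    // FIFO full -> drop oldest
            const Frame& oldest = fifo.front();
            for (size_t i = 0; i < pixelCount; ++i)
                sum[i] -= oldest.rgb[i];           // subtract removed image
            fifo.pop_front();
        }
        fifo.push_back(frame);
        for (size_t i = 0; i < pixelCount; ++i)
            sum[i] += frame.rgb[i];                // add inserted image
    }

    // Produce the blended (motion-blurred) frame: the cumulative sum
    // divided by the number of frames currently stored.
    Frame blended() const {
        Frame out;
        out.rgb.resize(pixelCount);
        const uint32_t count = fifo.empty() ? 1 : static_cast<uint32_t>(fifo.size());
        for (size_t i = 0; i < pixelCount; ++i)
            out.rgb[i] = static_cast<uint8_t>(sum[i] / count);
        return out;
    }

private:
    size_t N;
    size_t pixelCount;
    std::deque<Frame> fifo;          // the last N raw frames
    std::vector<uint32_t> sum;       // cumulative blend, wide enough to avoid overflow
};
```

Each redraw you would call `push()` with the raw render and then draw the result of `blended()` to the screen.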
For constant fps the exposure time is t = N/fps. If you do not have constant fps, then you need a variable-size FIFO and you have to store the render time along with each image. If the sum of the render times of the images inside the FIFO exceeds the exposure time, throw the oldest image out...
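
A sketch of that variable-fps bookkeeping follows; the class and method names are assumptions, as is the policy of always keeping at least the newest frame even if it alone exceeds the exposure time.

```cpp
#include <deque>
#include <utility>

// Variable-fps variant: each entry stores its frame time (in seconds), and
// the FIFO is trimmed so that the stored frames together cover at most
// 'exposure' seconds. Frame is the same placeholder type as in the
// previous sketch.
template <typename Frame>
class TimedBlurFifo {
public:
    explicit TimedBlurFifo(double exposureSeconds) : exposure(exposureSeconds) {}

    void push(Frame frame, double frameTime) {
        fifo.emplace_back(std::move(frame), frameTime);
        total += frameTime;
        // Throw the oldest images out while the accumulated render time
        // exceeds the desired exposure time.
        while (fifo.size() > 1 && total > exposure) {
            total -= fifo.front().second;
            fifo.pop_front();
        }
    }

    // Frames currently contributing to the blur (blend them as before).
    const std::deque<std::pair<Frame, double>>& frames() const { return fifo; }

private:
    double exposure;                              // t = N / fps for constant fps
    double total = 0.0;                           // sum of stored frame times
    std::deque<std::pair<Frame, double>> fifo;
};
```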
This approach requires quite a lot of memory (the image FIFO) but does not need any additional per-object processing. Most blur effects instead fake all this inside a geometry shader or on the CPU, by blurring or rendering the moving objects differently, which affects performance and is sometimes a bit complicated to implement.