Let's say I'm using raymarching to render a field function. (This is on the CPU, not the GPU.) I have an algorithm like this crudely-written pseudocode:
pixelColour = arbitrary;
pixelTransmittance = 1.0;
t = initialStepSize;   // not 0, otherwise t*stepFactor would never advance
while (t < max_view_distance) {
    point = rayStart + t*rayDirection;
    emission, absorption = sampleFieldAt(point);
    pixelColour, pixelTransmittance =
        integrate(pixelColour, pixelTransmittance, emission, absorption);
    t = t * stepFactor;   // exponential stepping: later samples cover larger volumes
}
return pixelColour;
The logic is all really simple... but how does integrate()
work?
Each sample actually represents a volume in my field, not a point, even though the sample is taken at a point; therefore the effect on the final pixel colour will vary according to the size of the volume.
I don't know how to do this. I've had a look around, but while I've found lots of code which does it (usually on Shadertoy), it all does it differently and I can't find any explanations of why. How does this work, and more importantly, what magic search terms will let me look it up on Google?
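For concreteness, here is roughly the kind of integrate() I've pieced together from those Shadertoy examples, translated into C++. I'm assuming (perhaps wrongly) that the opacity the field returns is an absorption coefficient per unit length, that the step length has to be passed in explicitly, and that transmittance over a step should follow Beer-Lambert; the names Colour and stepLength are just mine:

    #include <cmath>

    struct Colour { double r, g, b; };

    // One front-to-back integration step. Assumes `absorption` is an extinction
    // coefficient per unit length and `emission` is the colour the medium emits
    // within this step.
    void integrate(Colour& pixelColour, double& pixelTransmittance,
                   const Colour& emission, double absorption, double stepLength)
    {
        // Beer-Lambert: fraction of light that survives this step of the medium.
        double stepTransmittance = std::exp(-absorption * stepLength);

        // Light emitted inside the step, attenuated by everything in front of it.
        double emitted = 1.0 - stepTransmittance;
        pixelColour.r += pixelTransmittance * emission.r * emitted;
        pixelColour.g += pixelTransmittance * emission.g * emitted;
        pixelColour.b += pixelTransmittance * emission.b * emitted;

        // Everything behind this step is now further occluded.
        pixelTransmittance *= stepTransmittance;
    }

Is that even the right shape of thing, or am I missing something fundamental about how the size of the sampled volume should be accounted for?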