Hope everyone is doing great. I've been pondering this for a while, out of curiosity more than anything. I've been using Maya with Arnold for a while now, just for hobby stuff, mostly simple renders to compare with my path tracer. I realized their renderer has this really nice feature that lets you see the image as it renders... progressively. It seems to start from a lower sampling and AA amount and then re-renders the image as it increases those parameters automatically. I thought it was really cool, and a nice way to show a preview of renders before they reach their maximum quality.

It made me very interested in doing the same for the path tracer I am working on. Currently it waits for the whole render to complete, after which it saves a simple PPM file to your drive.

My question now is: does anyone know how something like this can be done? I have tried my best to find out, and the only information I came up with was that OpenGL is involved somehow. I'm not looking to create the same thing as Maya, just a simple window that pops up as the render starts and progressively makes the image better.

Again, this is more curiosity than anything else, as much as I think it's really cool. Thanks :)
-
Stack Overflow is for _specific_ programming questions. Asking "how [this big project] can be done" is not specific. – Colonel Thirty Two Sep 21 '16 at 15:27
-
Got it. Already found an article on the matter. Reading it now. Thanks for your help. – hecatonchries Sep 21 '16 at 16:44
2 Answers
It is not restricted to OpenGL in any way. You design your renderer to run in a separate thread (possibly multiple threads or even multiple machines) and progressively send the partial results to the main thread. The main thread then creates a window that displays those results as they come in. No magic here.
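For example, a minimal sketch of that thread structure (RenderOnePass and PresentImage are hypothetical placeholders for your own sampling and display code, not any library API):

#include <atomic>
#include <mutex>
#include <thread>
#include <vector>

struct Pixel { float r, g, b; };

// Hypothetical placeholders: implement these with your own tracer and window code.
void RenderOnePass(std::vector<Pixel>& img, int width, int height);
void PresentImage(const std::vector<Pixel>& img, int width, int height);

std::vector<Pixel> sharedBuffer;   // latest partial result, written by the worker
std::mutex bufferMutex;            // guards sharedBuffer
std::atomic<bool> quit{false};

void RenderWorker(int width, int height)
{
    std::vector<Pixel> local(width * height);
    while (!quit)
    {
        RenderOnePass(local, width, height);           // accumulate one more sample per pixel
        std::lock_guard<std::mutex> lock(bufferMutex);
        sharedBuffer = local;                          // publish the partial result
    }
}

int main()
{
    const int width = 640, height = 480;
    sharedBuffer.resize(width * height);
    std::thread worker(RenderWorker, width, height);
    while (!quit)                                      // main/UI loop
    {
        std::vector<Pixel> snapshot;
        {
            std::lock_guard<std::mutex> lock(bufferMutex);
            snapshot = sharedBuffer;                   // grab the latest partial result
        }
        PresentImage(snapshot, width, height);         // blit via GDI, OpenGL, etc.
        // ... pump window events here; set quit = true when the window closes ...
    }
    worker.join();
}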

-
Oh ok. I was reading an article on Scratchapixel and it seems adaptive sampling is also involved somehow. Or is that a whole different aspect to this particular issue? – hecatonchries Sep 22 '16 at 10:45
-
@hecatonchries: it's more of an optimization on the renderer side -- you want to trace more rays where there are higher-frequency features in the image and fewer where the image is smoother. – Yakov Galka Sep 22 '16 at 10:54
-
Yeah, apparently. That's a really smart way to reduce noise, as I realized after adding the Cook-Torrance BRDF that I was getting way more noise. So I get it: they blend the adaptive sampling with the process you just described, and that gives the progressive rendering. Right? – hecatonchries Sep 22 '16 at 10:57
-
Anyways, I think I got the gist of the whole process. Thanks again very much. :) – hecatonchries Sep 22 '16 at 10:59
The preview image is simply the first round of samples in a Monte Carlo render (which Arnold is). This 'start off noisy, then improve quality' behaviour is not necessarily an intended feature. It exists with ALL Monte Carlo renderers, since the nature of unbiased sampling means you begin with some samples, many of which are likely to be inaccurate (producing noise in the image). Then, as more and more samples are fired into the scene (for each pixel), the result eventually converges on the anticipated result (the noise reduces and inaccurate samples contribute less and less).
Monte Carlo renderers would carry on rendering forever; however, after a certain number of samples each contribution becomes minor and can be ignored (the image settles on the actual result). This is why the image starts noisy (not many samples, a large proportion of them inaccurate) and then gradually improves in quality as more and more samples are used to estimate each pixel's colour.
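In formula form: the pixel value is estimated as the plain average of the samples taken so far, and the standard Monte Carlo result is that the error falls off as one over the square root of the sample count, which is why the first passes clean up quickly and later ones improve ever more slowly:

$$\hat{I}_N = \frac{1}{N}\sum_{i=1}^{N} f(x_i), \qquad \mathrm{error}\bigl(\hat{I}_N\bigr) = O\!\left(\tfrac{1}{\sqrt{N}}\right)$$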
Adaptive sampling is something else: an optimisation aimed at reducing the time it takes to converge on a result, i.e. as mentioned in the comments, firing more samples in regions which are likely to contribute more (a greater difference between a pixel's samples means more accuracy is needed, so calculate more samples for that pixel).
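A minimal sketch of one way to drive this (the per-pixel statistics and names here are my own illustration, not how Arnold does it): keep a running mean and mean-of-squares per pixel, and keep firing rays only at pixels whose sample variance is still high:

struct PixelStats { float mean = 0.0f, meanSq = 0.0f; int n = 0; };

// Returns true while the pixel's estimate is still too uncertain.
// 'threshold' is a user-tuned quality knob; the stats track e.g. luminance.
bool NeedsMoreSamples(const PixelStats& s, float threshold)
{
    if (s.n < 16) return true;                      // always take a minimum number of samples
    float variance = s.meanSq - s.mean * s.mean;    // sample variance
    return (variance / s.n) > threshold;            // variance of the mean shrinks as 1/n
}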
P.S. Keep looking at Scratch-a-pixel. It's an excellent resource.
P.P.S. OpenGL could also be used to aid in texture/image analysis (deciding which pixels to sample more, etc.), or to accelerate intersection tests by drawing geometry to off-screen buffers (just two of the ways I have used it in the past). However, this is down to the implementation; by default OpenGL offers nothing a ray-tracing system requires.
In regard to displaying your render:
First, create a frame buffer with the same dimensions as the output image you require. This will effectively be an uncompressed RGB24 or RGBA32 image. Use a format which matches your desired display output (so a copy can be done with limited latency, and no conversion/processing is required for direct display). I would also keep, alongside each pixel, one extra piece of meta information: the number of samples currently accumulated for that pixel. This allows results to populate the frame buffer independently of each other, i.e. you can fire more rays at the pixels that need them (adaptive), and you can present the contents of the frame buffer whenever desired while continuing to sample pixels within the same rendering context (progressive).
This frame buffer should persist across cycles of your main loop, so that results from each loop are accumulated into it. For standard jittered-grid sampling, the result of a single pixel is usually the sum of all the samples for that pixel divided by the total number of samples (other sampling methods may weight samples accordingly).
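Written out, the per-sample update the code below performs is just the incremental form of that average:

$$\bar{c}_{n+1} = \frac{n\,\bar{c}_n + c_{n+1}}{n + 1}$$

where \(\bar{c}_n\) is the pixel's current colour after n samples and \(c_{n+1}\) is the new sample's contribution.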
To present this image to the user, the details depend on which API you use, but all you need to do is display the framebuffer the same way you would display an image/bitmap. Ways I have personally done it:

- Draw a textured quad in OpenGL, using the framebuffer as the texture (so you need to update the texture with the contents of the framebuffer each frame).
- Use Windows GDI to render a DIB bitmap to a control.
- Output to an uncompressed image format (this can be done quickly with binary PPM, or TGA/TIFF/uncompressed bitmap, by copying the contents of the frame buffer directly) or a compressed format such as PNG or JPG; a minimal PPM writer is sketched below.
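For the last option, a minimal binary PPM (P6) dump might look like this; it assumes the float RGB 'pixel' framebuffer declared in the listing further down, with channels clamped to [0,1]:

#include <algorithm>
#include <cstdio>
#include <vector>

// Minimal binary PPM (P6) writer for a float RGB framebuffer in [0,1].
// 'pixel' is the struct declared in the listing below.
void WritePPM(const char* path, const std::vector<pixel>& fb, int w, int h)
{
    FILE* f = std::fopen(path, "wb");
    if (!f) return;
    std::fprintf(f, "P6\n%d %d\n255\n", w, h);
    for (const pixel& p : fb)
    {
        unsigned char rgb[3] = {
            (unsigned char)(std::min(p.red,   1.0f) * 255.0f),
            (unsigned char)(std::min(p.green, 1.0f) * 255.0f),
            (unsigned char)(std::min(p.blue,  1.0f) * 255.0f) };
        std::fwrite(rgb, 1, 3, f);
    }
    std::fclose(f);
}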
Here is some code; the implementation depends on which APIs you choose to employ, however hopefully this is pseudo enough to describe what's going on with a bit of detail. It's C++-esque.
Declarations/Definitions.
#include <vector> // for the dynamically sized framebuffer below

// number of pixels in the resultant final render image.
unsigned int numberOfPixels = imageWidth * imageHeight;
// number of channels of the output image (also the ray result); RGB = 3 channels.
unsigned int channelSize = 3;
// RGB pixel. Each channel/colour component is in the range 0 <= ... <= 1
struct pixel
{
    float red;
    float green;
    float blue;
};
// framebuffer, 3 channels RGB. std::vector is used because the image
// dimensions are not compile-time constants.
std::vector<pixel> frameBuffer(numberOfPixels);
// framebuffer meta data: number of samples accumulated for each pixel.
std::vector<int> pixelSampleCount(numberOfPixels);
Then, in your init routine, initialize the framebuffer. This sets it to a black image (important, since we want to add the first samples to 0,0,0).
// your init routine
...
for (unsigned int p = 0; p < numberOfPixels; ++p)
{
    // initialise the framebuffer to black (0,0,0)
    frameBuffer[p].red   = 0.0f;
    frameBuffer[p].green = 0.0f;
    frameBuffer[p].blue  = 0.0f;
    // set the sample count to 0
    pixelSampleCount[p] = 0;
}
...
Then in the main loop/cycle.
// your main loop
...
// Main loop... each cycle we cast a single sample for each pixel. Of course you can gather as many
// sample results as you want per cycle if you intelligently manage the casting (adaptive); just ensure
// each cast knows which pixel it is contributing to, so that when it comes to accumulating the sample
// result it is added to the correct pixel and the correct sample count is incremented.
for (unsigned int x = 0; x < imageWidth; ++x)
{
    for (unsigned int y = 0; y < imageHeight; ++y)
    {
        // get the result of the sample for this pixel (e.g. cast the ray for this pixel, jittered
        // according to the sampling method). Ultimately each sample needs to be different (preferably
        // unique and random) from the previous cycle, and will return a different result.
        pixel castResult = GetSampleResult(x, y, ...); // aka cast the ray and get the resultant 'colour'
        // Get the current pixel from the frame buffer, ready to amend it with the new sample/contribution.
        unsigned int currentPixelIndex = (y * imageWidth) + x;
        pixel& pixelOfSample = frameBuffer[currentPixelIndex];
        // To correctly accumulate this sample, we must first multiply (scale up) each colour component
        // by the number of samples/contributions to this pixel. We can then add the sample result and
        // divide (scale down) the result (now the sum of all samples) by the new number of samples.
        pixelOfSample.red   = ((pixelOfSample.red   * pixelSampleCount[currentPixelIndex]) + castResult.red)   / (pixelSampleCount[currentPixelIndex] + 1);
        // repeat for the rest of the components in the pixel, i.e. for green and blue in this case.
        pixelOfSample.green = ((pixelOfSample.green * pixelSampleCount[currentPixelIndex]) + castResult.green) / (pixelSampleCount[currentPixelIndex] + 1);
        pixelOfSample.blue  = ((pixelOfSample.blue  * pixelSampleCount[currentPixelIndex]) + castResult.blue)  / (pixelSampleCount[currentPixelIndex] + 1);
        // increment the sample count for this pixel.
        ++pixelSampleCount[currentPixelIndex];
    }
}
// And then send this to your GDI/OpenGL/image output etc.
// For displaying directly via GDI, use BitBlt(...) with SRCCOPY.
// For displaying via OpenGL, upload the framebuffer with glTexSubImage2D(...)
// (glTexImage2D(...) for the initial allocation) and draw a textured quad.
glTexSubImage2D(...); // for OpenGL
BitBlt(...);          // for Windows GDI
// On the next loop you can simply display the framebuffer again (it will look the same as the previous
// cycle), or fire another load of rays, accumulate them into the framebuffer, and display that, giving
// you a progressively refined image.
...
N.B. GL and GDI expect the image in different orientations, so you may need to vertically flip the image (reverse the row order) for it to display correctly. This depends on how you store your framebuffer internally and which API you use to display it.
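For the OpenGL route, the per-frame texture update might look like this (a minimal sketch: it assumes an existing GL context and a texture allocated once up front; framebufferTexture is an illustrative name, and frameBuffer is the std::vector from the listing above):

// once, at startup: allocate a float RGB texture for the render
glBindTexture(GL_TEXTURE_2D, framebufferTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, imageWidth, imageHeight, 0, GL_RGB, GL_FLOAT, nullptr);

// each cycle: upload the latest framebuffer contents...
glBindTexture(GL_TEXTURE_2D, framebufferTexture);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, imageWidth, imageHeight, GL_RGB, GL_FLOAT, frameBuffer.data());
// ...then draw a fullscreen textured quad and swap buffers.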
This hopefully shows how to write a system that progressively displays the contents of an image as more and more detail is calculated for it. It applies to any form of ray tracing (as said before, Monte Carlo will produce noise given the nature of the simulation; biased renderers may or may not, depending on how they work; usually it's not much more than anti-aliasing the image, although noise can be present with biased renderers too).

-
I already know about the sampling stuff. My question was really about how such a window which does this stuff could be coded, that's all. Is it a type of window which uses its game loop to update the pixels from a CPU thread, or is it some other way? I have some theories and ideas, but I don't know if I should create a normal Win32 window or if another path exists, since my path tracer is CPU based. And yeah... Scratch a pixel is excellent. They help a lot. – hecatonchries Sep 23 '16 at 15:32
-
P.S. The code in this answer renders in the same thread, so some care will be needed in order not to throttle the responsiveness of the application. Of course you can easily do the rendering in another thread and populate the framebuffer from there. – lfgtm Sep 26 '16 at 11:54
-
This really helped a lot. You are one of a kind! Still working on some kinks and errors, but I will sort them out soon. Thanks again for your help. – hecatonchries Oct 07 '16 at 22:15