
Can someone please help me with the depth-of-field implementation in my ray tracer?

I am using a simple pinhole camera model, as shown below. I need to know how I can generate a DOF effect using a pinhole camera model. (The image is taken from Wikipedia.)

[figure: pinhole camera model, from Wikipedia]

My basic ray tracer is working fine.

I have the eye at (0,0,0,1) with ray direction (dx, dy, 1.0f, 0.0f), where

float dx = (x * (1.0 / Imgwidth)) - 0.5;
float dy = (y * (1.0 / Imgheight)) - 0.5;

Now everywhere I read, people talk about sampling a lens that should be placed between the image plane and the scene, for example as shown below (image taken from Wikipedia):

How can I introduce a lens in front of the image plane if the rays all come from one single point (the camera, or eye)?

If someone can help, that would be great!

Thank you!

sinner

2 Answers


There are 3 ways to do this:

  1. Physically correct DOF requires multiple renders of the scene. Cameras have depth of field because they are not really a pinhole model; instead, they have an aperture that admits light within a certain diameter. This is equivalent to taking a pinhole camera, taking a lot of pictures with the pinhole placed at different points within that aperture, and averaging them.

    So basically, you need to rotate your camera slightly around your focus point multiple times, render the entire scene each time, accumulate the output colour in a buffer, and divide all values by the number of renders (see the first sketch after this list).

  2. A simple post-processing effect - render not only the scene colour but also its depth, then use that depth to control the blur strength per pixel. Note that this technique requires some tricks to get seamless transitions between objects at different blur levels (see the second sketch after this list).

  3. A more complex post-processing effect - create a depth buffer as before, then use it to render an aperture-shaped particle for every pixel of the original scene, using the depth to control the particle size just as you would use it for blur strength.

(1) gives the best results but is the most expensive technique; (2) is the cheapest; (3) is quite tricky but offers a good balance of cost and effect.
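
Here is a minimal sketch of approach (1) for a ray tracer: rather than literally re-rendering the whole scene, you can jitter the ray origin over a disk-shaped aperture per pixel and average the samples, which amounts to the same computation. The Vec3 type, the trace() stub and the shadePixelDOF() helper are illustrative stand-ins, not code from this post:

#include <cstdlib>
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s)       const { return {x * s, y * s, z * s}; }
};

// Stand-in for your ray tracer's shading entry point.
Vec3 trace(const Vec3& origin, const Vec3& dir) { return {0.0f, 0.0f, 0.0f}; }

// Uniform random float in [0, 1].
float rnd() { return static_cast<float>(rand()) / RAND_MAX; }

// Shades one pixel with depth of field; px and py are the pixel centre in
// [-0.5, 0.5] as in the question. apertureRadius = 0 gives back the plain
// pinhole camera.
Vec3 shadePixelDOF(float px, float py, float focalDistance,
                   float apertureRadius, int numSamples)
{
    // The pinhole ray through this pixel; since dir.z == 1, the point P
    // that must stay in focus lies on the plane z = focalDistance.
    Vec3 eye{0.0f, 0.0f, 0.0f};
    Vec3 dir{px, py, 1.0f};
    Vec3 P = eye + dir * focalDistance;

    Vec3 sum{0.0f, 0.0f, 0.0f};
    for (int i = 0; i < numSamples; ++i) {
        // Jitter the ray origin uniformly over a disk-shaped aperture
        // centred on the eye (the sqrt makes the disk sampling uniform).
        float angle  = 2.0f * 3.14159265f * rnd();
        float radius = apertureRadius * std::sqrt(rnd());
        Vec3 origin{radius * std::cos(angle), radius * std::sin(angle), 0.0f};

        // Every sample ray is aimed at P, so geometry at the focal
        // distance stays sharp while everything nearer or farther blurs.
        sum = sum + trace(origin, P - origin);
    }
    return sum * (1.0f / numSamples);   // average of all aperture samples
}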
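
And a minimal sketch of approach (2), as a post-process over already-rendered colour and depth buffers; the buffer layout and the focusDepth/maxRadius parameters are illustrative assumptions, and a single channel is shown for brevity:

#include <vector>
#include <cmath>
#include <algorithm>

// Depth-driven blur: pixels far from the focus depth get a larger kernel.
void depthBlur(const std::vector<float>& color, const std::vector<float>& depth,
               std::vector<float>& out, int w, int h,
               float focusDepth, float maxRadius)
{
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            // Blur radius grows with distance from the plane in focus.
            float d = depth[y * w + x];
            int r = static_cast<int>(std::min(maxRadius,
                        std::fabs(d - focusDepth) / focusDepth * maxRadius));

            // Simple box filter of radius r (a disk kernel looks nicer).
            float sum = 0.0f;
            int count = 0;
            for (int ky = -r; ky <= r; ++ky) {
                for (int kx = -r; kx <= r; ++kx) {
                    int sx = std::clamp(x + kx, 0, w - 1);
                    int sy = std::clamp(y + ky, 0, h - 1);
                    sum += color[sy * w + sx];
                    ++count;
                }
            }
            out[y * w + x] = sum / count;
        }
    }
}

This naive gather blurs straight across object boundaries, which is exactly the seam problem mentioned in (2); approach (3) avoids it by scattering aperture-shaped sprites instead.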

IneQuation
  • Isn't it similar to taking a few sample points around my camera (or eye) position, and then rendering my scene from all of these positions? That is, defining a focus point somewhere in the scene and, for each pixel in the image plane, sending a ray from each sample point to the point in focus. – sinner Apr 04 '12 at 14:03
  • Well, yes, this is actually the same - of course you can accumulate the colour on a per-pixel basis, instead of accumulating renders of the entire scene. – IneQuation Apr 04 '12 at 14:10
  • I mean: isn't it similar to taking a few sample points around my camera (or eye) position, and then rendering my scene from all these positions by sending a ray from each sample point to the focus point through each pixel in the image plane? I am sorry, but if you can explain it in terms of the eye position, image plane, focal plane and scene, it will be much easier for me to understand. – sinner Apr 04 '12 at 14:13
  • I'll try that and will be back with more doubts :P – sinner Apr 04 '12 at 14:17
  • If I understand you correctly then yes, we are talking about the same thing. :) Just remember - you need to add the pixels from the different renders together and divide them by the number of renders after you add them all. – IneQuation Apr 04 '12 at 14:18
  • In this article on [DOF](http://cg.skeelogy.com/depth-of-field-using-raytracing/) they tell a different story. Is that also a correct way to do this? – sinner Apr 04 '12 at 14:18
  • They are actually saying the very same thing. It's what I described as number 1. ;) – IneQuation Apr 04 '12 at 14:29
  • Now I am getting copies of the same sphere at different depths :-\ – sinner Apr 04 '12 at 14:44
  • Here is the output I am getting: http://gurpreetbagga.wordpress.com/2012/04/04/my-raytracer-dof/ There are multiple copies of the same four spheres I am trying to render at different depths :-\ – sinner Apr 04 '12 at 14:55
  • You are obviously not implementing any of the algorithms described. It does not suffice to scramble the directions of the rays, you need to blend the resulting colours together. – IneQuation Apr 05 '12 at 06:55

Here is the code I wrote to generate DOF.

void generateDOFfromEye(Image& img, const Camera& camera, Scene scene, float focusPoint)
{
    float pixelWidth = 1.0f / (float) img.width;
    float pixelHeight = 1.0f / (float) img.height;

    for (int y = 0; y < img.height; ++y)
    {
        for (int x = 0; x < img.width; ++x)
        {
            Color output(0, 0, 0, 0);
            img(x, y) = Color(0, 0, 0, 0);

            // Center of the current pixel
            float px = (x * pixelWidth) - 0.5;
            float py = (y * pixelHeight) - 0.5;

            Ray cameraSpaceRay = Ray(Vector(0, 0, 0, 1), Vector(px, py, 1.0f, 0.0f));
            Ray ray = camera.Transform() * cameraSpaceRay;

            int depth = 0;
            int focaldistance = 2502;
            Color blend(0, 0, 0, 0);

            // 16 random samples per pixel over the aperture to add DOF
            for (int i = 0; i < 16; i++)
            {
                // Random values between [-1, 1]
                float rw = (static_cast<float>(rand() % RAND_MAX) / RAND_MAX) * 2.0f - 1.0f;
                float rh = (static_cast<float>(rand() % RAND_MAX) / RAND_MAX) * 2.0f - 1.0f;

                // Since the eye position is (0,0,0,1), I generate samples
                // around that point within a 3x3-pixel aperture window.
                float dx = (rw * 3 * pixelWidth) - 0.5;
                float dy = (rh * 3 * pixelHeight) - 0.5;

                // Point P in the scene on which I want to focus
                Vector P = Vector(0, 0, 0, 1) + focusPoint * ray.Direction();
                Vector dir = P - Vector(dx, dy, 0.0f, 1.0f);

                ray = Ray(Vector(dx, dy, 0.0f, 1.0f), dir);
                ray = camera.Transform() * ray;

                // Call the Phong shader to shade along this ray
                blend += phongShader(scene, ray, depth, output);
            }
            blend /= 16.0f;

            img(x, y) += blend;
        }
    }
}

Now I don't see anything wrong in the code, but the result I am getting is just a blurred image for any focusPoint > 500, as shown below:

[image: blurred render output]

If you can tell me what is wrong in this code, it will be very helpful :) Thanks!
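
For reference, a hedged sketch of how the inner sampling loop could be corrected, keeping the Ray, Vector, Color and Camera interfaces from the code above (lensRay is a new name introduced here). The two key changes: compute the focal point P once per pixel from the fixed camera-space ray, instead of from the world-space ray that the loop keeps overwriting, and drop the stray "- 0.5" so the aperture samples are centred on the eye:

// Focal point computed ONCE per pixel, from the camera-space pixel ray,
// in the same space as the lens samples below.
Vector eye(0, 0, 0, 1);
Vector P = eye + focusPoint * cameraSpaceRay.Direction();

Color blend(0, 0, 0, 0);
for (int i = 0; i < 16; i++)
{
    // Random values in [-1, 1]; plain rand(), not rand() % RAND_MAX.
    float rw = (static_cast<float>(rand()) / RAND_MAX) * 2.0f - 1.0f;
    float rh = (static_cast<float>(rand()) / RAND_MAX) * 2.0f - 1.0f;

    // Aperture offsets centred on the eye: no "- 0.5" here, otherwise
    // every sample origin is shifted half a screen to one corner.
    float dx = rw * 3 * pixelWidth;
    float dy = rh * 3 * pixelHeight;

    // Build the lens ray in camera space, then transform it to world
    // space once; the per-pixel ray is left untouched.
    Vector origin(dx, dy, 0.0f, 1.0f);
    Ray lensRay = camera.Transform() * Ray(origin, P - origin);

    blend += phongShader(scene, lensRay, depth, output);
}
blend /= 16.0f;
img(x, y) = blend;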

sinner