I am trying to write an OpenCL kernel for raymarching. Everything works, except there is noticeable fisheye distortion in the resulting images, as seen in this example: (This is supposed to be a cube.)
The problem lies in the way I construct the direction vector for each ray.
Currently, I give the kernel the camera's direction as pitch and yaw (pitch and yaw in my code). Then, based on the field of view (fov), the coordinates of the pixel the kernel is calculating (ix and iy), and the width and height of the entire frame (width, height), I compute the pitch and yaw for the direction of the ray.
Finally, I construct a unit vector from the pitch and yaw produced by those calculations.
(varfloat represents either float or double, depending on whether the kernel is run with single or double floating-point precision.)
For the image above, fov was Pi/3 and width and height were both 500.
unsigned int ix = get_global_id(0);
unsigned int iy = get_global_id(1);
//PROBLEM LIES IN THESE 3 LINES:
varfloat y = yaw - fov/2.0 + fov*ix/width; //get yaw using horizontal fov
varfloat f = (fov * height) / width; //calculate vertical fov from horizontal fov scaled proportionally based on width and height
varfloat p = pitch - f/2.0 + f*iy/height; //get pitch using vertical fov
varfloat3 direction = {sin(y)*cos(p), cos(y)*cos(p), sin(p)}; //unit vector for ray direction
Can someone tell me how I should be calculating the pitch and yaw for the direction vector in order to eliminate the distortion?