I'm trying to implement a GPU raytracer that traverses an octree given its AABB. I'm using "An Efficient Parametric Algorithm for Octree Traversal" as the basis for this project.
During the implementation, I ran into problems with negative direction components. My ray origin is at (0, 0, -3), and the ray direction is calculated from the fragment coordinates of the pixel, so in the end all my directions point toward -z, with x and y components derived from the pixel coordinates. I'm also using FOV calculations to apply perspective.
The problem presents itself as follows: for the positive quadrant of direction values (where the x and y direction components are positive), the ray-box intersection function works properly, but in any other quadrant, i.e. with any negative component, it fails: the calculated pixel is just white, which marks a point of no intersection.
I expected that, by applying the transformation the paper specifies for negative direction components, the function would work and produce the right color, but that is not what happens.
Here follows the code (GLSL):
// Ray and box structures
struct Ray {
    vec3 origin;
    vec3 dir;
    vec3 invDir;
};

struct Box {
    vec3 minPoint;
    vec3 maxPoint;
    vec3 centerPoint;
    vec3 dimensions;
};
// Ray-box intersection (based on the paper's implementation)
bool _rayBoxIntersection(Ray ray, Box box, inout vec3 t0, inout vec3 t1, inout float tmin, inout float tmax) {
    t0 = (box.minPoint - ray.origin) * ray.invDir;
    t1 = (box.maxPoint - ray.origin) * ray.invDir;
    tmin = max(max(t0.x, t0.y), t0.z);
    tmax = min(min(t1.x, t1.y), t1.z);
    return tmin < tmax;
}
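For comparison, I also checked the intersection math on the CPU against the standard slab test, which swaps the per-axis entry/exit values whenever a direction component is negative (a Python sketch for illustration; this variant is not from the paper):

```python
def slab_test(origin, direction, bmin, bmax):
    # Standard slab test: swap t0/t1 on axes where the direction is negative,
    # so t0 always holds the entry distance and t1 the exit distance.
    tmin, tmax = -float("inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, bmin, bmax):
        inv = 1.0 / d                                  # assumes d != 0
        t0, t1 = (lo - o) * inv, (hi - o) * inv
        if inv < 0.0:
            t0, t1 = t1, t0
        tmin, tmax = max(tmin, t0), min(tmax, t1)
    return tmin < tmax

box_min, box_max = (-1.0,) * 3, (1.0,) * 3
print(slab_test((0.0, 0.0, -3.0), (0.2, -0.2, 1.0), box_min, box_max))  # True: hits despite dir.y < 0
print(slab_test((0.0, 0.0, -3.0), (3.0, 0.1, 1.0), box_min, box_max))   # False: ray passes to the side
```

So a ray with a negative y component does geometrically hit the box; the difference must be in how I handle the negative components.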
// Raytrace scene function
vec3 rayTraceScene(Ray ray) {
    Box volume;
    volume.minPoint = ubo.octreeMinPoint;
    volume.maxPoint = ubo.octreeMaxPoint;
    volume.centerPoint = (volume.maxPoint + volume.minPoint) / 2.0;
    volume.dimensions = volume.maxPoint - volume.minPoint;

    // Not calculating the mask needed for correct traversal yet
    if (ray.dir.x < 0) {
        ray.origin.x = volume.dimensions.x - ray.origin.x;
        ray.dir.x *= -1.0;
    }
    if (ray.dir.y < 0) {
        ray.origin.y = volume.dimensions.y - ray.origin.y;
        ray.dir.y *= -1.0;
    }
    if (ray.dir.z < 0) {
        ray.origin.z = volume.dimensions.z - ray.origin.z;
        ray.dir.z *= -1.0;
    }

    vec3 t0, t1;
    float tmin, tmax;
    if (!_rayBoxIntersection(ray, volume, t0, t1, tmin, tmax)) return vec3(1, 1, 1); // white = no intersection
    return vec3(1, 0, 0); // red = hit
}
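To make it easier to see what those three mirroring branches do, here is a small CPU-side trace (Python, just replicating the shader logic above) for a ray with a negative y component; it reproduces exactly the white "no intersection" result I see:

```python
def mirror(origin, direction, dims):
    # Replicates the shader's per-axis mirroring for negative components:
    # origin.i = dims.i - origin.i, dir.i = -dir.i
    o, d = list(origin), list(direction)
    for i in range(3):
        if d[i] < 0.0:
            o[i] = dims[i] - o[i]
            d[i] = -d[i]
    return o, d

def slab(origin, direction, bmin, bmax):
    # Same math as _rayBoxIntersection (no per-axis swapping)
    t0 = [(bmin[i] - origin[i]) / direction[i] for i in range(3)]
    t1 = [(bmax[i] - origin[i]) / direction[i] for i in range(3)]
    return max(t0) < min(t1)

o, d = mirror((0.0, 0.0, -3.0), (0.2, -0.2, 1.0), (2.0, 2.0, 2.0))
print(o, d)   # [0.0, 2.0, -3.0] [0.2, 0.2, 1.0] -- origin.y jumps to 2.0
print(slab(o, d, (-1.0,) * 3, (1.0,) * 3))  # False -> the white pixel
```

After the mirroring, the origin's y component ends up at 2.0, outside the box on the +y side, which is why the subsequent test reports no intersection.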
// main function and ray declaration
// Please ignore the fact that I'm using plain constants for the FOV and screen dimensions
void main() {
    vec2 fragCoord = vec2(
        (2.0 * fragTexCoord.x - 1.0) * tan(radians(90.0 / 2.0)) * (800.0 / 600.0),
        (1.0 - 2.0 * fragTexCoord.y) * tan(radians(90.0 / 2.0))
    );

    Ray ray;
    ray.origin = vec3(0, 0, -3);
    ray.dir = normalize(vec3(fragCoord.xy, 1.0));
    ray.invDir = 1.0 / ray.dir;

    vec3 color = rayTraceScene(ray);
    outColor = vec4(color, 1.0);
}
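For reference, a quick CPU-side check of the ray setup (Python, using the same constants as the shader):

```python
import math

def make_ray_dir(u, v, fov_deg=90.0, aspect=800.0 / 600.0):
    # u, v in [0, 1], like fragTexCoord; same math as in main()
    t = math.tan(math.radians(fov_deg / 2.0))
    x = (2.0 * u - 1.0) * t * aspect
    y = (1.0 - 2.0 * v) * t
    n = math.sqrt(x * x + y * y + 1.0)     # normalize (x, y, 1)
    return (x / n, y / n, 1.0 / n)

print(make_ray_dir(0.5, 0.5))  # (0.0, 0.0, 1.0): the central ray points down +z, from z = -3 toward the box
print(make_ray_dir(0.0, 1.0))  # bottom-left corner: x and y components are both negative
```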
Running this code with an octree whose AABB has min point (-1, -1, -1) and max point (1, 1, 1), I get the following result:
It's obviously wrong, because the parts of the AABB that lie in the negative quadrants are not rendered.