I'm writing a 3D raytracer as a personal learning project (Enlight) and have run into an interesting problem related to doing intersection tests between a ray and a scene of objects.
The situation is:
- I have a number of primitives that rays can intersect with (spheres, boxes, planes, etc.) and groups thereof. Collectively I'm calling these scene objects.
- I want to be able to transform scene objects with arbitrary affine transformations by wrapping them in a `Transform` object (importantly, this will enable multiple instances of the same primitive(s) to be used in different positions in the scene, since primitives are immutable)
- Scene objects may be stored in a bounding volume hierarchy (i.e. I'm doing spatial partitioning)
- My intersection tests work with `Ray` objects that represent a partial ray segment (start vector, normalised direction vector, start distance, end distance)
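
For reference, here's a minimal sketch of what such a `Ray` might look like in Java (the `Vec3` helper and all names here are my own assumptions, not Enlight's actual classes):

```java
// Minimal sketch of an immutable partial ray segment as described above.
final class Vec3 {
    final double x, y, z;
    Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    Vec3 add(Vec3 o)     { return new Vec3(x + o.x, y + o.y, z + o.z); }
    Vec3 scale(double s) { return new Vec3(x * s, y * s, z * s); }
}

final class Ray {
    final Vec3 origin;    // start vector
    final Vec3 direction; // normalised direction vector
    final double tMin;    // start distance along the segment
    final double tMax;    // end distance along the segment

    Ray(Vec3 origin, Vec3 direction, double tMin, double tMax) {
        this.origin = origin;
        this.direction = direction;
        this.tMin = tMin;
        this.tMax = tMax;
    }

    // Point on the ray at parameter t: origin + t * direction
    Vec3 pointAt(double t) { return origin.add(direction.scale(t)); }
}
```

Immutability fits the "primitives are immutable" design, but it's also what forces a fresh `Ray` to be allocated whenever the segment needs to change.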
The problem is that when a ray hits the bounding box of a `Transform` object, it looks like the only way to do an intersection test with the transformed primitives contained within is to transform the `Ray` into the transformed co-ordinate space. This is easy enough, but then if the ray doesn't hit any transformed objects I need to fall back to the original `Ray` to continue the trace. Since `Transform`s may be nested, this means I have to maintain a whole stack of `Ray`s for each intersection trace that is done.
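
To make the pattern concrete, here's a hedged Java sketch of the recursive approach just described, reusing the `Ray`/`Vec3` sketch above (`Affine`, `SceneObject`, and the method names are illustrative assumptions, not my actual code): each nested `Transform` allocates a new object-space `Ray`, and the incoming ray stays alive on the call stack as the fallback.

```java
// Minimal 3x4 affine transform (illustrative only).
final class Affine {
    final double[] m; // row-major 3x4: rows are m[0..3], m[4..7], m[8..11]
    Affine(double[] m) { this.m = m; }

    Vec3 transformPoint(Vec3 p) {
        return new Vec3(
            m[0]*p.x + m[1]*p.y + m[2]*p.z  + m[3],
            m[4]*p.x + m[5]*p.y + m[6]*p.z  + m[7],
            m[8]*p.x + m[9]*p.y + m[10]*p.z + m[11]);
    }

    Vec3 transformDirection(Vec3 d) { // no translation for directions
        return new Vec3(
            m[0]*d.x + m[1]*d.y + m[2]*d.z,
            m[4]*d.x + m[5]*d.y + m[6]*d.z,
            m[8]*d.x + m[9]*d.y + m[10]*d.z);
    }
}

interface SceneObject {
    // Returns the distance t of the nearest hit, or POSITIVE_INFINITY on a miss.
    double intersect(Ray ray);
}

final class Transform implements SceneObject {
    final Affine worldToLocal; // inverse of the object's placement
    final SceneObject child;

    Transform(Affine worldToLocal, SceneObject child) {
        this.worldToLocal = worldToLocal;
        this.child = child;
    }

    @Override
    public double intersect(Ray worldRay) {
        // A fresh Ray is allocated at every Transform level; the incoming
        // worldRay survives as a local on the call stack, which is the
        // implicit "stack of Rays" described above. The direction is
        // deliberately not renormalised so that t values stay comparable
        // between co-ordinate spaces.
        Ray localRay = new Ray(
            worldToLocal.transformPoint(worldRay.origin),
            worldToLocal.transformDirection(worldRay.direction),
            worldRay.tMin,
            worldRay.tMax);
        // On a miss, the caller simply continues with its own worldRay.
        return child.intersect(localRay);
    }
}
```

With nesting, every `Transform` level on the path to a primitive costs one `Ray` allocation per traced ray, which is where the allocation pressure comes from.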
This is of course in the inner loop of the whole application and is the primary performance bottleneck: it will be called millions of times a second, so I'm keen to minimise complexity and avoid unnecessary memory allocation.
Is there a clever way to avoid having to allocate new `Ray`s / keep a `Ray` stack?
Or is there a cleverer way of doing this altogether?