How can I calculate eye space intersection coordinates in an OptiX program?
My research showed that only object and world coordinates are provided, but I cannot believe that there is no way to get the eye space coordinates.
It is possible to rotate the intersection point into the camera's frame using the camera orientation, like this:
__device__ void worldToEye(float3& pointInOut)
{
    // U, V, W are the camera's right, up and view-direction vectors
    // (as set up by the ray generation program). Normalize them so the
    // rotation does not also scale the point.
    const float3 Un = normalize(U);
    const float3 Vn = normalize(V);
    const float3 Wn = normalize(W);

    // The rows are the camera basis vectors, so multiplying by this
    // matrix expresses the point in the camera's coordinate frame.
    const float viewMat[3][3] = {{Un.x, Un.y, Un.z},
                                 {Vn.x, Vn.y, Vn.z},
                                 {Wn.x, Wn.y, Wn.z}};

    float point[3]  = {pointInOut.x, pointInOut.y, pointInOut.z};
    float result[3] = {0.0f, 0.0f, 0.0f};

    for (int i = 0; i < 3; ++i)
    {
        for (int j = 0; j < 3; ++j)
        {
            result[i] += viewMat[i][j] * point[j];
        }
    }

    // result[1] is the dot product with V (up) and result[2] the one
    // with W (view direction), so they map to y and z respectively;
    // assigning them the other way around would swap the axes.
    pointInOut.x = result[0];
    pointInOut.y = result[1];
    pointInOut.z = result[2];
}
The input point is the vector from the ray origin to the hit point. For primary rays the origin is the eye, so after the rotation the result is the eye-space position:

    // eye-to-hit vector; valid as eye-space input only when
    // ray.origin is the camera position (i.e. for primary rays)
    float3 hit_point = t_hit * ray.direction;
    worldToEye(hit_point);
    prd.result = hit_point;
OptiX has no eye coordinates because it is based on ray tracing, not rasterization. First ask yourself what eye coordinates are actually used for in rasterization-based shaders: essentially depth testing, clipping, and the like. None of those exist in a ray tracer. Once a ray is cast from a point in world coordinates with a given direction, all subsequent computation happens in world (or object) coordinates. There is no clipping, because each ray already corresponds to a specific pixel, and there is no depth test, because intersections are found in the intersection program and only the nearest hit is delivered to the closest-hit program. In short, you have to give up some of the mechanisms and pipeline stages of rasterization-based shading and pick up techniques suited to ray tracing instead.