I am working on an application similar to the one in MobileFusion.
In Section 6 they project each voxel q from camera space into image coordinates with:
(u, v) = (f_x * (q_x / q_z) + c_x, f_y * (q_y / q_z) + c_y)
I am using a Kinect v2 for the images, and I found the following intrinsic parameters for it:
float cx = 261.696594;
float cy = 202.522202;
float fx = 356.096588;
float fy = 368.096588;
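To make the numbers concrete, here is a minimal C++ sketch of the projection as I understand it (the struct and function names are just for illustration, this is not my actual shader code):

#include <cstdio>

struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

// Intrinsics from my calibration
const float fx = 356.096588f, fy = 368.096588f;
const float cx = 261.696594f, cy = 202.522202f;

// Pinhole projection: camera-space point -> pixel coordinates
Vec2 projectToPixels(const Vec3& q)
{
    return { fx * (q.x / q.z) + cx,
             fy * (q.y / q.z) + cy };
}

int main()
{
    // A voxel 1 m in front of the camera, slightly off-axis
    Vec3 q = { 0.1f, 0.05f, 1.0f };
    Vec2 p = projectToPixels(q);
    // Prints roughly (297.3, 220.9) -- pixel coordinates, nowhere near [-1, 1]
    std::printf("u = %f, v = %f\n", p.x, p.y);
    return 0;
}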
I want to realize this fusion step with OpenGL. My problem is that in OpenGL I get coordinates in the range [-1, 1] (normalized device coordinates), while using the parameters above yields much larger values in pixel coordinates, e.g. roughly (262, 203) for a point on the optical axis.
Is there something I am doing wrong here, or do I have to normalize these parameters?