
I am working on an application similar to the one in MobileFusion.

In Section 6 they project each voxel from camera space into the camera view by:

vec2 = (f_x*(q_x/q_z) + c_x, f_y*(q_y/q_z) + c_y)
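
To make sure I understand the formula, here is a minimal C++ sketch of this projection, assuming q is already given in camera space (the struct and function names are just illustrative):

    // Minimal sketch of the pinhole projection from the formula above.
    // Assumes q is already in camera space; names are illustrative only.
    #include <cstdio>

    struct Vec3 { float x, y, z; };
    struct Vec2 { float x, y; };

    // Project a camera-space point q to image (pixel) coordinates
    // using focal lengths fx, fy and principal point cx, cy.
    Vec2 projectToPixels(const Vec3& q, float fx, float fy, float cx, float cy)
    {
        return { fx * (q.x / q.z) + cx,
                 fy * (q.y / q.z) + cy };
    }

    int main()
    {
        // Kinect v2 intrinsics from below.
        const float fx = 356.096588f, fy = 368.096588f;
        const float cx = 261.696594f, cy = 202.522202f;

        Vec3 q = { 0.1f, -0.2f, 1.5f };  // example voxel position in camera space (meters)
        Vec2 p = projectToPixels(q, fx, fy, cx, cy);
        std::printf("pixel coordinates: (%f, %f)\n", p.x, p.y);
        return 0;
    }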

I am using a Kinect v2 for the images, and I found these intrinsic parameters for it:

    float cx = 261.696594;
    float cy = 202.522202;
    float fx = 356.096588;
    float fy = 368.096588;

I want to implement this fusion step with OpenGL. My problem is that in OpenGL I get coordinates in the range [-1, 1], while using these parameters to compute image coordinates yields much larger values.

Am I doing something wrong here, or do I have to normalize these parameters?
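
For illustration, if normalizing is indeed the way to go, mapping the projected pixel coordinates into OpenGL's [-1, 1] range might look like the sketch below (the 512x424 depth resolution and the y orientation are assumptions on my side):

    // Sketch of normalizing projected pixel coordinates into OpenGL's
    // [-1, 1] NDC range, assuming a 512x424 Kinect v2 depth image.
    // Whether y has to be flipped depends on the image origin convention.
    #include <cstdio>

    struct Vec2 { float x, y; };

    Vec2 pixelsToNdc(const Vec2& pixel, float width, float height)
    {
        return { 2.0f * pixel.x / width  - 1.0f,
                 2.0f * pixel.y / height - 1.0f };
    }

    int main()
    {
        const float width = 512.0f, height = 424.0f;  // assumed depth image resolution
        Vec2 pixel = { 285.4f, 153.4f };              // example projected pixel coordinate
        Vec2 ndc = pixelsToNdc(pixel, width, height);
        std::printf("NDC: (%f, %f)\n", ndc.x, ndc.y); // inside [-1, 1] if the pixel is in the image
        return 0;
    }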

Dominick
  • What is the difference between "camera space" and "camera view"? What do all those variables represent? What do those numbers represent? – Ripi2 Mar 23 '17 at 20:38
  • Coordinates in camera space are relative to the camera position, i.e. with the camera placed at (0,0,0), so we have 3D points in the scene. The camera view is their position projected onto the image plane (x,y). So I need a projection which maps the coordinates from (x,y,z) to (x',y',-1). The variables f_x and f_y are the focal length parameters of the camera, and c_x and c_y are the principal point. – Dominick Mar 24 '17 at 06:38

0 Answers