
I am trying to render views of a 3D mesh in VTK. I am doing the following:

vtkSmartPointer<vtkRenderWindow> render_win = vtkSmartPointer<vtkRenderWindow>::New();
vtkSmartPointer<vtkRenderer> renderer = vtkSmartPointer<vtkRenderer>::New();

render_win->AddRenderer(renderer);   
render_win->SetSize(640, 480);

vtkSmartPointer<vtkCamera> cam = vtkSmartPointer<vtkCamera>::New();

cam->SetPosition(50, 50, 50);
cam->SetFocalPoint(0, 0, 0);
cam->SetViewUp(0, 1, 0);
cam->Modified();

vtkSmartPointer<vtkActor> actor_view = vtkSmartPointer<vtkActor>::New();

actor_view->SetMapper(mapper);
renderer->SetActiveCamera(cam);
renderer->AddActor(actor_view);

render_win->Render();

I am trying to simulate a rendering from a calibrated Kinect, for which I know the intrinsic parameters. How can I set the intrinsic parameters (focal length and principal point) on the vtkCamera?

I wish to do this so that the 2D pixel to 3D camera-coordinate mapping would be the same as if the image had been taken by a Kinect.

genpfault
Aly

3 Answers


Hopefully this will help others trying to convert standard pinhole camera parameters to a vtkCamera: I created a gist showing how to do the full conversion. I verified that the world points project to the correct location in the rendered image. The key code from the gist is pasted below.

gist: https://gist.github.com/decrispell/fc4b69f6bedf07a3425b

  // apply the transform to scene objects
  camera->SetModelTransformMatrix( camera_RT );

  // the camera can stay at the origin because we are transforming the scene objects
  camera->SetPosition(0, 0, 0);
  // look in the +Z direction of the camera coordinate system
  camera->SetFocalPoint(0, 0, 1);
  // the camera Y axis points down
  camera->SetViewUp(0,-1,0);

  // ensure the relevant range of depths are rendered
  camera->SetClippingRange(depth_min, depth_max);

  // convert the principal point to window center (normalized coordinate system) and set it
  double wcx = -2*(principal_pt.x() - double(nx)/2) / nx;
  double wcy =  2*(principal_pt.y() - double(ny)/2) / ny;
  camera->SetWindowCenter(wcx, wcy);

  // convert the focal length to view angle and set it
  double view_angle = vnl_math::deg_per_rad * (2.0 * std::atan2( ny/2.0, focal_len ));
  std::cout << "view_angle = " << view_angle << std::endl;
  camera->SetViewAngle( view_angle );
decrispell
  • Thanks for this answer! I've spent a week trying to make the VTK camera view the same scene I would expect to see with a pinhole camera, and I almost got it right, but there was always some difference. Leaving the camera still and moving the scene worked well! – martinako Dec 14 '16 at 14:01
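As a sanity check, the window-center and view-angle conversion above can be evaluated in plain Python with no VTK dependency. The intrinsics below (focal length 525 px for a 640x480 image, principal point near the image center) are illustrative Kinect-like values, not taken from the answer:

```python
import math

# Illustrative pinhole intrinsics for a 640x480 Kinect-like sensor
nx, ny = 640, 480          # image width and height in pixels
focal_len = 525.0          # focal length in pixels
cx, cy = 319.5, 239.5      # principal point in pixels

# Principal point -> normalized window center (same formulas as the C++ above)
wcx = -2.0 * (cx - nx / 2.0) / nx
wcy =  2.0 * (cy - ny / 2.0) / ny

# Focal length -> vertical view angle in degrees
view_angle = math.degrees(2.0 * math.atan2(ny / 2.0, focal_len))

print(wcx, wcy, view_angle)  # view_angle comes out to roughly 49 degrees here
```

Passing the resulting view_angle to SetViewAngle and (wcx, wcy) to SetWindowCenter should then reproduce the pinhole geometry described above.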

I too am using VTK to simulate the view from a Kinect sensor. I am using VTK 6.1.0. I know this question is old, but hopefully my answer may help someone else.

The question is how we can set a projection matrix to map world coordinates to clip coordinates. For more info on that, see this OpenGL explanation.

I use a perspective projection matrix to simulate the Kinect sensor. To control the intrinsic parameters, you can use the following member functions of vtkCamera.

double fov = 60.0, np = 0.5, fp = 10; // the values I use
cam->SetViewAngle( fov );             // vertical field of view angle
cam->SetClippingRange( np, fp );      // near and far clipping planes
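For intuition about what these two calls configure, here is a sketch of the standard OpenGL-style perspective projection matrix corresponding to a vertical field of view, aspect ratio, and near/far clipping planes (plain NumPy; this is the textbook gluPerspective form, not code extracted from VTK):

```python
import numpy as np

def perspective(fov_deg, aspect, near, far):
    """OpenGL-style perspective projection matrix (column-vector
    convention); maps camera coordinates to clip coordinates."""
    f = 1.0 / np.tan(np.radians(fov_deg) / 2.0)  # cotangent of half the vertical FOV
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2.0 * far * near / (near - far)
    m[3, 2] = -1.0
    return m

# The values from this answer: 60 deg vertical FOV, 640x480 aspect, 0.5..10 clip range
P = perspective(60.0, 640.0 / 480.0, 0.5, 10.0)

# A point at the center of the near plane should land at NDC z = -1
# after the perspective divide
p = P @ np.array([0.0, 0.0, -0.5, 1.0])
print(p[2] / p[3])  # approximately -1.0
```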

To give you a sense of what that may look like: I have an old project, done entirely in C++ and OpenGL, in which I set the perspective projection matrix as described above, grabbed the z-buffer, and then reprojected the points into a scene viewed from a different camera. (The visualized point cloud looks noisy because I also simulated noise.)

If you need your own custom projection matrix that isn't the perspective flavor, I believe the call is:

cam->SetUserTransform( transform );  // transform is a pointer to type vtkHomogeneousTransform

However, I have not used the SetUserTransform method.


This thread was super useful to me for setting camera intrinsics in VTK, especially decrispell's answer. To be complete, however, one case is missing: the focal lengths in the x and y directions being unequal. This can easily be handled with the SetUserTransform method. Below is sample code in Python:

cam = self.renderer.GetActiveCamera()
m = np.eye(4)
m[0, 0] = 1.0 * fx / fy
t = vtk.vtkTransform()
t.SetMatrix(m.flatten())
cam.SetUserTransform(t)

where fx and fy are the x and y focal lengths in pixels, i.e. the first two diagonal elements of the intrinsic camera matrix, and np is an alias for the numpy import.

Here is a gist showing the full solution in python (without extrinsics for simplicity). It places a sphere at a given 3D position, renders the scene into an image after setting the camera intrinsics, and then displays a red circle at the projection of the sphere center on the image plane: https://gist.github.com/benoitrosa/ffdb96eae376503dba5ee56f28fa0943
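As a VTK-free cross-check of the same idea, the projection of a 3D point through an intrinsic matrix with unequal fx and fy can be computed directly with NumPy (the intrinsic values and the test point below are illustrative):

```python
import numpy as np

# Illustrative intrinsics with unequal focal lengths (the case this answer handles)
fx, fy = 540.0, 525.0
cx, cy = 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# A point in camera coordinates (z is the depth in front of the camera)
X = np.array([0.1, -0.05, 2.0])

# Pinhole projection: apply K, then divide by depth
uvw = K @ X
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
print(u, v)  # approximately (347.0, 226.875)
```

A correctly configured vtkCamera (view angle, window center, and the fx/fy user transform above) should render this point at the same pixel location.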

Ben