I am interested in obtaining a 2D representation of a point cloud, rendered from a known viewpoint from which a registered optical image of the scene was taken with a digital camera. I have found an example of this here: http://live.ece.utexas.edu/research/3dnss/live_color_plus_3d.html, which appears to have been achieved with OpenCV (reprojected with a pinhole camera model, presumably using the calib3d module, as far as I can tell), though I did not get a response from the authors as to how this was done.
I want the output range-map raster to have the same xy resolution and field of view as the registered optical image, so that I can associate the pixel coordinates of the optical image with the xyz coordinates of the point cloud (a raster storing the xyz coordinates of the point cloud would therefore be more useful than a range map storing distance to the sensor). Given that the point cloud was collected from a single laser scan position ~20 cm below the camera, we can assume that no points are occluded from the camera's view. The point density of the cloud is lower than the pixel spacing of the registered image, so some smoothing/interpolation of the range map will be needed.
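To illustrate the smoothing step I have in mind: after the points are splatted into the image grid, most pixels will be empty because the cloud is sparser than the image. Below is a minimal sketch (in Python/NumPy for concreteness; the function name `fill_holes` and the NaN-for-empty convention are my own assumptions) of nearest-neighbour hole filling by repeatedly propagating valid pixels into empty ones. In Matlab, `scatteredInterpolant` or `griddata` on the valid samples would give a smoother result.

```python
import numpy as np

def fill_holes(grid):
    """Fill NaN holes in a 2D range map by nearest-neighbour dilation.

    Each pass copies valid values into adjacent NaN pixels (up, down,
    left, right) until the grid is dense. Padding with NaN avoids
    wrapping values across image borders.
    """
    out = grid.copy()
    h, w = out.shape
    while np.isnan(out).any():
        n_holes = np.isnan(out).sum()
        padded = np.pad(out, 1, constant_values=np.nan)  # snapshot of this pass
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            shifted = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            fill = np.isnan(out) & ~np.isnan(shifted)
            out[fill] = shifted[fill]
        if np.isnan(out).sum() == n_holes:
            break  # no progress, e.g. an entirely empty grid
    return out
```

The same loop applies per-channel to an xyz raster; for an xyz raster all three channels share the same hole mask, so one pass over the mask suffices.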
The inputs are:
1) 4x4 camera pose matrix
2) mx3 matrix of x,y,z coordinates of the point cloud
3) undistorted image taken from the viewpoint described in (1)
I mainly code in Matlab, so I would prefer a solution achievable in that environment (OpenCV via MEX etc. should be fine). Any advice and/or suggestions would be greatly appreciated.
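For clarity, here is a sketch of the projection I am after, written in Python/NumPy (it maps one-to-one onto Matlab matrix operations, and mexopencv's cv.projectPoints covers the projection step too). The function name `project_to_raster` is mine; I assume the 3x3 intrinsic matrix K is available from the calibration used to undistort the image, and that the 4x4 pose in input (1) is camera-to-world, so it must be inverted first:

```python
import numpy as np

def project_to_raster(pts, T_cam, K, width, height):
    """Project an m-by-3 world point cloud into the camera image.

    Returns a (height, width, 3) raster holding the world xyz of the
    point that lands on each pixel; empty pixels hold NaN. Points are
    written far-to-near so the nearest point wins per pixel (a no-op
    here if, as assumed, nothing is occluded).
    """
    T_wc = np.linalg.inv(T_cam)                      # world -> camera
    pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])
    cam = (T_wc @ pts_h.T).T[:, :3]                  # points in camera frame
    in_front = cam[:, 2] > 0
    cam, world = cam[in_front], pts[in_front]
    # Pinhole model: u = fx*X/Z + cx, v = fy*Y/Z + cy.
    uvw = (K @ cam.T).T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, cam, world = u[ok], v[ok], cam[ok], world[ok]
    raster = np.full((height, width, 3), np.nan)
    order = np.argsort(-cam[:, 2])                   # far to near
    raster[v[order], u[order]] = world[order]
    return raster
```

Since the cloud is sparser than the image grid, the resulting raster then needs the hole filling / smoothing discussed above before every image pixel has an xyz value.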
Thomas