I have a picture of a checkerboard taken from an arbitrary camera angle. I find the two vanishing points corresponding to the two sets of lines that form the checkerboard grid. From these two vanishing points, I compute a homography from the checkerboard plane to the image plane.
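Roughly, the homography construction looks like the sketch below (placeholder values for the vanishing points and the board origin, since the real ones come from my line fitting; the column scales would normally be fixed from a known square size):

```matlab
% Rough sketch with placeholder values; the real vanishing points and the
% image position of the board origin come from my line-fitting step.
v1 = [850; 120; 1];      % vanishing point of the first family of grid lines
v2 = [-300; 95; 1];      % vanishing point of the second family of grid lines
o  = [410; 560; 1];      % image of the checkerboard origin (homogeneous)

% The column scales would normally be fixed from a known square size;
% taken as 1 here just for illustration.
alpha = 1;  beta = 1;

% Plane-to-image homography: maps board coordinates [X; Y; 1] to image pixels.
H = [alpha*v1, beta*v2, o];
```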
I then apply the inverse homography to re-render the checkerboard from a top view. However, for certain images the re-rendered top view becomes enormous: due to the camera angle, the inverse homography stretches the parts of the image that lie close to one of the vanishing points over a huge area.
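The warping step is roughly the following (sketched here with projective2d/imwarp and a placeholder file name; the transpose is because MATLAB's geometric transforms use the row-vector convention):

```matlab
% Sketch of the warping step (placeholder file name; H is the plane-to-image
% homography from above). projective2d expects the row-vector convention,
% hence the transpose.
I = imread('checkerboard.jpg');
tformPlaneToImage = projective2d(H');
tformImageToPlane = invert(tformPlaneToImage);

% With no output limits, imwarp sizes the output to cover the entire warped
% image, which is exactly what blows up near the vanishing points.
[topView, topRef] = imwarp(I, tformImageToPlane);
```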
This takes an unnecessarily large amount of memory, and most of what ends up highly stretched is content I do not need anyway. So, when applying the inverse homography, I would like to avoid rendering the regions of the image that will be highly stretched. What is a good way to do this?
(I am coding in MATLAB)
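(For reference, the local stretch can be made concrete per pixel: for a projective map, the area magnification at image pixel (x, y) is det(Hinv)/w^3, where w is the third homogeneous coordinate. A sketch of computing such a stretch map is below, with an arbitrary cutoff; I am not sure masking on this is the right approach, hence the question.)

```matlab
% Per-pixel area magnification of the image-to-plane map: |det(Hinv)| / |w|^3,
% where w = Hinv(3,1)*x + Hinv(3,2)*y + Hinv(3,3).
Hinv = inv(H);                                 % image-to-plane homography
[X, Y] = meshgrid(1:size(I,2), 1:size(I,1));   % pixel grid of the input image
W = Hinv(3,1)*X + Hinv(3,2)*Y + Hinv(3,3);     % third homogeneous coordinate
stretch = abs(det(Hinv)) ./ abs(W).^3;         % per-pixel area magnification

maxStretch = 25;                               % arbitrary cutoff
keepMask = stretch <= maxStretch;              % pixels that seem worth rendering
```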