This is a bit of a complicated problem, so I'll do my best to break it down into chunks.
I'm writing a 3D Python library for the sake of learning / fun (as opposed to one that I'd intend for others to use). In the system I've developed, three-dimensional points are generally flattened to the image as follows:
- Increasing the Z index by `width` moves the point halfway to the vanishing point in the center.
- At `Z = 0`, the X and Y values correspond directly to the pixel at X, Y.
(There might be a name for this method, but if there is, I'm not familiar with it.)
In Python:
# vx and vy are the vanishing point's coordinates
def flatten_point(width, vx, vy, x, y, z):
    distance = (x - vx, y - vy)
    flat_distance = [d / (1 + float(z) / width) for d in distance]
    return (vx + flat_distance[0], vy + flat_distance[1])
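As a quick sanity check (the numbers here are arbitrary, not from my actual scene), a point that is a full `width` deep should land halfway toward the vanishing point:

# With width = 100 and the vanishing point at (50, 50), the point
# (90, 50, 100) should end up halfway from (90, 50) to (50, 50).
print(flatten_point(100, 50, 50, 90, 50, 100))  # (70.0, 50.0)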
At this point, I'm able to draw triangles somewhat efficiently by flattening their vertices and using barycentric coordinates to find and fill in the pixels that fall between those three points. That works well enough as long as I don't need to know anything about the actual points on the triangle that those pixels correspond to, but if I want to shade the triangle so that deeper points are drawn darker, I need to know which unflattened point on the triangle each pixel corresponds to.
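For concreteness, the fill step looks roughly like this -- a simplified sketch rather than my actual code, with `barycentric_2d`, `fill_triangle`, and `set_pixel` as made-up illustrative names:

import math

def barycentric_2d(p, a, b, c):
    # Barycentric coordinates of 2D point p relative to the flattened
    # triangle (a, b, c); returns None for a degenerate (zero-area) triangle.
    denom = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    if denom == 0:
        return None
    l1 = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / denom
    l2 = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / denom
    return (l1, l2, 1.0 - l1 - l2)

def fill_triangle(a, b, c, set_pixel):
    # Walk the bounding box of the flattened triangle and fill every pixel
    # whose barycentric coordinates are all non-negative.
    min_x, max_x = math.floor(min(a[0], b[0], c[0])), math.ceil(max(a[0], b[0], c[0]))
    min_y, max_y = math.floor(min(a[1], b[1], c[1])), math.ceil(max(a[1], b[1], c[1]))
    for py in range(min_y, max_y + 1):
        for px in range(min_x, max_x + 1):
            coords = barycentric_2d((px + 0.5, py + 0.5), a, b, c)
            if coords and all(l >= 0 for l in coords):
                set_pixel(px, py)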
joriki on math.stackexchange recommended using the barycentric coordinates as weights to find the original point. This appeared to work for a while -- and it probably would work if I were using a linear depth system -- but it falls apart when the depths of the triangle's vertices differ by enough. The triangle appears to approach the greatest depth more quickly than it actually does, as if it were curved backwards.
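In other words, what I'm doing right now amounts to something like this (again just a sketch; `l1`, `l2`, `l3` are the barycentric coordinates computed on the flattened 2D triangle, and `p1`, `p2`, `p3` are the original 3D vertices):

def unflatten_naive(l1, l2, l3, p1, p2, p3):
    # Blend the original 3D vertices using the 2D barycentric weights.
    # The result lies on the 3D triangle, but when the vertex depths differ
    # it is not the point that actually projects to this pixel, because the
    # weights came from the flattened (non-linearly divided) coordinates.
    return tuple(l1 * a + l2 * b + l3 * c for a, b, c in zip(p1, p2, p3))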
So, in short: how can I reverse the point-flattening function to get the actual 3D point that an arbitrary 2D pixel on a flattened triangle corresponds to? Alternatively, a better or more efficient way to flatten triangles without losing the depth of each pixel would work too.