
I'm implementing a new algorithm for 3D point estimation from images, and right now I'm testing it on 3D virtual models before I move on to real objects.

The algorithm's inputs are pixels before the final transformation to the viewport size, so to test the algorithm on rendered images, I need to know the reverse transformation from a pixel in the range ([0, width], [0, height]).

I'm using perspective projection from the pyrender library to render 2D images of a 3D mesh; as far as I know, this library uses OpenGL methods for rendering.

Example: I have a mesh of a box with sizes = (3, 1, 5) and center = (0, 0, 0), and I have the projection and view matrices:

View Matrix=       [[ 0.96592583, -0.0669873 ,  0.25     ,  3.        ],
                   [ 0.        ,  0.96592583,  0.25881905,  4.        ],
                   [-0.25881905, -0.25      ,  0.9330127 , 10.        ],
                   [ 0.        ,  0.        ,  0.        ,  1.        ]]

Projection Matrix= [[ 2.4142135,  0.        ,  0.        ,  0.        ],
                   [ 0.        ,  2.41421356,  0.        ,  0.        ],
                   [ 0.        ,  0.        , -1.0010005 , -0.10005003],
                   [ 0.        ,  0.        , -1.        ,  0.        ]]
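For reference, this projection matrix appears to correspond to the standard OpenGL perspective matrix with yfov = π/4, znear = 0.05, zfar = 100 and a square aspect ratio (my reading of the numbers, not values reported by pyrender; a quick NumPy check):

```python
import numpy as np

# Standard OpenGL perspective projection, with assumed (reverse-engineered)
# parameters: yfov = pi/4, aspect = 1, znear = 0.05, zfar = 100.
yfov, aspect, znear, zfar = np.pi / 4, 1.0, 0.05, 100.0
f = 1.0 / np.tan(yfov / 2)  # = 2.41421356...

proj = np.array([
    [f / aspect, 0.0,  0.0,                             0.0                              ],
    [0.0,        f,    0.0,                             0.0                              ],
    [0.0,        0.0,  (zfar + znear) / (znear - zfar), 2 * zfar * znear / (znear - zfar)],
    [0.0,        0.0, -1.0,                             0.0                              ],
])

print(np.round(proj, 8))  # matches the projection matrix above
```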

This is my calculation to map a 3D point/vertex to a pixel:

def map_to_pixel(point3d, w, h, projection, view):
    p = projection @ view @ point3d  # OpenGL perspective projection
    p = p / p[3]                     # perspective divide by the w component
    p[0] = w/2 * p[0] + w/2          # transformation from [-1, 1] -> [0, width]
    p[1] = h/2 * p[1] + h/2          # transformation from [-1, 1] -> [0, height]
    return p

Testing it on the top-left-close vertex of the box, [-1.5, 0.5, 2.5, 1.], with viewport_sizes = (width, height) = (512, 512), gives:

results = [150.86775635, 4.28475523, 1.00894365, 1.] → pixel (151, 4)

whereas the actual result from pyrender for this vertex is the pixel ~ (90, 342).

If anyone knows the actual process behind the scenes of pyrender/OpenGL, or knows how to map the pixels correctly, that would be super helpful.

By the way: I know my function uses a bottom-left origin while the library uses a top-left origin, but it still gives unexpected output.

  • The values given in this question do not make sense: the object would lie _behind_ the camera, and hence would be clipped by OpenGL, never reaching the viewport. You certainly cannot have rendered the object with those coordinates and (only) those transformations. So either you're missing something (like a model matrix?), or the values you supplied are just incorrect. – derhass May 13 '21 at 16:58
  • Those values are just an example; the calculation for the view matrix is correct and based on 6 parameters (3 for translation and 3 angles for rotation), and the projection matrix is given by the pyrender library, which uses OpenGL functions. – Michael Ben Amos May 14 '21 at 12:45

1 Answer


I figured out how the calculation goes. I don't know why, but the pyrender library (which uses OpenGL methods) uses the inverse of the view matrix I set as input.

The exact function to map a vertex to a pixel is:

    from numpy.linalg import inv

    def map_to_pixel(point3d, w, h, projection, view):
        p = projection @ inv(view) @ point3d.T
        p = p / p[3]                   # perspective divide by the w component
        p[0] = w/2 * p[0] + w/2        # transformation from [-1, 1] -> [0, width]
        p[1] = h - (h/2 * p[1] + h/2)  # transformation from [-1, 1] -> [0, height] (top-left origin)
        return p

I have tested this function against the rendered images, and all the vertices are mapped perfectly onto their pixels.
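As a sanity check, here is a self-contained script (assuming NumPy, using the matrices and the vertex from the question) that runs this corrected mapping:

```python
import numpy as np
from numpy.linalg import inv

# View (camera pose) and projection matrices from the question.
view = np.array([[ 0.96592583, -0.0669873 ,  0.25      ,  3. ],
                 [ 0.        ,  0.96592583,  0.25881905,  4. ],
                 [-0.25881905, -0.25      ,  0.9330127 , 10. ],
                 [ 0.        ,  0.        ,  0.        ,  1. ]])

projection = np.array([[ 2.41421356,  0.        ,  0.        ,  0.        ],
                       [ 0.        ,  2.41421356,  0.        ,  0.        ],
                       [ 0.        ,  0.        , -1.0010005 , -0.10005003],
                       [ 0.        ,  0.        , -1.        ,  0.        ]])

def map_to_pixel(point3d, w, h, projection, view):
    # pyrender stores the camera *pose*; invert it to get the OpenGL view matrix.
    p = projection @ inv(view) @ point3d
    p = p / p[3]                      # perspective divide -> NDC in [-1, 1]
    px = w/2 * p[0] + w/2             # NDC x -> [0, width]
    py = h - (h/2 * p[1] + h/2)       # NDC y -> [0, height], flipped to top-left origin
    return px, py

vertex = np.array([-1.5, 0.5, 2.5, 1.0])
px, py = map_to_pixel(vertex, 512, 512, projection, view)
print(round(px), round(py))  # lands within a few pixels of the ~(90, 342) seen in pyrender
```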

  • The matrix defined by the eye position and the line of sight transforms from view space to world space. This is not the "view" matrix: the view matrix has to transform from world space to view space. Therefore the "view" matrix is the inverse of the matrix defined by the eye position and the line of sight. This also explains the comment by derhass, since the matrix in your question is the inverse view matrix. – Rabbid76 May 14 '21 at 13:01
  • That explains the issue well, thank you both for your kind help. – Michael Ben Amos May 14 '21 at 14:56
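A sketch of the point Rabbid76 is making (assuming the matrix in the question is a rigid camera pose [R | t] with R a rotation): the inverse of such a pose has the closed form [Rᵀ | −Rᵀt], which is exactly the world-to-camera view matrix, and agrees with `numpy.linalg.inv`:

```python
import numpy as np

# Camera pose from the question: rotation R plus translation t.
pose = np.array([[ 0.96592583, -0.0669873 ,  0.25      ,  3. ],
                 [ 0.        ,  0.96592583,  0.25881905,  4. ],
                 [-0.25881905, -0.25      ,  0.9330127 , 10. ],
                 [ 0.        ,  0.        ,  0.        ,  1. ]])

R = pose[:3, :3]
t = pose[:3, 3]

# Closed-form inverse of a rigid transform: world -> camera (the actual view matrix).
view = np.eye(4)
view[:3, :3] = R.T
view[:3, 3] = -R.T @ t

print(np.allclose(view, np.linalg.inv(pose)))  # True
```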