
I have a problem with matrix transformations in the OpenVR API.

m_compositor->WaitGetPoses(m_rTrackedDevicePose, vr::k_unMaxTrackedDeviceCount, nullptr, 0);

In the demo that OpenVR provides:

const Matrix4 & matDeviceToTracking = m_rmat4DevicePose[ unTrackedDevice ];
Matrix4 matMVP = GetCurrentViewProjectionMatrix( nEye ) * matDeviceToTracking;
glUniformMatrix4fv( m_nRenderModelMatrixLocation, 1, GL_FALSE, matMVP.get() );

where GetCurrentViewProjectionMatrix is computed as:

Matrix4 CMainApplication::GetCurrentViewProjectionMatrix( vr::Hmd_Eye nEye )
{
    Matrix4 matMVP;
    if( nEye == vr::Eye_Left )
    {
        matMVP = m_mat4ProjectionLeft * m_mat4eyePosLeft * m_mat4HMDPose;
    }
    else if( nEye == vr::Eye_Right )
    {
        matMVP = m_mat4ProjectionRight * m_mat4eyePosRight * m_mat4HMDPose;
    }

    return matMVP;
}

My questions are:

1. Which space does matDeviceToTracking transform from, and into which space?

2. If I already have a model-view matrix and can already rotate with the HMD, how do I render the render model correctly? I tried projection * modelview * m_rmat4DevicePose[ unTrackedDevice ], but it has no effect.

1 Answer


1.

In the sample code, matDeviceToTracking is a reference to m_rmat4DevicePose[unTrackedDevice], which is copied from TrackedDevicePose_t::mDeviceToAbsoluteTracking. It is a model matrix, mapping from the device's model space to the tracking (world) space.
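
For reference, the sample builds that 4x4 model matrix from the 3x4 pose the runtime returns with a conversion helper along these lines (ConvertSteamVRMatrixToMatrix4 in hellovr_opengl, using the sample's column-major Matrix4 class):

Matrix4 ConvertSteamVRMatrixToMatrix4( const vr::HmdMatrix34_t &matPose )
{
    // The runtime hands back a row-major 3x4 pose; the sample's Matrix4 is
    // column-major, so the entries are transposed and a fourth row of
    // (0, 0, 0, 1) is appended.
    Matrix4 matrixObj(
        matPose.m[0][0], matPose.m[1][0], matPose.m[2][0], 0.0f,
        matPose.m[0][1], matPose.m[1][1], matPose.m[2][1], 0.0f,
        matPose.m[0][2], matPose.m[1][2], matPose.m[2][2], 0.0f,
        matPose.m[0][3], matPose.m[1][3], matPose.m[2][3], 1.0f );
    return matrixObj;
}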

There is one pitfall, though. If you included the UpdateHMDMatrixPose() function from the sample, it inverts m_rmat4DevicePose[vr::k_unTrackedDeviceIndex_Hmd] while updating the value of m_mat4HMDPose, leaving m_rmat4DevicePose[0] mapping from the world space to the HMD view space, exactly the opposite direction from the other matrices in the array.
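
The relevant part of UpdateHMDMatrixPose() looks roughly like this; note that Matrix4::invert() in the sample works in place, which is why the array slot itself ends up inverted:

if ( m_rTrackedDevicePose[ vr::k_unTrackedDeviceIndex_Hmd ].bPoseIsValid )
{
    // invert() modifies the matrix in place, so after this line
    // m_rmat4DevicePose[0] maps world space to HMD view space, while
    // every other slot still maps model space to world space.
    m_mat4HMDPose = m_rmat4DevicePose[ vr::k_unTrackedDeviceIndex_Hmd ].invert();
}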

2.

If you already have the model-view matrix, you only need to multiply the projection matrix by it to obtain the MVP matrix. For rendering into the HMD, use m_mat4ProjectionLeft * m_mat4eyePosLeft * modelview for the left eye and m_mat4ProjectionRight * m_mat4eyePosRight * modelview for the right eye. For rendering on a monitor, you can build your own frustum and multiply it by your model-view matrix. This page is a good reference on constructing a projection matrix: http://www.songho.ca/opengl/gl_projectionmatrix.html
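
Putting both points together for a tracked render model: a minimal sketch, assuming modelview is your existing world-to-view matrix that maps the same tracking/world space the device poses live in, and that m_rmat4DevicePose[ unTrackedDevice ] still holds the unmodified model-to-world pose:

// Left eye shown; the right eye is analogous with the Right matrices.
// Read right to left: render-model space -> world -> view -> eye -> clip.
Matrix4 matMVP = m_mat4ProjectionLeft * m_mat4eyePosLeft
               * modelview
               * m_rmat4DevicePose[ unTrackedDevice ];
glUniformMatrix4fv( m_nRenderModelMatrixLocation, 1, GL_FALSE, matMVP.get() );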

sien
  • Hi, thanks for your answer. Before the OpenVR integration I already had a camera matrix from world to view, so the integration tasks are, first, to support HMD rotation in the view matrix and, second, to support rendering the two controllers in the game. – user2240897 Aug 15 '16 at 02:12
  • @user2240897 In that case, you typically use the position component of your camera matrix as the origin of your HMD space (see the sketch below). For rendering the controllers, simply refer to the sample code. – sien Aug 15 '16 at 17:28
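
A minimal sketch of what the last comment suggests, assuming a hypothetical camera position camPos in game-world coordinates and the sample's Matrix4 helpers (every name other than m_mat4HMDPose is an assumption):

// Anchor the tracking-space origin at the game camera's position:
// first undo the camera translation, then apply the HMD pose
// (m_mat4HMDPose already maps tracking space to view space).
Matrix4 matWorldToTracking;                                   // identity
matWorldToTracking.translate( -camPos.x, -camPos.y, -camPos.z );
Matrix4 matView = m_mat4HMDPose * matWorldToTracking;         // game world -> view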