I have a question about matrix transformations in the OpenVR API. I retrieve the device poses with:
m_compositor->WaitGetPoses(m_rTrackedDevicePose, vr::k_unMaxTrackedDeviceCount, nullptr, 0);
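Each valid pose's mDeviceToAbsoluteTracking member is then turned into the Matrix4 array m_rmat4DevicePose. As far as I can tell, the sample does this conversion roughly like this (transposing the row-major 3x4 HmdMatrix34_t into a column-major 4x4):

m_rmat4DevicePose[ nDevice ] = ConvertSteamVRMatrixToMatrix4( m_rTrackedDevicePose[ nDevice ].mDeviceToAbsoluteTracking );

Matrix4 CMainApplication::ConvertSteamVRMatrixToMatrix4( const vr::HmdMatrix34_t &matPose )
{
    Matrix4 matrixObj(
        matPose.m[0][0], matPose.m[1][0], matPose.m[2][0], 0.0,
        matPose.m[0][1], matPose.m[1][1], matPose.m[2][1], 0.0,
        matPose.m[0][2], matPose.m[1][2], matPose.m[2][2], 0.0,
        matPose.m[0][3], matPose.m[1][3], matPose.m[2][3], 1.0f
    );
    return matrixObj;
}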
In the demo that OpenVR provides, the render model of each tracked device is drawn like this:
const Matrix4 & matDeviceToTracking = m_rmat4DevicePose[ unTrackedDevice ];
Matrix4 matMVP = GetCurrentViewProjectionMatrix( nEye ) * matDeviceToTracking;
glUniformMatrix4fv( m_nRenderModelMatrixLocation, 1, GL_FALSE, matMVP.get() );
where GetCurrentViewProjectionMatrix is computed as:
Matrix4 CMainApplication::GetCurrentViewProjectionMatrix( vr::Hmd_Eye nEye )
{
    Matrix4 matMVP;
    if( nEye == vr::Eye_Left )
    {
        matMVP = m_mat4ProjectionLeft * m_mat4eyePosLeft * m_mat4HMDPose;
    }
    else if( nEye == vr::Eye_Right )
    {
        matMVP = m_mat4ProjectionRight * m_mat4eyePosRight * m_mat4HMDPose;
    }
    return matMVP;
}
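Note that m_mat4HMDPose is not the raw HMD pose; as far as I can tell from the sample's UpdateHMDMatrixPose, it is the inverse of the HMD's device-to-tracking matrix:

if ( m_rTrackedDevicePose[ vr::k_unTrackedDeviceIndex_Hmd ].bPoseIsValid )
{
    m_mat4HMDPose = m_rmat4DevicePose[ vr::k_unTrackedDeviceIndex_Hmd ];
    m_mat4HMDPose.invert(); // tracking space -> HMD space
}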
My questions are:

1. Which space does matDeviceToTracking transform from, and into which space?

2. I already have a model-view matrix, and the view already rotates with the HMD. How can I render the render model correctly on top of that? I tried projection * modelview * m_rmat4DevicePose[ unTrackedDevice ], but it has no effect.
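Concretely, my attempt looks like this (projection and modelview are my own matrices, not the demo's):

Matrix4 matMVP = projection * modelview * m_rmat4DevicePose[ unTrackedDevice ];
glUniformMatrix4fv( m_nRenderModelMatrixLocation, 1, GL_FALSE, matMVP.get() );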