Unity's documentation explains:

> Before Unity can submit the first rendering command that depends on the view transformation matrix, it must first get the view matrix from the VR SDK. To keep latency as low as possible, the VR SDK predicts the head transform twice per frame:
> - One prediction to render the current frame. This prediction corresponds with where your head actually is, in real space, when the frame arrives on the screen.
> - One prediction to simulate the following frame.
>
> Unity applies the rendering prediction for the current frame to cameras, controllers, and anything that needs information for Scene rendering. Unity uses the simulation prediction for the following frame if it is unable to render the following frame.
In my case, I use Unity 2022.2.11 with the OpenXR plugin 1.6.0, so I assume the VR SDK will be OpenXR. I target Meta Quest 2, but I explicitly don't want to use the Oculus XR Plugin.
With OVR I could easily find out how far into the future the tracking pose was predicted via `HeadPose.PredictionInSeconds`. Despite reading a lot of the OpenXR specification and its Unity documentation, I could not find a clear answer to this.
I would like to know:
- When exactly is the prediction applied during the Unity lifecycle? Are the tracking values different if I gather them in `Update()` or after a `WaitForEndOfFrame()`?
- How do I know for which time in the future this pose was predicted? Is it the remaining time until the frame is about to be displayed? How can I query that predicted remaining time?
- How can I access a pose prediction at a custom timestamp in the future myself?
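For the last point: from my reading of the OpenXR specification, I assume the native flow would look roughly like the sketch below (plain C against the OpenXR loader, not Unity code; the `session`, `viewSpace`, and `stageSpace` handles are assumed to already exist). What I can't figure out is how to reach this through Unity's OpenXR plugin.

```c
// Sketch of my understanding of the native OpenXR API, based on the spec.
// Handles (session, viewSpace, stageSpace) are assumed to be created already.
#include <openxr/openxr.h>

void locate_head_at_custom_time(XrSession session, XrSpace viewSpace, XrSpace stageSpace)
{
    // xrWaitFrame reports the time at which the runtime predicts
    // the frame currently being started will be displayed.
    XrFrameWaitInfo waitInfo = { XR_TYPE_FRAME_WAIT_INFO };
    XrFrameState frameState = { XR_TYPE_FRAME_STATE };
    xrWaitFrame(session, &waitInfo, &frameState);

    // The prediction horizon would then be predictedDisplayTime minus "now"
    // (converting the system clock to XrTime, e.g. via XR_KHR_convert_timespec_time).

    // For a custom prediction target, offset the display time (XrTime is in ns),
    // e.g. one frame further ahead:
    XrTime customTime = frameState.predictedDisplayTime
                      + frameState.predictedDisplayPeriod;

    // xrLocateSpace accepts an arbitrary XrTime, so the runtime extrapolates
    // the head pose to that instant.
    XrSpaceLocation location = { XR_TYPE_SPACE_LOCATION };
    xrLocateSpace(viewSpace, stageSpace, customTime, &location);

    if (location.locationFlags & XR_SPACE_LOCATION_POSITION_VALID_BIT) {
        // location.pose is the head pose predicted for customTime.
    }
}
```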
[TL;DR] I would like to know for which timestamp in the future the tracking pose was predicted, and how I can get my own prediction for a custom timestamp in the future.