Forum Discussion

🚨 This forum is archived and read-only. To submit a forum post, please visit our new Developer Forum. 🚨
Vrally
Protege
11 years ago

GetTrackingState().HeadPose.ThePose or GetEyePose()?

I find the documentation a bit unclear about when to use which.

Could someone explain when I am supposed to use the pose acquired from:

ovrTrackingState ts = ovrHmd_GetTrackingState(hmd, frameTiming.ScanoutMidpointSeconds);
if (ts.StatusFlags & (ovrStatus_OrientationTracked | ovrStatus_PositionTracked)) {
    ovrPoseStatef headpose = ts.HeadPose;
    ovrPosef pose = headpose.ThePose;
}


And in which cases is it better to use the pose from:


ovrPosef headPose[2];
for (int eyeIndex = 0; eyeIndex < ovrEye_Count; ++eyeIndex) {
    ovrEyeType eye = hmd->EyeRenderOrder[eyeIndex];
    headPose[eye] = ovrHmd_GetEyePose(hmd, eye);
}


What is the difference between the two poses acquired?

2 Replies

  • GetEyePose should be used in the per-eye rendering loop. It automatically accounts for latency and finds the best prediction delta to use when fetching the pose.

    GetTrackingState() should be used when:

    • You're not in the rendering loop and need the pose for something else

    • You need one of the additional pieces of information that ovrTrackingState provides over ovrPosef


    An example of the first item: during your update function, you might want to calculate what the user is looking at. Perhaps they're looking at a menu, and staring at an item long enough triggers it. This kind of calculation doesn't belong in the rendering loop, but it does depend on the pose, so that you can determine what the user is staring at.

    An example of the second item would be adding some sort of gestural support, such as detecting head nods or shakes. In this case you need more than the pose: you want the acceleration and rotation rate of the head, which aren't available in an ovrPosef but are part of the ovrPoseStatef returned in ovrTrackingState. You would probably do this mostly outside of the rendering loop too, or even in another thread, but there might be situations where you want to change how you render things based on how fast the user's head is moving. In these cases, you'd need to call GetTrackingState().

    However you use GetTrackingState(), you should still use GetEyePose() to determine the pose for rendering a given eye, since that's what it's there for.
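
    To make the gesture idea concrete, here's a minimal, self-contained sketch of nod detection. It assumes you feed it the head's pitch angular velocity each frame (in a real app, something like ts.HeadPose.AngularVelocity.x from GetTrackingState()); the struct name, threshold, and reversal count are illustrative choices, not part of the SDK.

    ```cpp
    // Hypothetical nod detector: counts direction reversals in strong
    // pitch-rate samples (rad/s). A nod shows up as the pitch rate
    // swinging past a threshold in one direction, then the other.
    struct NodDetector {
        float threshold;      // min |pitch rate| to count as deliberate motion
        int   lastSign = 0;   // sign of the last strong pitch-rate sample
        int   reversals = 0;  // direction changes seen so far

        explicit NodDetector(float thresh) : threshold(thresh) {}

        // Feed one pitch angular-velocity sample; returns true once two
        // or more up/down reversals have been observed (a nod).
        bool feed(float pitchRate) {
            if (pitchRate > threshold || pitchRate < -threshold) {
                int sign = pitchRate > 0 ? 1 : -1;
                if (lastSign != 0 && sign != lastSign)
                    ++reversals;
                lastSign = sign;
            }
            return reversals >= 2;
        }
    };
    ```

    You would feed this from your update loop (or a dedicated tracking thread), once per tracking sample, and reset it after a gesture fires.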
  • Thank you for the explanation.

    But the SDK docs mention on pages 39-40 that ovrHmd_GetTrackingState(hmd, frameTiming.ScanoutMidpointSeconds) should be used to get accurate poses when rendering with a multi-threaded renderer, in combination with ovrHmd_BeginFrame(hmd, frameIndex) and ovrHmd_EndFrame(hmd, pose, eyeTexture) in the render loop, while calling ovrHmd_GetFrameTiming(hmd, frameIndex) and ovrHmd_GetTrackingState(...) in the main loop and piping the poses found there over to the renderer.

    But since I am doing client distortion rendering, I guess I have to use ovrHmd_BeginFrameTiming(hmd, frameIndex) and ovrHmd_EndFrameTiming(hmd) instead. What I find a bit strange about this setup is that I now have only one pose (instead of one per eye) to pass as input to the ovrHmd_GetEyeTimewarpMatrices(...) function.
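
    The "pipe poses from the main loop to the renderer" pattern the docs describe can be sketched without any SDK calls. This is a single-slot mailbox: the main loop publishes the freshest predicted pose, and the render thread grabs whatever is newest when it builds a frame. The Pose struct here is a stand-in for ovrPosef, and all names are illustrative, not from the SDK.

    ```cpp
    #include <mutex>

    // Stand-in for ovrPosef; in a real app this would be the pose sampled
    // via ovrHmd_GetTrackingState() in the main loop.
    struct Pose { float qx, qy, qz, qw; float px, py, pz; };

    // Single-slot, mutex-protected mailbox between the main loop (producer)
    // and the render thread (consumer). Older poses are simply overwritten,
    // since the renderer only ever wants the newest one.
    class PoseMailbox {
        std::mutex mtx;
        Pose latest{};
        bool hasPose = false;
    public:
        void publish(const Pose& p) {        // called from the main loop
            std::lock_guard<std::mutex> lock(mtx);
            latest = p;
            hasPose = true;
        }
        bool fetch(Pose& out) {              // called from the render thread
            std::lock_guard<std::mutex> lock(mtx);
            if (!hasPose) return false;
            out = latest;
            return true;
        }
    };
    ```

    The render thread would call fetch() once per frame and fall back to its last known pose if nothing has been published yet.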