Forum Discussion

Proton
Honored Guest
12 years ago

0.4.1 Start & End Timing Question

In OVRCamera.cs:

	IEnumerator CallbackCoroutine()
	{
		while (true)
		{
			OVRDevice.HMD = Hmd.GetHmd();
#if UNITY_EDITOR_WIN || (!UNITY_EDITOR_OSX && UNITY_STANDALONE_OSX)
			yield return new WaitForEndOfFrame();
#else
			yield return null;
#endif
			OnCoroutine();
		}
	}

I'm referencing the flowchart at the bottom of Unity's "Execution Order of Event Functions" documentation page.

If I add Debug statements, then in the editor BeginFrame is called first and EndFrame second. But in a Windows build, EndFrame is called first and BeginFrame gets called later that frame. Is this something we should worry about, or perhaps some Unity issue you guys are working around?

7 Replies

  • draxov
    Honored Guest
    From what I know about Unity, I believe "yield return null" will give the same result as "new WaitForEndOfFrame()", as the execution order documentation in the Unity API says:

    yield: The coroutine will continue after all Update functions have been called on the next frame.


    So BeginFrame is issued on the camera's PreCull, while an eye count is incremented to two (one pre-cull for each eye). Then, at the end of the frame after all the updates, the coroutine executes for each camera and decrements the eye count, so it knows when both eyes have finished rendering and then issues the EndFrame event.

    That's as much as I can pick up from the code. I don't think the coroutine should execute before the first camera render, as it's meant to run at the end of a frame. Maybe an official response might reveal more.

    Though I do have some questions about the execution order of events, like why can't you use PreCull and PostRender rather than a coroutine? The only thing is you'd have to set the eye count before rendering, but you can disable the cameras and manually call Camera.Render() to control this more, along the lines of the sketch below. Just a thought for the Oculus guys.
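    Something like this is what I had in mind (just a rough sketch to illustrate the idea, not the actual OVR integration; the camera references and plugin event IDs are placeholders):

    using UnityEngine;

    // Disable the per-eye cameras and render them manually, so the begin/end
    // plugin events always bracket exactly one stereo render.
    public class ManualStereoRender : MonoBehaviour
    {
        public Camera leftEyeCamera;   // placeholder reference
        public Camera rightEyeCamera;  // placeholder reference

        void Start()
        {
            // Keep Unity from rendering these cameras on its own.
            leftEyeCamera.enabled = false;
            rightEyeCamera.enabled = false;
        }

        void LateUpdate()
        {
            GL.IssuePluginEvent(0); // hypothetical "begin frame" event ID
            leftEyeCamera.Render();
            rightEyeCamera.Render();
            GL.IssuePluginEvent(1); // hypothetical "end frame" event ID
        }
    }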
  • Proton
    Honored Guest
    The coroutine runs after the Update functions, but before the Scene rendering (per Unity's execution-order flowchart).


    I added a few debug statements:

    void OnPreCull()
    {
        ...
        if (PendingEyeCount == 0)
        {
            GL.IssuePluginEvent((int)EventType.BeginFrame);
            Debug.Log(string.Format("Frame:{0} CameraDepth:{1} Time:{2} {3}", Time.frameCount, camera.depth, Hmd.GetTimeInSeconds(), "BeginFrame"));
        }
    }

    void OnCoroutine()
    {
        ...
        if (PendingEyeCount == 0)
        {
            GL.IssuePluginEvent((int)EventType.EndFrame);
            Debug.Log(string.Format("Frame:{0} CameraDepth:{1} Time:{2} {3}", Time.frameCount, camera.depth, Hmd.GetTimeInSeconds(), "EndFrame"));
        }
    }

    void OnPostRender()
    {
        Debug.Log(string.Format("Frame:{0} CameraDepth:{1} Time:{2} {3}", Time.frameCount, camera.depth, Hmd.GetTimeInSeconds(), "OnPostRender"));
    }


    If I run Tuscany in the editor, the log shows BeginFrame before EndFrame each frame. Looks good.

    When I run a Windows build, EndFrame is called before BeginFrame.

    I think the vsync sleep happens between OnPostRender and EndFrame, but I'm not sure how to verify that. Perhaps this is the expected behavior, but it seems odd.
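    One way I can think of to check it (a rough sketch added alongside the existing debug statements; "lastPostRenderTime" is a new field, and this assumes the frame itself renders quickly so the gap is dominated by any vsync wait):

    private float lastPostRenderTime;

    void OnPostRender()
    {
        lastPostRenderTime = Time.realtimeSinceStartup;
    }

    void OnCoroutine()
    {
        // If the vsync sleep really sits between OnPostRender and this callback,
        // this gap should cover most of the remaining frame interval
        // (roughly 13 ms at 75 Hz when the scene renders quickly).
        float gap = Time.realtimeSinceStartup - lastPostRenderTime;
        Debug.Log(string.Format("PostRender -> coroutine gap: {0:F2} ms", gap * 1000f));
    }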
  • draxov
    Honored Guest
    In a build, it uses "yield return null" instead of "yield return new WaitForEndOfFrame()", which I think means EndFrame will be issued first, since the execution order diagram shows that yield resuming before rendering starts. Odd decision; I wonder if it's a workaround for something build-related.
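    A quick way to see the difference outside of the OVR code (just a throwaway test script; attach it to an enabled Camera so OnPostRender fires, then watch the order of the log lines each frame):

    using System.Collections;
    using UnityEngine;

    public class YieldOrderTest : MonoBehaviour
    {
        IEnumerator Start()
        {
            while (true)
            {
                yield return null;                    // resumes after Update, before rendering
                Debug.Log("after yield return null");
                yield return new WaitForEndOfFrame(); // resumes after rendering / OnPostRender
                Debug.Log("after WaitForEndOfFrame");
            }
        }

        void OnPostRender()
        {
            Debug.Log("OnPostRender");
        }
    }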
  • ryahata
    Honored Guest
    @Proton: Have you tried to change it to see what happens? I noticed this block of code and was perplexed by it as well. I assumed there must have been a reason for this. If I have some free time I'll try it and let you all know what happens.

    What also confuses me is that this behavior is flipped for Mac OS users...

    If anyone can shed some light as to why these choices were made that would be awesome.
  • vrdaveb
    Is this causing an actual issue for you? This code interfaces with our internal native plugin, which performs lens correction using the C++ SDK. Specifically, it initiates the ovrHmd_BeginFrame(..), ovrHmd_GetTrackingState(..), and ovrHmd_EndFrame(..) calls. There are differences due to varying Unity behaviors on different platforms, but we always send the per-eye RenderTextures to the plugin after rendering and image effects are done. It should be safe to ignore this code as long as you use OVRDevice, OVR.Hmd, and Unity's rendering API.
  • ryahata
    Honored Guest
    Hey vrdaveb. Thanks for the quick response.

    "vrdaveb" wrote:
    Is this causing an actual issue for you?

    I can't speak for Proton and draxov but I'm just curious for my own edification.
    "vrdaveb" wrote:
    There are differences due to varying Unity behaviors on different platforms.

    Can you explain to us what those differences are?

    Thanks again for all your time and help.
  • Proton
    Honored Guest
    "vrdaveb" wrote:
    Is this causing an actual issue for you?

    Possibly. The DK2 is making me dizzy after 5 minutes in situations where DK1 was fine (with vsync off). I'm just looking for anything that could be causing latency.

    "vrdaveb" wrote:
    This code here interfaces with our internal native plugin, which performs lens correction using the C++ SDK. Specifically, it initiates ovrHmd_BeginFrame(..), ovrHmd_GetTrackingState(..), and ovrHmd_EndFrame(..) calls.

    In the Oculus_Developer_Guide.pdf, 8.2.3 Frame Rendering:

    As suggested by their names, calls to ovrHmd_BeginFrame and ovrHmd_EndFrame enclose the body of the frame rendering loop. ovrHmd_BeginFrame is called at the beginning of the frame, returning frame timing information in the ovrFrameTiming struct. Values within this structure are useful for animation and correct sensor pose prediction. ovrHmd_EndFrame should be called at the end of the frame, in the same place that you would typically call Present. This function takes care of the distortion rendering, buffer swap, and GPU Sync.


    And 8.2.4 Frame Timing:

    Accurate frame and sensor timing are required for accurate head motion prediction, which is essential for a good VR experience. Prediction requires knowing exactly when in the future the current frame will appear on the screen. If we know both sensor and display scanout times, we can predict the future head pose and improve image stability. Miscomputing these values can lead to under or over-prediction, degrading perceived latency and potentially causing overshoot "wobbles".
    ...
    Render frame timing is managed at a low level by two functions: ovrHmd_BeginFrameTiming and ovrHmd_EndFrameTiming. ovrHmd_BeginFrameTiming should be called at the beginning of the frame, and returns a set of timing values for the frame. ovrHmd_EndFrameTiming implements most of the actual frame vsync tracking logic. It must be called at the end of the frame after swap buffers and GPU Sync. With SDK Distortion Rendering, ovrHmd_BeginFrame and ovrHmd_EndFrame call the timing functions internally, and so these do not need to be called explicitly.

    So my concern would be whether issuing EndFrame at the start of the frame (possibly after the vsync) is causing prediction or timewarp problems.


    "ryahata" wrote:
    @Proton: Have you tried to change it to see what happens?

    On a Windows build, you get black flickering when using WaitForEndOfFrame in DX9; with DX11 it seems okay. I can't say I notice any difference in latency, though.
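    For reference, the change I tested just drops the platform switch in CallbackCoroutine so builds also wait for the end of the frame, roughly:

    IEnumerator CallbackCoroutine()
    {
        while (true)
        {
            OVRDevice.HMD = Hmd.GetHmd();
            // Always wait for end of frame instead of switching per platform.
            yield return new WaitForEndOfFrame();
            OnCoroutine();
        }
    }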