Forum Discussion

🚨 This forum is archived and read-only. To submit a forum post, please visit our new Developer Forum. 🚨
thewhiteambit
Adventurer
8 years ago

Why that nonsense in ovrLayerEyeFovDepth?

Each field in the ovrLayerEyeFovDepth struct is an array of [ovrEye_Count] (that is, 2).
The only exception is the "ovrTimewarpProjectionDesc ProjectionDesc" field.
ProjectionDesc has to be generated from the eye FOV.
Since the FOV is different for each eye, it makes no sense that it is present only once!

I averaged the FOV, zNear, and zFar of both eyes, but it still makes no sense that this field is present only once.

3 Replies

  • volgaksoy
    Expert Protege
    When you say:
    ProjectionDesc has to be generated from the eye FOV.
    That isn't quite true. While we do indeed have a helper/utility function called ovrTimewarpProjectionDesc_FromProjection that takes in an ovrMatrix4f projection matrix, you can fill in the ovrTimewarpProjectionDesc struct manually using whatever means you feel necessary. You can look at the implementation of ovrTimewarpProjectionDesc_FromProjection to get a good sense of what it's doing.

    The reason the EyeFovDepth layer doesn't ask for two separate ProjectionDesc values (one for each eye) is that we expect each eye render to use the same depth mapping values, which are defined by the near and far clip planes and the direction of depth increment vs. decrement. Note that none of those values require knowledge of the actual FOV of the projection. So the ProjectionDesc only uses the values of the 4x4 projection matrix that affect depth calculations (i.e. the 3rd and 4th columns). If you are using *different* depth clip planes for each eye, I'd strongly recommend avoiding that. However, if you have a valid use case for such an approach, we'd definitely love to know about it.

    Assuming you're not doing that, then in a nutshell, for populating the ProjectionDesc, you can use either eye's projection matrix when calling our ovrTimewarpProjectionDesc_FromProjection helper function. Taking an average of the two FOVs shouldn't be necessary, and either approach should generate exactly the same results.

    For reference, take a look at the OculusRoomTiny sample code, which also uses an EyeFovDepth layer (in main.cpp), to see how we handle this.
  • thewhiteambit
    Don't be so picky about my words when I'm trying to give a simple breakdown. I know these values can be generated without the FOV, by generating them the way you would for a projection matrix.

    You really can't imagine a situation where zNear and zFar are different for each eye? What if the renderer calculates the needed zNear and zFar per eye, based on object distance, to maximize depth precision?
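To illustrate volgaksoy's point that the timewarp depth terms depend only on the near/far clip planes (and the depth direction), not on the FOV, here is a minimal sketch. This is *not* the Oculus SDK implementation: the struct merely mirrors the Projection22/Projection23/Projection32 fields of ovrTimewarpProjectionDesc, and the matrix convention is an assumed right-handed, D3D-style (clip z in [0,1]) perspective projection.

```cpp
#include <cmath>

// Mirrors ovrTimewarpProjectionDesc: the only depth-related terms of a
// 4x4 perspective projection matrix m (row-major, m[row][col]).
struct TimewarpProjectionDescSketch {
    float Projection22; // m[2][2]: depth scale
    float Projection23; // m[2][3]: depth offset
    float Projection32; // m[3][2]: perspective-divide term
};

// Assumed right-handed, D3D-style projection. Only m[2][2], m[2][3],
// and m[3][2] involve depth, and none of them depend on the FOV, so
// both eyes produce the same values as long as they share zNear/zFar.
TimewarpProjectionDescSketch DepthTermsFromClipPlanes(float zNear, float zFar) {
    TimewarpProjectionDescSketch d;
    d.Projection22 = zFar / (zNear - zFar);
    d.Projection23 = (zFar * zNear) / (zNear - zFar);
    d.Projection32 = -1.0f;
    return d;
}
```

With this convention, feeding either eye's projection matrix to ovrTimewarpProjectionDesc_FromProjection would extract the same three values, which is why the layer carries a single ProjectionDesc rather than one per eye.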