Forum Discussion

🚨 This forum is archived and read-only. To submit a forum post, please visit our new Developer Forum. 🚨
AbdulVR
Honored Guest
11 years ago

Distortion mesh gaps

Hello,
I was wondering what kind of voodoo I need to apply to the distortion mesh coordinates to get it to fill the entire visible area.
This is what I have so far:
http://i61.tinypic.com/350a104.png
The only special thing I am doing per eye is remapping the distortion mesh NDC coordinates to the full [-1, 1] range.
Abdul
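
The per-eye remap described above can be sketched as follows. This is an illustrative stand-alone function, not SDK code, and it assumes the SDK's convention that the distortion mesh screen positions span the whole display in NDC (left eye x in [-1, 0], right eye x in [0, 1]):

```cpp
// Remap a distortion-mesh NDC x coordinate from its half-screen range
// to the full [-1, 1] range of a per-eye render target.
float remapEyeX(float x, bool leftEye)
{
    return leftEye ? (x * 2.0f + 1.0f)   // [-1, 0] -> [-1, 1]
                   : (x * 2.0f - 1.0f);  // [ 0, 1] -> [-1, 1]
}
```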

19 Replies

  • AbdulVR
    Honored Guest
    "cybereality" wrote:
    Internally, we are still doing tests around this technique, but initial experiments did not yield much performance gain.


    In one taxing situation I got a little over 5% improvement, which is not ground-breaking, but free gains are free, right?! :)
  • AbdulVR
    Honored Guest
    "kojack" wrote:
    I've been planning on trying out the same technique, but with the stencil buffer instead of the depth buffer.
    At 1080p for the pre-distortion buffer it might not have much of a saving, but at the higher recommended res the saving would increase.

    The shape used as the blocking volume isn't based on the positions of the distortion mesh. Instead you'd use the uv coords of the distortion mesh (probably the blue channel, since iirc it stretches further due to chromatic aberration) as positions of the blocking mesh. That should find the region of the pre-distortion eye buffer that is actually visible post distortion.


    Hmm let me try the distortion uvs to see if I can close the gaps!
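
    kojack's suggestion can be sketched as below. This is a hedged, self-contained illustration, not SDK code: DistortionVertex is a simplified stand-in for the SDK's ovrDistortionVertex, and the mesh topology and stencil render state are omitted. The idea is to promote the mesh's UVs (which address the pre-distortion eye buffer in [0, 1]) to clip-space positions, then render that mesh into the stencil buffer to mark the region actually sampled after distortion:

```cpp
#include <vector>

// Simplified stand-in for the SDK's distortion vertex; the real one
// stores a screen position plus per-channel (R, G, B) UVs.
struct DistortionVertex {
    float screenPosNDC[2];
    float uvB[2];  // blue-channel UV into the pre-distortion eye buffer, in [0, 1]
};

// Build clip-space positions for a stencil "blocking" mesh: instead of
// the mesh's screen positions, promote its UVs to positions. Rendering
// this mesh into the stencil buffer marks the part of the eye buffer
// that is actually visible post-distortion.
std::vector<float> buildBlockingPositions(const std::vector<DistortionVertex>& verts)
{
    std::vector<float> out;
    out.reserve(verts.size() * 2);
    for (const DistortionVertex& v : verts) {
        out.push_back(v.uvB[0] * 2.0f - 1.0f);  // [0, 1] -> [-1, 1]
        out.push_back(v.uvB[1] * 2.0f - 1.0f);
    }
    return out;
}
```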
  • AbdulVR
    Honored Guest
    "AbdulVR" wrote:
    "kojack" wrote:
    I've been planning on trying out the same technique, but with the stencil buffer instead of the depth buffer.
    At 1080p for the pre-distortion buffer it might not have much of a saving, but at the higher recommended res the saving would increase.

    The shape used as the blocking volume isn't based on the positions of the distortion mesh. Instead you'd use the uv coords of the distortion mesh (probably the blue channel, since iirc it stretches further due to chromatic aberration) as positions of the blocking mesh. That should find the region of the pre-distortion eye buffer that is actually visible post distortion.


    Hmm let me try the distortion uvs to see if I can close the gaps!


    Alright, I tried all three sets of UVs (they are already in NDC space per-eye, which is noyce!) and red seems to fit my needs perfectly!
    Blue might be a bit of overkill according to my tests.
    Thanks everyone :)
    Abdul
  • AbdulVR
    Honored Guest
    This is weird: using the TanEyeAngles worked on the DK2, with the red channel providing the most conservative yet correct values, but on CB it's way off across all the channels!
    Click
    Any help from the Oculus team would be great.
    Abdul
  • AbdulVR
    Honored Guest
    Bumpty bump bump!
    To the Oculus devs, here's my question:
    Given the distortion meshes, how do I draw them pre-distortion, i.e. in the world, so that when distortion is finally applied they completely fill the visible area?
    Thank you,
    Abdul
  • "AbdulVR" wrote:
    Bumpty bump bump!
    To the Oculus devs, here's my question:
    Given the distortion meshes, how do I draw them pre-distortion, i.e. in the world, so that when distortion is finally applied they completely fill the visible area?
    Thank you,
    Abdul


    You will want to modify the eyeFov value passed into ovrHmd_ConfigureRendering() and adjust your fov accordingly.
  • Actually, both the current Oculus approach and the Valve proposal strike me as a little wonky.

    Oculus creates an un-stenciled offscreen framebuffer that exactly matches the aspect ratio of the screen half you're looking at because... reasons? It's fairly obvious and easy to compute, but in theory the offscreen buffer could have any size and aspect ratio you wanted.

    Valve came along and said:

    Hey, the distorted view has corners on it, but there are no corners in real vision, so why don't we shave that off (along with the stuff in the middle that can't be seen either, because it's occluded by the viewport)? That results in fewer pixels rendered and presumably increased performance. Yay!


    But the approach they took was simply to assume that a circular field of view would be most appropriate, when a real human's field of vision is not a regular geometric shape:



    A proper stencil map should be shaped based on what a person would actually be able to perceive within the Rift while accounting for the limitations of the lenses and screen as well as accommodating performance requirements. I suspect that the stencil could actually shave off more of the upper interior portions of the view (upper left sector on this diagram) while at the same time could allow more content in the upper and lower exteriors.
  • The eye-socket FOV model is simply an average and varies from person to person; it would be asking a lot of users to sculpt a customized stencil for their face. The rounded FOV boundaries are based on the round lens, since it generally determines the edge of the viewable region. On the DK1, this circular shape was more prevalent since it was more lens limited than screen limited. The DK2 is generally screen limited, and so this boundary is often clipped to the screen edge to avoid rendering off-screen graphics. But the resulting barrel shape does offer one area for potential improvement, since it introduces black corners that may be visible.
  • AbdulVR
    Honored Guest
    "brantlew" wrote:
    "AbdulVR" wrote:
    Bumpty bump bump!
    To the Oculus devs, here's my question:
    Given the distortion meshes, how do I draw them pre-distortion, i.e. in the world, so that when distortion is finally applied they completely fill the visible area?
    Thank you,
    Abdul


    You will want to modify the eyeFov value passed into ovrHmd_ConfigureRendering() and adjust your fov accordingly.


    Can I bother you with a request for a code sample?
    Thanks,
    Abdul
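
For reference, the eyeFov adjustment brantlew describes might look something like the sketch below. FovPort is a simplified stand-in for the SDK's ovrFovPort (per-edge half-FOV tangents), and the scale factor is a made-up example; the widened struct would then be passed to ovrHmd_ConfigureRendering() in place of the default FOV:

```cpp
// Simplified stand-in for the SDK's ovrFovPort: tangents of the
// half-angles from the eye's view axis to each frustum edge.
struct FovPort {
    float UpTan, DownTan, LeftTan, RightTan;
};

// Widen the per-eye FOV by a uniform factor so the distortion mesh
// covers the whole visible area. Pick the smallest scale that closes
// the gaps on your device; 1.25f here is purely illustrative.
FovPort scaleFov(FovPort fov, float scale)
{
    fov.UpTan    *= scale;
    fov.DownTan  *= scale;
    fov.LeftTan  *= scale;
    fov.RightTan *= scale;
    return fov;
}
```

Note that widening the FOV while keeping the same render-target size lowers the effective pixel density, so you may also want to grow the eye buffer proportionally.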