Forum Discussion

dshankar
Honored Guest
11 years ago

My Quick Guide to Preparing your DK2 app for Oculus CV1

Hey VR Devs!

With CV1 coming soon (months, not years!) I wrote this blog post to help you

1. determine how well your DK2 VR application will perform on the CV1
2. learn some optimization tricks to help you hit 90fps

Read the full blog post:
http://dshankar.svbtle.com/preparing-yo ... s-rift-cv1

Reddit discussion: https://www.reddit.com/r/oculus/comment ... plication/

TL;DR: you can simulate CV1 performance by increasing the render scale to use a higher-resolution eye render target texture.
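For example, in Unity (using the 5.x-era API, where render scale is exposed as `UnityEngine.VR.VRSettings.renderScale`), a minimal sketch might look like this. The 1.5x figure below is derived from the published panel specs, not from profiling, so treat the exact value as illustrative:

```csharp
using UnityEngine;
using UnityEngine.VR; // Unity 5.x-era VR namespace

// Minimal sketch: oversample the DK2 eye textures so the GPU pushes
// roughly the pixels-per-second it would need to render for CV1.
public class SimulateCV1Load : MonoBehaviour
{
    void Start()
    {
        // DK2 panel: 1920x1080 @ 75 Hz. CV1 panel: 2160x1200 @ 90 Hz.
        // Pixels-per-second ratio = (2160*1200*90) / (1920*1080*75) = 1.5.
        float ratio = (2160f * 1200f * 90f) / (1920f * 1080f * 75f);

        // renderScale multiplies width and height, so take the square root
        // to raise per-frame pixel count by ~1.5x while staying at 75 Hz.
        VRSettings.renderScale = Mathf.Sqrt(ratio); // ~1.22
    }
}
```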

Any feedback/criticisms/ideas welcome!

Thanks,
Darshan

3 Replies

  • galopin
    Heroic Explorer
    If I follow the logic: you start by increasing the resolution because the CV1 has a higher resolution, then you suggest lowering it back as a quick workaround when an application cannot afford it :)

    I am not working on a soon-to-be-retail VR application, but if I were, the first thing I would implement is real-time dynamic resolution scaling (rough sketch at the end of this reply). There is no need to sacrifice pixel density for 100% of the game when only 15-20% of it may miss the target! The heavy moments also tend to be very short, like a big smoke explosion, so lowering the resolution in those cases is unlikely to be visible.

    Once DX12 is standard and properly available through the Oculus SDK, memory aliasing becomes possible: keep a history of frame durations, and when you are close to missing a frame, alias to a smaller resolution until you detect a lighter situation.

    For DX11, memory aliasing is not possible and allocating many render targets at different resolutions is out of the question, so the only option is to render into a sub-viewport of the render target, taking extra care in existing post-process passes to rescale UVs when sampling and to avoid reading outside the valid region.
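    Here is a rough sketch of that control loop, shown in Unity C# for brevity. The thresholds, step sizes, and the use of CPU frame time as a stand-in for GPU load are all illustrative assumptions, not part of any SDK; in a DX11 engine the resulting fraction would drive the sub-viewport size instead.

    ```csharp
    using UnityEngine;

    // Rough sketch: keep a short history of frame durations, back off the
    // resolution fraction when close to missing frames, and recover slowly
    // once the load eases. All thresholds here are illustrative.
    public class DynamicResolution : MonoBehaviour
    {
        const float FrameBudgetMs = 1000f / 90f; // 11.1 ms at CV1's 90 Hz
        const int HistorySize = 10;              // short frame-time history

        readonly float[] historyMs = new float[HistorySize];
        int next;
        float fraction = 1.0f;                   // current resolution fraction

        void Update()
        {
            // CPU frame delta is only a rough proxy for GPU load.
            historyMs[next] = Time.unscaledDeltaTime * 1000f;
            next = (next + 1) % HistorySize;

            float worst = 0f;
            foreach (float ms in historyMs) worst = Mathf.Max(worst, ms);

            if (worst > FrameBudgetMs * 0.95f)      // close to failure: back off
                fraction = Mathf.Max(0.7f, fraction - 0.05f);
            else if (worst < FrameBudgetMs * 0.80f) // lighter situation: recover
                fraction = Mathf.Min(1.0f, fraction + 0.01f);

            // Nearest knob in Unity; note this reallocates the eye textures,
            // which is exactly what the sub-viewport approach avoids in DX11.
            UnityEngine.VR.VRSettings.renderScale = fraction;
        }
    }
    ```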
  • Yep. Increase the resolution to simulate the number of pixels per second you need to render. It does ignore some possible bottlenecks, like rendering latency, but it's one way of testing.

    You can already dynamically adjust render target resolution, at least in Unity.

    You set a larger render scale, and simply reduce it when you start a compute-intensive scene. When you reduce the render scale, Unity culls the viewport and renders a smaller frame. You can then jump back to the larger resolution at any time (minimal sketch below).
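    A minimal sketch of that pattern, assuming the Unity 5.x-era VRSettings API; the scale values and the scene hooks are illustrative, not from any SDK:

    ```csharp
    using UnityEngine;
    using UnityEngine.VR; // Unity 5.x-era VR namespace

    // Minimal sketch: oversample by default to simulate CV1, drop back to
    // native scale while a known-heavy scene runs, then restore.
    public class SceneRenderScale : MonoBehaviour
    {
        public float normalScale = 1.22f; // oversampled "CV1 simulation" scale
        public float heavyScale = 1.0f;   // fallback for expensive scenes

        void Start() { VRSettings.renderScale = normalScale; }

        // Call these from your scene-loading code (hypothetical hooks).
        public void OnHeavySceneStart() { VRSettings.renderScale = heavyScale; }
        public void OnHeavySceneEnd()   { VRSettings.renderScale = normalScale; }
    }
    ```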