Forum Discussion

Noxy
Honored Guest
10 years ago

Screen flashes when changing RenderScale from >1.0 back to 1.0 at runtime.

In my app, there are scenarios where I want to dynamically increase RenderScale to 1.5 to improve visual quality (on GearVR). This works perfectly for me. However, changing RenderScale back to 1.0 sometimes causes the screen to flash (with a random image).
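
For context, the switching logic is essentially just this (a simplified sketch; the class and method names and the exact trigger are placeholders):

    using UnityEngine;
    using UnityEngine.VR;   // Unity 5.x namespace for VRSettings (UnityEngine.XR in later versions)

    public class QualityToggle : MonoBehaviour
    {
        // Called when the app enters or leaves the high-quality scenario.
        public void SetHighQuality(bool enabled)
        {
            // Re-allocates the eye buffers whenever the value actually changes.
            VRSettings.renderScale = enabled ? 1.5f : 1.0f;
        }
    }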

Has anyone seen this issue? I'm wondering whether this is a bug in the RenderScale API or simply an unsupported scenario.

Thanks,
Nox

6 Replies

  • Anytime you change VRSettings.renderScale, it re-allocates the eye buffers, which can lead to dropped frames. But this ought to work without flashing a garbage image. It sounds like there may be a race condition between the re-allocation and the next buffer swap after it. We'll investigate. If you are varying the resolution on a frame-by-frame basis, you should use 5.4's VRSettings.renderViewportScale instead. It allocates only the largest necessary buffers, but uses a smaller viewport within them to save fill rate and bandwidth.
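    A minimal sketch of the renderViewportScale approach (the load heuristic driving targetScale is up to you; the class name is illustrative):

        using UnityEngine;
        using UnityEngine.VR;   // Unity 5.x namespace for VRSettings

        public class DynamicViewportScale : MonoBehaviour
        {
            [Range(0f, 1f)]
            public float targetScale = 1.0f;   // drive this from your own performance heuristic

            void Update()
            {
                // No eye-buffer re-allocation: the textures keep their allocated size,
                // only the viewport rendered into them changes from frame to frame.
                VRSettings.renderViewportScale = Mathf.Clamp01(targetScale);
            }
        }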
  • Noxy
    Honored Guest
    Thanks Dave, where can I find more info about the differences between renderScale and renderViewportScale?
    I noticed that renderViewportScale can range from 0 to 1; what's the default value? (I assumed 1.)

    Per your note above, I could set renderScale to 2 (my max), then dynamically set renderViewportScale down to 0.5 in scenes (or scenarios) where I don't need the max level of sharpness. Is that correct?
  • where can I find more info about the differences between renderScale and renderViewportScale?
    See https://docs.unity3d.com/540/Documentation/ScriptReference/VR.VRSettings-renderViewportScale.html and https://docs.unity3d.com/540/Documentation/ScriptReference/VR.VRSettings-renderScale.html.

    what's the default value?

    Yep, it's 1. That means the entire texture will be used for rendering. A renderViewportScale of 0.5 would use a 0.5x0.5 square inside the texture, resulting in a 75% pixel throughput savings. A renderScale of 0.5 would result in the same savings, but it would force a re-allocation of the eye buffers instead of using a smaller viewport within a large buffer.

    I could set renderScale to 2 (my max), then dynamically set renderViewportScale down to 0.5 in scenes (or scenarios) where I don't need the max level of sharpness.

    Yes, renderScale * renderViewportScale would cancel out to 1 in that case, so the sharpness would be the same as with renderScale 1 and renderViewportScale 1, but there would be a larger memory footprint.
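
    A sketch of that pattern (renderScale is set once up front, then only renderViewportScale is varied at runtime; the class and method names are illustrative):

        using UnityEngine;
        using UnityEngine.VR;

        public class SharpnessController : MonoBehaviour
        {
            void Start()
            {
                // One-time allocation of the larger eye buffers.
                VRSettings.renderScale = 2.0f;
                // Start at default-equivalent sharpness (2.0 * 0.5 = 1.0).
                VRSettings.renderViewportScale = 0.5f;
            }

            // Pass 1.0 for scenes that need maximum sharpness, 0.5 otherwise.
            public void SetSharpness(float viewportScale)
            {
                VRSettings.renderViewportScale = Mathf.Clamp01(viewportScale);
            }
        }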

  • Noxy
    Honored Guest
    One more question: for the latter scenario (renderScale: 2, renderViewportScale: 0.5), do I also pay the performance penalty twice? (i.e., the perf implication of setting a non-default renderScale plus the perf implication of setting a non-default renderViewportScale)
  • There are 2 different costs to consider here:
    1) The number of pixels being rendered and texels being read. This is related to renderScale * renderViewportScale. So 2 * 0.5 = 1 should have no impact on this.
    2) The memory footprint. This is controlled by renderScale and has nothing to do with renderViewportScale.
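    To make the two costs concrete, here is the arithmetic for a hypothetical 1024x1024 default eye buffer (the base resolution is only an example):

        // Default (renderScale 1.0):     1024 x 1024 eye texture per eye
        // renderScale = 2.0:             2048 x 2048 allocated -> roughly 4x the memory footprint
        // renderViewportScale = 0.5:     renders into a 1024 x 1024 viewport of that texture
        //
        // Pixels rendered per frame: unchanged (2.0 * 0.5 = 1.0 on each axis).
        // GPU memory for the eye textures: roughly quadrupled.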
  • Noxy
    Honored Guest
    Sounds good. In our case, the increased memory footprint should be worth the improved visual quality.

    Thanks again!