Forum Discussion
B_old
10 years ago · Honored Guest
Re-using old frames (when slow)
Our application is generally struggling to maintain (or even get close to) 75 fps, at least partly due to data sets that we cannot optimize further.
An idea has been proposed: decouple our rendering from feeding the Oculus by submitting from a different thread, pretending to hit 75 fps while in reality re-using old frames until a new one is available. The theory is that this will somehow feel smoother. Could this actually work? I guess it could give the OVR runtime a chance to apply its motion-prediction magic more frequently, even though the image is technically still the same. But maybe this is already done internally?
Do you think this is worth a shot?
11 Replies
- jherico · Adventurer
"B_old" wrote:
Let's make a more extreme example. Suppose I can only generate 1 unique frame per second. Will I get 75 Hz timewarp if I only call SubmitFrame at 1 Hz?
No. Currently you must call Submit at 75 Hz, even if you're calling it repeatedly with the same frame, in order to get the effect you're looking for.
In certain conditions you can do a form of asynchronous timewarp on the client side, but how well it works depends on the workload you're placing on the GPU. The basic mechanism is to do your rendering to an offscreen buffer on one thread (your producer thread), and then as you complete frames pass them to another thread (your consumer thread) for submission to the SDK.
The consumer thread does essentially nothing but wait for completed frames and submit them to the SDK, while the producer thread renders new frames as fast as it can.
However, this can still be problematic. Neither OpenGL nor Direct3D has a mechanism to prioritize one context over another. I believe both NVIDIA and AMD are working on extensions that will allow this kind of prioritization (it may even be implemented in SDK 0.7), but even then, they typically can't interrupt a draw command that is already executing.
So what happens is that your consumer thread submits a given texture to be distorted, timewarped, and displayed, but if the consumer thread's GPU commands don't get executed in a timely fashion, you can still get frame drops and judder. My book originally contained a section in the performance chapter and an example on async timewarp, but I dropped it because of this problem: the example code worked fine, since it only simulated a heavy GPU load with sleeps, but when I tried to put the technique to use in my Shadertoy VR app it didn't work, because the shaders that really needed it were consuming the GPU with a single gigantic draw call.
It's possible that the technique will work for you, but it depends largely on where your bottleneck is. If you're CPU bound and the GPU has lots of idle headroom, then it will probably work. If you're GPU bound then it might work if your rendering is broken up into enough discrete draw calls. If you're GPU bound and you're using a few very heavy draw calls, then it probably won't do you any good.