Forum Discussion
jherico
11 years ago · Adventurer
SDK-side distortion, OpenGL & glFinish()/glFlush()
I've been brushing up on my GL skills, and I was wondering about the flow through the GL distortion pipeline in the SDK. I'm fundamentally unhappy with the idea of glFlush()/glFinish() being used for...
vrdaveb
11 years ago · Oculus Staff
Thanks, jherico. Sync objects would indeed tell us when commands retired with fewer pipeline stalls. Unfortunately, without glWaitSync(..), they would make it more difficult to schedule distortion rendering right before the vblank.
The only reason we flush and wait is that it lets us sample the actual orientation and adjust the projection immediately before scan-out. It's true that this adds a pipeline bubble of a couple of milliseconds between your rendering and our distortion. But it reduces head motion-to-photons latency, which your brain is very sensitive to. Currently, there's too much error in poses that were predicted even just one frame ahead. We try to make the rendered image reflect actual data that is sensed just a millisecond or two before the display begins to show it.
The only reason you can't start rendering the next frame until after we perform distortion is that we are sharing a single context. You may recall that we're working toward an asynchronous timewarp technique, which performs distortion from a separate thread and context. Once that is available, you'll be able to call ovrHmd_EndFrame(..) and begin rendering the next frame without the stalls that you see today. Sync objects may be helpful there.
That being said, how much GPU time is the flush currently burning for your app? Once you have rendered the left and right eye textures, you could potentially get started on the next frame, as long as you call ovrHmd_EndFrame(..) several milliseconds before the next retrace. That's in the ballpark of 9ms after the previous call returned. You couldn't make extra eye begin/end calls, but you could set up state and even do some rendering with a camera pose of your own sampling.
"jherico" wrote:
It seems like one of the fundamental problems you're trying to solve is to ensure that when the next frame starts you have a decent idea of when it's going to be rendered, and thus can provide the most accurate head pose information.