Forum Discussion
rsjtaylor
11 years ago · Honored Guest
Async transfers and timewarp
I'm using client distortion rendering in OpenGL and doing some texture streaming at regular intervals, but at a lower rate than the framerate.
I'd like to just use buffer objects and let the driver do its asynchronous thing, but timewarp imposes an explicit pipeline stall each frame to make sure the render timings are correct (via glFlush() and glFinish() in the SDK renderer).
This means my texture uploads are forced to complete within a frame, which in turn hobbles frame rate.
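For context, the kind of streaming described above typically looks like the sketch below: orphan a pixel buffer object, fill it, and kick off the upload so the DMA transfer can overlap with rendering. All names (`stream_texture`, the RGBA8 format, the dimensions) are my own placeholders, not from the SDK.

```c
#include <GL/gl.h>
#include <string.h>

/* Asynchronous texture upload through a PBO (sketch; assumes a current
 * GL context and a pre-created pbo/tex pair with matching size/format). */
void stream_texture(GLuint pbo, GLuint tex, const void *pixels, int w, int h)
{
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    /* Orphan the buffer so the driver need not wait on prior uploads. */
    glBufferData(GL_PIXEL_UNPACK_BUFFER, (GLsizeiptr)w * h * 4, NULL,
                 GL_STREAM_DRAW);
    void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    if (dst) {
        memcpy(dst, pixels, (size_t)w * h * 4);
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
        glBindTexture(GL_TEXTURE_2D, tex);
        /* With a PBO bound, the data pointer is an offset into the buffer;
         * the transfer can complete asynchronously -- unless a glFinish()
         * later in the frame forces it through. */
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_RGBA, GL_UNSIGNED_BYTE, (const void *)0);
    }
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
```

It is exactly that last step that the per-frame glFinish() serializes.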
So, I see two possible solutions:
1) Use an implicit sync on the eye texture rendering - I think I could do this by doing a glFlush() and then reading back a pixel from the rendered texture into client memory. Hopefully this ensures the render is queued and then blocks until it finishes without touching my buffer objects, but I'm not 100% sure.
2) Use two contexts with shared objects, then do the texture streaming in one and the rendering in another. This would ensure the explicit sync in one context doesn't affect the streaming in the other (I think).
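A minimal sketch of option 1, assuming the eye textures are rendered through a framebuffer object (`eye_fbo` is a placeholder name, not anything from the SDK):

```c
#include <GL/gl.h>

/* Option 1 sketch: block until rendering into eye_fbo has completed,
 * without issuing a full glFinish() on the command stream. */
void sync_on_eye_render(GLuint eye_fbo)
{
    GLubyte pixel[4];
    glBindFramebuffer(GL_READ_FRAMEBUFFER, eye_fbo);
    glFlush();  /* ensure the rendering commands are submitted */
    /* Reading back into client memory stalls the CPU until everything
     * that writes this framebuffer has finished. */
    glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
}
```

Whether in-flight PBO transfers also get dragged into that wait is driver-dependent, which is part of the question here.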
Has anyone tried either of these approaches? Would you suggest one over the other? The first seems simpler and avoids the overhead of context switching and the platform-specific annoyance of sharing between contexts, but it also seems a little dirty.
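For reference, option 2's shared context could be created as follows on X11 (GLX shown; WGL and CGL have equivalents; every name here is illustrative, not SDK API):

```c
#include <GL/glx.h>

/* Option 2 sketch: create a second context that shares objects with the
 * render context, so textures/PBOs uploaded in one are usable in the other. */
GLXContext make_stream_context(Display *dpy, XVisualInfo *vi,
                               GLXContext render_ctx)
{
    /* Passing render_ctx as the share list makes textures and buffer
     * objects visible in both contexts. The streaming thread then binds
     * this context to its own drawable and does its uploads there, so a
     * glFinish() issued in the render context no longer stalls them. */
    return glXCreateContext(dpy, vi, render_ctx, True /* direct */);
}
```

Object sharing still requires synchronization at the hand-off points (e.g. rebinding the texture after an upload completes), but the per-frame timewarp stall stays confined to the render context.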
No replies