Forum Discussion
jherico (Adventurer)
12 years ago
Has anyone tried single-pass stereo rendering?
OpenGL provides a mechanism to render the same output multiple times to different viewports via geometry shaders. One of the examples in the Red Book actually shows multiple different views of a single 3D model. It seems like it should be possible to adapt this technique to perform both eye rendering passes concurrently.
- Set up distortion rendering to use a single offscreen texture
- Set up the viewports to each target half of the texture
- Add a geometry shader which uses instancing or geometry amplification to effectively fork the incoming vertex processing for the two views
- Apply the projection matrices and per-eye translation offset matrices in the geometry shader
You'll still have the same amount of rasterization and fragment processing, but you'll only have to do the vertex processing once. It seems like this would be beneficial if your bottleneck is somehow in the vertex processing, although from my reading that's usually tantamount to saying your bottleneck is in the OpenGL call layer because you're using draw calls inefficiently.
In cases where your bottleneck is the fragment shader, this clearly wouldn't help and may actually hurt due to the inclusion of an otherwise unnecessary geometry shader, but of course the SDK already provides a reasonable mechanism for dynamically changing the effective offscreen texture resolution.
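For reference, a minimal sketch of what that geometry shader might look like, assuming GL 4.1+ (instanced geometry shaders plus gl_ViewportIndex); the uniform and varying names below are placeholders, not anything from the SDK:

// Instanced geometry shader that forks each incoming triangle into both eye viewports.
static const char* kStereoGeometryShaderSrc = R"(
    #version 410 core
    layout(triangles, invocations = 2) in;        // run the stage once per eye
    layout(triangle_strip, max_vertices = 3) out;

    uniform mat4 uProjection[2];   // per-eye projection matrices
    uniform mat4 uEyeView[2];      // shared view matrix with the per-eye translation folded in

    in vec3 vWorldPos[];           // world-space position passed through from the vertex shader

    void main()
    {
        for (int i = 0; i < 3; ++i)
        {
            gl_Position = uProjection[gl_InvocationID]
                        * uEyeView[gl_InvocationID]
                        * vec4(vWorldPos[i], 1.0);
            gl_ViewportIndex = gl_InvocationID;   // 0 = left half, 1 = right half
            EmitVertex();
        }
        EndPrimitive();
    }
)";

// Host side: carve the shared offscreen texture into two indexed viewports.
glViewportIndexedf(0, 0.0f,            0.0f, texWidth * 0.5f, texHeight);  // left eye
glViewportIndexedf(1, texWidth * 0.5f, 0.0f, texWidth * 0.5f, texHeight);  // right eye

The vertex shader just passes world-space attributes through; all the per-eye math happens in the geometry stage.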
7 Replies
- PathogenDavid (Honored Guest): I've thought of trying something like this, but I couldn't find enough information to convince me it would be faster, so I couldn't justify spending time on it. As you said, it probably only helps if your bottleneck is vertices, which isn't super common.
I'm still interested in seeing some real results though, so maybe I will try it one of these days. (Or maybe someone with more time than me will. :) )
One related idea I'm also interested in trying is using MRTs to re-use fragment shader outputs in both eyes where possible. Either that, or using a technique similar to timewarp (but with translation) to generate the other eye and then fill in the missing parts in a second pass. I can't decide whether this would make the other eye look strange. I imagine it could, but it would be interesting to try.
- jherico (Adventurer): On a similar topic, I'm also interested in seeing how much work it would take to create an example that uses an initial pass to render the 'far' content of a scene into an offscreen render target and then uses a portion of that texture as the background for each eye, for all content that's too distant for stereopsis.
- cybereality (Grand Champion): Yeah, I remember people talking about doing the far scene as a separate 2D render and it would seem like that could help. Not sure anyone ever tried it but it seems plausible.
- n00854180t (Explorer):
"cybereality" wrote:
Yeah, I remember people talking about doing the far scene as a separate 2D render and it would seem like that could help. Not sure anyone ever tried it but it seems plausible.
I've discussed this sort of a technique with other developers before (not in the context of VR, but same idea) and it seems like the biggest problem is getting the 2D "far" scene to match up with the 3D close up scene where they meet, without having depth fighting issues or other artifacting.
- jherico (Adventurer):
"n00854180t" wrote:
"cybereality" wrote:
Yeah, I remember people talking about doing the far scene as a separate 2D render and it would seem like that could help. Not sure anyone ever tried it but it seems plausible.
I've discussed this sort of a technique with other developers before (not in the context of VR, but same idea) and it seems like the biggest problem is getting the 2D "far" scene to match up with the 3D close up scene where they meet, without having depth fighting issues or other artifacting.
If every item in the scene has a collision box, then it shouldn't be hard to do the '2D' pass on every object whose collision box doesn't have any component closer than, say, 10 meters from the user. At that range, I'd expect the per-eye displacement to be less than a pixel. Your per-eye pass would then render everything that wasn't rendered in the first pass. I don't think you'd end up with any sort of artifacts or popping. The trick, I think, is that it's not sufficient to simply render the color content and copy it into the background of your per-eye scene. You have to blit a portion of the 'both eyes 2D render' depth buffer into the per-eye depth buffer as well.
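Roughly like this, as a sketch assuming a GL 3.0+ context; the FBO handles and the source region variables are placeholders:

// Copy the shared 'far' render (color + depth) into the current eye's framebuffer
// before drawing the near geometry, so the near pass can depth-test against it.
glBindFramebuffer(GL_READ_FRAMEBUFFER, farSceneFbo);   // both-eyes 2D render
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, eyeFbo);        // this eye's render target
glBlitFramebuffer(srcX0, srcY0, srcX1, srcY1,          // portion of the far render for this eye
                  0, 0, eyeWidth, eyeHeight,
                  GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT,
                  GL_NEAREST);                          // depth blits require GL_NEAREST
// (the two framebuffers need matching depth formats for the depth blit to be valid)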
I'll see if I can pull together a demo of something this week.
- 2EyeGuy (Adventurer):
"jherico" wrote:
OpenGL provides a mechanism to render the same output multiple times to different viewports via geometry shaders. One of the examples in the Red Book actually shows multiple different views of a single 3D model. It seems like it should be possible to adapt this technique to perform both eye rendering passes concurrently.
I wanted to try that. But I got discouraged and instead just swap render targets after each draw call and render again (which halves the number of swaps compared to rendering explicitly to left then right each time).
- owenwp (Expert Protege): I did the geometry shader approach in a toy engine project a long while back, and it worked pretty well. Just having a geometry shader introduces considerable GPU overhead, but if you do a lot of vertex processing it's a solid win. It's generally more resource efficient too: far fewer state changes, and you can often discard dynamic buffers more freely, such as a shadow buffer that you would normally need to hold on to between eye rendering passes (or render all over again, in the case of Unity).
Short of doing that, I bet it would be a pretty good win to just change from
foreach(eye in eyes)
{
    foreach(renderer in scene)
    {
        SetState(renderer);
        renderer.Render(eye);
    }
}
to
foreach(renderer in scene)
{
    SetState(renderer);
    foreach(eye in eyes)
        renderer.Render(eye);
}
Looping over eyes at the top level of your render pipeline is not that efficient; issuing repeated draw calls with little state change in between is fast. There should be plenty of tricks to make it even faster, especially with some of the new low-level APIs.
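In GL terms the inner loop can be very small, as in this sketch, assuming side-by-side viewports in one render target and a single per-eye view-projection uniform (the Renderer type, SetState, and the other names are placeholders):

for (const Renderer& renderer : scene)
{
    SetState(renderer);                        // expensive state changes happen once per object
    for (int eye = 0; eye < 2; ++eye)
    {
        // The only per-eye state is the viewport and one matrix uniform.
        glViewport(eye * eyeWidth, 0, eyeWidth, eyeHeight);
        glUniformMatrix4fv(uViewProjLoc, 1, GL_FALSE, eyeViewProj[eye]);  // column-major 4x4
        glDrawElements(GL_TRIANGLES, renderer.indexCount, GL_UNSIGNED_INT, nullptr);
    }
}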