Forum Discussion

SlowRiot
Honored Guest
12 years ago

When there's no need to render twice

Conventional Rift engine design currently appears to focus on rendering the entire scene twice, once from each eye. The only alternative that seems to get discussed is stereo reprojection using a depth buffer, but that's universally agreed to be unsatisfactory.

But actually, there's no need to render your entire scene twice in every circumstance. Thanks to the low angular resolution of the Rift's display, there's a fairly short distance threshold beyond which stereoscopic depth cues are no longer distinguishable. We can calculate this threshold distance as follows:

  • Average interpupillary distance (IPD) ~= 0.064m

  • Best angular resolution = 1280 px horizontal resolution / 90° horizontal field of view = 14.22 pixels per degree (PPD)

  • Smallest angular size of a pixel = 1 / 14.22 PPD = 0.07°

  • ∴ Distance at which stereoscopic effects are smaller than 1 pixel = 0.064 / tan(0.07°) ~= 52m
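The derivation above is easy to check in a few lines of Python (the variable names are mine; the figures are the DK1-era assumptions from the bullets):

```python
import math

# Figures taken from the derivation above (DK1-era assumptions).
ipd_m = 0.064        # average interpupillary distance, metres
h_res_px = 1280      # horizontal panel resolution
h_fov_deg = 90       # horizontal field of view, degrees

ppd = h_res_px / h_fov_deg            # pixels per degree ~= 14.22
pixel_deg = 1 / ppd                   # angular size of one pixel ~= 0.07 degrees

# Distance at which the stereo disparity across the IPD shrinks
# below one pixel: d = IPD / tan(pixel angle)
threshold_m = ipd_m / math.tan(math.radians(pixel_deg))

print(f"{ppd:.2f} PPD, pixel = {pixel_deg:.3f} deg, threshold ~= {threshold_m:.1f} m")
```

which lands at roughly 52 m, matching the figure above.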


Additionally, as FredzL pointed out in IRC, stereopsis is not a primary depth cue at longer distances.


One technique to take advantage of stereo imagery being unnecessary beyond a given range would be a three-pass renderer:

  • Pass 1 renders the background from the centre camera with a near clipping plane of 60m, onto both eye render target textures

  • Pass 2 renders the left eye view from the left eye camera, with a far clipping plane of 60m, overlaid onto the left eye target texture

  • Pass 3 does the same for the right eye

  • Pass 4: barrel distortion shader, etc.
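The pass ordering above can be sketched roughly as follows. Note this is only a pseudocode-style sketch: `render_scene`, the camera names, and the target names are hypothetical stand-ins for whatever your engine actually provides, and here the function just records what each pass would do.

```python
# Hypothetical sketch of the three-pass ordering described above.
# render_scene() stands in for an engine draw call; it records each
# pass's parameters so the ordering is easy to follow.

STEREO_THRESHOLD = 60.0  # metres; beyond this, mono is indistinguishable

passes = []

def render_scene(camera, target, near, far):
    passes.append((camera, target, near, far))

# Pass 1: mono background from the centre camera, near clip at 60 m,
# drawn into both eye render targets.
for eye_target in ("left_target", "right_target"):
    render_scene("centre_camera", eye_target,
                 near=STEREO_THRESHOLD, far=100000.0)

# Passes 2 and 3: stereo foreground, far clip at 60 m, overlaid onto
# the background already present in each eye's target.
render_scene("left_camera", "left_target", near=0.1, far=STEREO_THRESHOLD)
render_scene("right_camera", "right_target", near=0.1, far=STEREO_THRESHOLD)

# The barrel distortion pass would then consume both eye targets.
for cam, target, near, far in passes:
    print(f"{cam} -> {target}: near={near} far={far}")
```

The key design point is that the background draw is issued once per eye target but from a single centre camera, so the expensive distant geometry is only transformed and shaded with one view matrix.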

The downside to this technique is that you lose the benefit of the z-buffer cull, having to overdraw parts of your scene. However, this could be partly compensated for by occlusion-culling routines of other sorts, including portaling. For indoor scenes there would not be much benefit, though, since in most scenes that work well with portaling engines the vast majority of the rendered content is already in the foreground.

However, for scenes such as flight or space simulators (where your cockpit is the only thing within ~50m of you in normal gameplay, and therefore the only portion that needs to be rendered in stereo) this technique could make a huge difference to performance.

An alternative render order could be:
  • Passes 1 and 2 render the left and right eye scenery with a far clipping plane of 60m, with each eye's depth buffer written out to a separate render target

  • Pass 3 renders the centre eye scenery with a near clipping plane of 60m, testing against the depth buffers written out from each eye in passes 1 and 2

  • Barrel distortion shader, etc.

I've not really thought much about this method yet, since it requires some way of substituting for two independent z-buffers. One way could be to use the alpha channel of each eye's render target as a depth buffer for the depth test, but this would require some shader fiddling, and you lose transparency.
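To make the alpha-as-depth-buffer idea concrete, here is a toy per-pixel model in plain Python. Everything here is a hypothetical illustration (the helper names and the 0-to-1 depth convention are my assumptions, not any real shader API): the stereo pass stashes its depth in the alpha channel, and the mono background pass only writes where no nearer foreground pixel exists.

```python
# Toy per-pixel model of repurposing the alpha channel as a depth buffer.
# Each pixel is (r, g, b, a), where a holds normalised depth
# (0.0 = nearest, 1.0 = far plane). Transparency is lost, as noted above.

FAR = 1.0  # cleared depth value

def clear(width):
    """A one-row render target cleared to black at far depth."""
    return [(0.0, 0.0, 0.0, FAR) for _ in range(width)]

def write_foreground(target, x, colour, depth):
    """Stereo pass: write colour and stash depth in the alpha channel."""
    r, g, b = colour
    target[x] = (r, g, b, depth)

def write_background(target, x, colour):
    """Mono pass: depth-test against the alpha channel, so background
    only lands where no nearer foreground pixel was written."""
    if target[x][3] >= FAR:  # nothing nearer is already there
        r, g, b = colour
        target[x] = (r, g, b, FAR)

left = clear(4)
write_foreground(left, 1, (1.0, 0.0, 0.0), depth=0.2)  # e.g. a cockpit pixel
for x in range(4):
    write_background(left, x, (0.0, 0.0, 1.0))         # distant sky

# Pixel 1 keeps the foreground colour; the rest receive the background.
```

In a real shader you would do the equivalent comparison per fragment when compositing the mono pass, which is exactly the "shader fiddling" mentioned above.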

What I'd like to know is: has anyone already tried to implement a similar technique, and if so, what did you discover? I find it hard to believe I could be the first person to try to implement such an obvious idea.

11 Replies