Forum Discussion

🚨 This forum is archived and read-only. To submit a forum post, please visit our new Developer Forum. 🚨
TeraBit
Honored Guest
11 years ago

G-Warp

Hi All,

After doing the ray tracing demo, I wondered whether any of the optimisations I used there could be applied to rasterization. In the ray tracer, I warped the rays I shot out, avoiding the need to render an oversized image and do a post-process warp.

This worked very well. In fact, I'd be inclined to build a whole engine on RT at some point in the future (possibly based on a hybrid rasterisation / RT approach, much like the recently announced Wizard GFX design from Imagination Technologies).

This led me to thinking about how I could avoid losing the 'warp the rays' advantage.

So my idea is to warp the G-Buffer instead. The theory goes like this: the oversized buffer is there to accommodate the greater resolution in the middle of the warp, but it ends up throwing away a lot of the extra detail around the edges. Since most of the processing power goes into shading, you warp first, then shade, i.e.:

1. Render to oversized G-Buffer
2. Warp to target sized G-Buffer
3. Shade the smaller target (ignoring the black borders outside the distortion).
4. Do image correction such as chromatic aberration, fading, etc.
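The warp in step 2 can be sketched as a backward mapping: for every pixel of the target-sized buffer, walk back through the barrel distortion to find the source texel in the oversized G-Buffer. This is only a toy CPU sketch; the function name and the distortion coefficients K1/K2 are placeholders (the real values come from the HMD's lens parameters), and G-Buffer texels are point-sampled rather than filtered:

```python
import numpy as np

# Placeholder barrel-distortion coefficients; real values come from
# the headset's lens parameters.
K1, K2 = 0.22, 0.24

def warp_gbuffer(src, dst_w, dst_h):
    """Resample an oversized G-Buffer into the target-sized, pre-distorted
    buffer.  Each destination pixel is mapped back through the radial
    distortion to find its source texel.  Nearest-neighbour (point)
    sampling is used because G-Buffer values such as normals and depth
    cannot be safely interpolated."""
    src_h, src_w = src.shape[:2]
    dst = np.zeros((dst_h, dst_w) + src.shape[2:], dtype=src.dtype)
    for y in range(dst_h):
        for x in range(dst_w):
            # Normalised coordinates centred on the eye, in [-1, 1].
            nx = 2.0 * x / (dst_w - 1) - 1.0
            ny = 2.0 * y / (dst_h - 1) - 1.0
            r2 = nx * nx + ny * ny
            scale = 1.0 + K1 * r2 + K2 * r2 * r2   # radial distortion
            sx, sy = nx * scale, ny * scale
            if abs(sx) > 1.0 or abs(sy) > 1.0:
                continue                            # black border region
            # Back to source texel indices, nearest-neighbour.
            ix = int(round((sx + 1.0) * 0.5 * (src_w - 1)))
            iy = int(round((sy + 1.0) * 0.5 * (src_h - 1)))
            dst[y, x] = src[iy, ix]
    return dst
```

Step 3 would then run the lighting shaders only over the non-border pixels of `dst`, which is where the hoped-for saving comes from.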

If it works, it should avoid some of the performance issues with rendering to the Rift, which will only get worse at higher framerates and resolutions.

Possible problems:
1. Screen space techniques may not work properly once warped. (That said, it wouldn't impact the hybrid RT solution, since that doesn't use screen space effects.)
2. Warping the G-Buffer may take more steps than warping the final image would.

Any thoughts?

7 Replies

  • cybereality
    I have no feedback, but I'd like to see your findings if you implement this.
  • TeraBit
    Honored Guest
    "cybereality" wrote:
    I have no feedback, but I'd like to see your findings if you implement this.


    I'm currently in the process of converting my little raster-based 3D engine to deferred shading. Once that's done, I'll give it a go and let you know. I've already implemented a geometry shader to do the stereo in one pass, so I'm on a mission to see how efficient I can make the process. 8-)
  • Just be aware you will need low latency, because time-warping a frame that is already barrel-warped will be more difficult.
  • owenwp
    Expert Protege
    When you consider antialiasing, which will be quite complicated to do efficiently with this method, I don't think you end up saving any shading performance.

    Every pixel of the pre-distorted g-buffer will be contributing some color to the final image. So even if you avoid shading redundant MSAA samples you still need to run your lighting shaders at least the same number of times no matter when you do your distortion. And the complexity involved will be considerable because after distortion every pixel in your buffer contains an edge after re-sampling, even if you are only rendering one giant triangle. The fact that you need to use point sampling in your distortion shader because g-buffer samples cannot be interpolated will also make aliasing worse by itself. And chromatic aberration correction wouldn't be practical.
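The point-sampling constraint owenwp raises can be illustrated with a toy example (the normals below are hypothetical G-Buffer contents, not from any real scene): bilinearly filtering G-Buffer samples across an edge manufactures data that no surface in the scene actually has, so a G-Buffer warp is forced into nearest-neighbour fetches.

```python
import numpy as np

# Two valid, unit-length G-Buffer normals on either side of an edge
# (e.g. a floor facing +Y and a wall facing +X).
n_floor = np.array([0.0, 1.0, 0.0])
n_wall  = np.array([1.0, 0.0, 0.0])

# Bilinear filtering would blend them, producing a direction no surface
# in the scene has -- and it isn't even unit length any more:
blended = 0.5 * n_floor + 0.5 * n_wall
length = np.linalg.norm(blended)   # ~0.707, not 1.0

# Point sampling returns one of the two valid normals instead, which is
# why the distortion shader must use nearest-neighbour fetches -- and
# why every resampled pixel can land on a hard edge, worsening aliasing.
```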
  • tomf
    Unfortunately the lack of chromatic aberration kills a lot of these sorts of ideas. I don't remember how you solved it in the raytracer - did you trace a buffer at the "green" positions (but producing R,G,B pixels), then do a fixup post-process to add in the CA - a sort of double distortion? (Still worth it, because you avoid over-sampling the edges the way rasterisation does.)
  • TeraBit
    Honored Guest
    "tomf" wrote:
    Unfortunately the lack of chromatic aberration kills a lot of these sorts of ideas. I don't remember how you solved it in the raytracer - did you trace a buffer at the "green" positions (but producing R,G,B pixels), then do a fixup post-process to add in the CA - a sort of double distortion? (still worth it because you avoid over-sampling the edges the way rasterisation does).


    I think the 'proper' way to have done it would have been to trace the RGB as three separate rays with slightly different offsets, which models what actually happens to light passing through the lens of the Rift, but in reverse.

    What I actually did was to raytrace the warp by adjusting the Z parameter of the 'eye rays' as they move out radially from the centre of each eye. That way it does everything in one shot (stereo, warp, etc.) except the chromatic aberration correction.

    Then, once it's rendered to a texture, a GLSL fragment shader tracks the distance from the centre of each eye again and offsets the RGB texture coordinates, much as the standard warp shader does.
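That chromatic aberration pass can be sketched on the CPU like this. The per-channel radial scale factors are placeholders (real values come from the lens coefficients), and the function names are made up for the sketch; the point is just that each colour channel is re-fetched at a slightly different radial offset from the eye centre:

```python
import numpy as np

# Placeholder per-channel radial scales; real values depend on the lens.
SCALE_R, SCALE_G, SCALE_B = 1.010, 1.000, 0.994

def sample(img, nx, ny):
    """Nearest-neighbour fetch at normalised coords in [-1, 1]."""
    h, w = img.shape[:2]
    ix = min(max(int(round((nx + 1) * 0.5 * (w - 1))), 0), w - 1)
    iy = min(max(int(round((ny + 1) * 0.5 * (h - 1))), 0), h - 1)
    return img[iy, ix]

def chroma_correct(img):
    """Post-process pass: re-fetch each channel at a slightly different
    radial offset from the eye centre, mimicking the standard warp
    shader's chromatic aberration step."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            nx = 2.0 * x / (w - 1) - 1.0   # normalised coords in [-1, 1]
            ny = 2.0 * y / (h - 1) - 1.0
            out[y, x, 0] = sample(img, nx * SCALE_R, ny * SCALE_R)[0]
            out[y, x, 1] = sample(img, nx * SCALE_G, ny * SCALE_G)[1]
            out[y, x, 2] = sample(img, nx * SCALE_B, ny * SCALE_B)[2]
    return out
```

With the green scale at 1.0 the green channel passes through untouched, matching the "trace at the green positions, then fix up R and B" idea above.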

    In the G-Warp idea, I was planning to do the same again.

    1) Render Big G-Buffer (Color, Normal etc.)
    2) Warp it to the target size
    3) Shade the smaller version
    4) Post Process as above to correct for Chromatic Aberration

    Problem is, I keep fiddling with the raytracer trying to reduce aliasing in the foveal rendering, and haven't gotten around to trying out the above. It may fall in a heap! But it seems worth a try. :D