Forum Discussion

This forum is archived and read-only.
simonjacoby
Honored Guest
13 years ago

Performance and experience improvement ideas

Hi,

I recently integrated support for the Oculus Rift in my DX9 engine, and after playing around with it for a couple of weeks I have two untested ideas I'd like to share. I'd like some feedback on their feasibility, to see if they're worthwhile.

1. Barrel distortion optimization: Right now, applying the barrel distortion as a post effect means rendering the screen at a much larger resolution. Sure, there are things you can do to alleviate this with shader optimizations, stenciling, etc., but when it comes down to it, rendering bigger means eating more GPU bandwidth, and that should be avoided.

So my idea is: instead of applying the distortion as a post effect, on a DX11 GPU you should be able to apply the distortion in a vertex shader, tessellating the geometry that is close to the camera. That way, you can render at the native 1280x800, and the additional triangles required to make the distortion look smooth are generated on the GPU, which is fast and doesn't require any more memory bandwidth (and will scale nicely with future GPUs). This can of course also be LOD'd, so that geometry farther away is tessellated less. Ideally, it should be the screen-space size of the triangles that matters, rather than the distance.
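To make the idea concrete, here is a minimal sketch in plain Python of the radial polynomial warp applied per vertex instead of per pixel. The coefficients are purely illustrative (the real values would come from the headset's SDK), and depending on which direction your pipeline warps you may need this function or its inverse:

```python
# Hypothetical distortion coefficients; illustrative only, not real SDK values.
K = (1.0, 0.22, 0.24, 0.0)

def barrel_warp(x, y):
    """Apply the radial distortion polynomial to a post-projection vertex.

    This is the same kind of polynomial the post-effect shader evaluates
    per pixel, evaluated per vertex instead. Edges between vertices stay
    straight, which is exactly why nearby geometry needs enough
    tessellation for the curvature to look smooth.
    """
    r2 = x * x + y * y
    scale = K[0] + K[1] * r2 + K[2] * r2 * r2 + K[3] * r2 * r2 * r2
    return x * scale, y * scale
```

The key property is that the scale is 1.0 at the lens center and grows with radius, so the center keeps its 1:1 pixel mapping while the periphery is stretched.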

Since my engine is DX9 I can't test this right now, but what do you think, does it seem feasible?

2. My other idea is more for the Oculus team than for developers. Personally, I think the screen-door effect is pretty annoying. To do away with it, couldn't you add a small diffusion filter, like a thin piece of transparent but slightly blurry glass or plastic, that could sit in front of the LCD screen? The diffusion should be homogeneous and about the size of one pixel, so that the blur doesn't bleed over to neighboring pixels. That way, you can't see the internals of a single pixel because it's blurred out, but you still don't lose the crispness of the resolution that the LCD has.

3 Replies

  • geekmaster
    Warp vertex shader threads:
    (with sample images): viewtopic.php?f=17&t=1341&p=15201#p15201
    (with shader code): viewtopic.php?f=20&t=353&p=6377#p6377

    Diffusion filters were tested (and there are also threads discussing that), but caused significant image degradation. The real solution is to use a higher resolution display, and these are only Dev Kits so it was better to leave a diffusion filter out of them. If you want a diffusion filter, some DIY projects used ordinary waxed paper for that, although paper vellum would probably work better.
  • Archy
    Honored Guest
    About your first point: using tessellation would be extremely complicated and most likely much slower. It also complicates many standard effects, such as post-processing, particle systems, and GUI.
    After reading your post I thought about deferred rendering. If you have a deferred rendering pipeline, you could do the geometry pass, apply the distortion, and then do the lighting pass on the distorted buffer. That could heavily improve performance without losing any quality. It would still require many adjustments to the rest of the engine.

    What I did in Dark is take 4 samples per pixel during warping. This means the less distorted parts of the image receive "free" supersampling, which drastically reduces noise. That way I wasn't too sad about the additional resolution required, since it wasn't wasted.
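    A sketch of that 4-tap idea, assuming hypothetical `src_sample` (reads the eye buffer) and `warp` (the distortion mapping) callables standing in for whatever the engine actually uses:

```python
def warped_fetch_4tap(src_sample, warp, x, y, texel):
    """Average four warped sub-texel taps for one output pixel.

    Where the warp magnifies, the four taps land close together (little
    extra benefit); where it minifies, they spread out over the source
    and act as cheap supersampling.
    """
    offsets = ((-0.25, -0.25), (0.25, -0.25), (-0.25, 0.25), (0.25, 0.25))
    acc = 0.0
    for dx, dy in offsets:
        u, v = warp(x + dx * texel, y + dy * texel)
        acc += src_sample(u, v)
    return acc / 4.0
```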
  • @geekmaster: Good to know that it works :) I think I'll try it out myself, without tesselation just to see how it looks. About the diffusion filter, cool that you guys have tried it already. Too bad it didn't work out. I'm very curious to see how much better the HD helmet is. :)

    @Archy: Without having tried it, I doubt it'll be much slower, unless you're geometry bound, which most engines aren't. It isn't the barrel distortion shader per se I want to optimize; it's the much larger render targets I have to use.

    For example, my distortion scale factor is around 1.71, which means I have to render roughly 3x (1.71^2 ≈ 2.92) as many pixels. I actually render with a factor of 2 on width and height for simplicity. I have A16B16G16R16F render targets for HDR rendering, plus heavy post-processing, so memory consumption and bandwidth are a real issue.

    Rendering at just 1.75x supersampling means you're really rendering at 2240x1400, and rendering at 2.0x means 2560x1600. This is without MSAA or any other goodies, just your default render targets. If you're using a deferred engine, this translates into huge memory and bandwidth requirements.
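    The arithmetic behind those numbers, as a quick sketch (1280x800 is the DK1 panel; 8 bytes/pixel matches a 16-bit-float RGBA HDR target; the function name is made up):

```python
def eye_buffer_cost(w=1280, h=800, scale=2.0, bytes_per_pixel=8):
    """Render-target dimensions and size for a given supersampling scale.

    Returns (width, height, size in MiB) for ONE such target; a deferred
    G-buffer multiplies this by the number of attachments.
    """
    rw, rh = int(w * scale), int(h * scale)
    return rw, rh, rw * rh * bytes_per_pixel / (1024 * 1024)
```

    At scale 2.0 a single RGBA16F target is already 31.25 MiB, which is why a multi-attachment deferred setup gets expensive fast.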

    Also, it isn't quite true that you get "4x SSAA for free", because that only applies to the pixels that are off-center. The center pixels will be aliased because of the barrel distortion (the whole point of rendering larger is to get the 1:1 pixel mapping of the center pixels). So the area the player usually looks at will be the most aliased.

    Right now I render at 2560x1600, do tone mapping/color correction/post filters, and then FXAA. After that I run the distortion shader, which samples down to the final 1280x800 render target. I get nice quality, but horrible performance on older machines. My dev machine has a GeForce GTX 460, and it can just about run at 60 fps with everything enabled. This is with the regular Oculus Rift. What will happen when you run the HD version (and beyond)? It will eat memory bandwidth like popcorn =)

    You do have a good point regarding the post-processing, though; it may be more complicated. I hadn't thought of that at all. Maybe it's possible to work around, though. For example, DOF and motion blur will probably be useless with the Rift, so you probably won't be doing those anyway. You should be able to do color correction and screen-space AA techniques on the vertex-warped image without any special changes. The biggest issue would probably be with bloom/streak filters or similar.

    For those types of filters, maybe you could start with the inverse of the barrel distortion (a pinch filter), then apply the steps as normal (downsample, blur, cutoff, etc.). When you're done, apply the regular barrel distortion and composite the result onto your vertex-warped image. Bloom and streaks are usually rendered at a much lower resolution than the main render target anyway, so any image degradation from the distortion filters may be OK. What do you think?
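    The pinch filter is just the numerical inverse of the barrel function. A sketch with illustrative (not real SDK) coefficients; since the radial polynomial is monotonic for sensible coefficients, a plain bisection recovers the input radius:

```python
def barrel(r, k=(1.0, 0.22, 0.24)):
    """Radial barrel function: r -> r * (k0 + k1*r^2 + k2*r^4)."""
    r2 = r * r
    return r * (k[0] + k[1] * r2 + k[2] * r2 * r2)

def pinch(r_out, lo=0.0, hi=2.0):
    """Invert barrel() by bisection: the 'pinch' that undoes the barrel.

    Assumes barrel() is monotonically increasing on [lo, hi], which holds
    for the illustrative coefficients above.
    """
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if barrel(mid) < r_out:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

    In a real shader you'd bake this inverse into a lookup texture or fit it with another polynomial rather than iterating per pixel; the round-trip pinch(barrel(r)) == r is what makes the "pinch, filter, barrel, composite" chain line up.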