Forum Discussion
DiCon
Honored Guest · 12 years ago
Deferred shading and OR distortion shader
Introduction - irrelevant for the topic
Hi everybody, I have been lurking for quite a while and now decided to post for the first time. I am a physicist from Germany who develops OpenGL software as a hobby. However, I would not mind if 3D graphics became a part of my work as well and since I will have to look for a job soon (just got my PhD), I created an OpenGL demo (no commercial engine, just basic OpenGL) to showcase my capabilities. This demo is now on the verge of becoming a full game, although I am not expecting to become a game developer (I will probably develop engineering software - after all, I am a physicist).
So, I am developing this OpenGL game and have created a deferred shading pipeline. I have read several times that I should render at a higher resolution than the native screen resolution of the OR, because the distortion not only discards some pixels but also (obviously) stretches others over multiple pixels of the distorted picture.
Is it possible (and a good idea) to render the geometry at a higher resolution, apply the distortion and then apply the (deferred) shading to the distorted image?
By doing this you would only have to use the higher resolution in the first step, and you could even omit the shading in the dark edges of the transformed image. Unfortunately I cannot just try it, as I am waiting for my DK2 like everybody else... IMHO, with the original SDK this would be quite a bad idea, since you want to do the transform at the very last moment to reduce latency; but with the recent time warp feature, you could still do a final correction at the very end of the pipeline. So it would be (geometry) -> (full transform) -> (shading) -> (time warp transform).
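To make the over-rendering argument concrete: the warp shader maps each output fragment to a sample position in the eye buffer via a radial polynomial, and the local texel-to-pixel ratio of that mapping varies across the image. A minimal sketch in Python, in the style of the early Oculus SDK's warp parameters — the coefficients here are illustrative placeholders, not the real SDK values:

```python
# Radial distortion model in the style of the early Oculus SDK's warp
# shader: an output fragment at radius r (measured from the lens center)
# samples the eye buffer at radius r * scale(r).
# Coefficients are illustrative placeholders, not the real SDK values.
K = (1.0, 0.22, 0.24)

def source_radius(r):
    """r is the output radius; returns the radius sampled in the eye buffer."""
    r2 = r * r
    return r * (K[0] + K[1] * r2 + K[2] * r2 * r2)

def local_ratio(r):
    """Derivative d(source_radius)/dr: eye-buffer texels per output pixel."""
    r2 = r * r
    return K[0] + 3 * K[1] * r2 + 5 * K[2] * r2 * r2

# Where the ratio differs from 1, a 1:1 eye buffer either wastes texels
# or stretches one texel over several output pixels; sizing the eye
# buffer for the worst case is why you render above native resolution.
print(local_ratio(0.0))  # 1.0 at the lens center
print(local_ratio(1.0))  # ~2.86 toward the edge, with these coefficients
```

With these made-up coefficients the sampling density at unit radius is nearly three times that at the center, which is the kind of mismatch the over-rendering compensates for.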
3 Replies
- renderingpipeli · Honored Guest
  You can do that, but I wouldn't recommend it:
  With SDK 0.3 and later it's better to let the SDK do the distortion, so it would be the last step in this scenario. Being the last step, time warp is applied there, and performing an expensive second deferred shading pass after it kills the benefit of time warp. Complex deferred techniques won't work either: if you store any non-blendable data in your G-buffer, you can't do the distortion first, because that data cannot be interpolated and you don't want nearest filtering. Finally, you often want to perform post-processing after the second deferred step, and this again does not work well with the SDK-provided distortion / time warp.
- DiCon · Honored Guest
  I see. I also have to admit that I hadn't thought of the post-processing at all. Indeed, I have some final screen-space effects which would look horribly distorted if they were applied after the transform...
  Thanks.
- owenwp · Expert Protege
  Someone suggested this before; the biggest problem is that antialiasing would be almost impossible. You cannot interpolate at all between G-buffer samples, so your distortion would have to be done with point sampling. The only way to get a clean image out of that would be to significantly increase the resolution throughout the pipeline and downsample to native resolution after shading. At that point you are not really saving much performance, if any, and the added complexity will cost you on top of that.
The general takeaway is no matter what you do, every G-buffer sample will contribute something to the color of the output pixel, so there are no easy shortcuts to avoid shading all those samples.
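The point about interpolation can be made concrete: G-buffer channels such as material IDs or surface normals are not blendable, so bilinear filtering during a distortion pass would manufacture values that never existed in the scene. A small hypothetical illustration (the attribute layout and values are made up):

```python
# Why a distorted G-buffer must be point-sampled: bilinearly blending
# two adjacent G-buffer samples produces data that belongs to neither.

def lerp(a, b, t):
    """Component-wise linear interpolation, as bilinear filtering would do."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

# Two adjacent samples across a geometry edge (made-up layout):
# (material_id, normal_x, normal_y, normal_z)
left  = (3.0, 0.0, 0.0, 1.0)   # material 3, normal facing the viewer
right = (7.0, 1.0, 0.0, 0.0)   # material 7, normal facing right

blended = lerp(left, right, 0.5)
print(blended)
# The blended material ID 5.0 names a material that does not exist, and
# the blended normal (0.5, 0.0, 0.5) has length ~0.707, so it is not a
# valid unit normal -- any lighting computed from it is wrong.
```

Point sampling avoids these invalid in-between values, but as noted above it then forces you into supersampling to get acceptable image quality, which erases the performance win.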