Forum Discussion
Ziggurat (Explorer) · 12 years ago
Faster rendering of stereoscopic picture
Programming, or even understanding shaders well enough, is beyond me. But I was contemplating the possibility of a shader that would render the picture only once and extrapolate real (not stretched) 3D images from that.
The reason I thought about this is that good graphics and a high framerate are maybe even more important in VR than in a monitor-based game. I'm not saying graphics trump gameplay, but it's at least nice to have good graphics and a good framerate, and rendering the picture twice takes its toll on the framerate.
I googled and found this: http://fulldome.ning.com/forum/topics/stereoscopic-domemaster-images
Or am I mistaken?
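The "render once, derive both eyes" idea is essentially what the graphics literature calls depth-image-based rendering (DIBR): given one rendered image plus its depth buffer, each eye's view can be approximated by shifting pixels horizontally by a disparity proportional to inverse depth. A minimal toy sketch of that idea in Python (the scanline, depths, focal length, and eye offset are made-up illustration values, not taken from any engine):

```python
# Depth-image-based rendering (DIBR) sketch: derive left/right eye views
# from a single rendered scanline plus its depth values by shifting each
# pixel horizontally. Illustration only; a real implementation must also
# fill the "holes" where previously occluded geometry becomes visible.

def reproject(row, depth_row, focal, eye_offset):
    """Shift each pixel of one scanline by its disparity.

    disparity = focal * eye_offset / depth  (in pixels)
    A simple z-buffer keeps the nearest pixel when two land on the
    same spot. Unfilled positions stay None ("holes").
    """
    width = len(row)
    out = [None] * width
    zbuf = [float("inf")] * width
    for x in range(width):
        d = depth_row[x]
        shift = int(round(focal * eye_offset / d))
        nx = x + shift
        if 0 <= nx < width and d < zbuf[nx]:
            out[nx] = row[x]
            zbuf[nx] = d
    return out

# One 8-pixel scanline: colors 'a'..'h', a near object (depth 2) in the
# middle of a far background (depth 4).
colors = list("abcdefgh")
depths = [4, 4, 4, 2, 2, 4, 4, 4]

left = reproject(colors, depths, focal=4.0, eye_offset=+1.0)
right = reproject(colors, depths, focal=4.0, eye_offset=-1.0)
print(left)
print(right)
```

The near pixels shift twice as far as the far ones, so the two views differ by real parallax rather than a uniform stretch; the `None` holes next to the near object are the disocclusions that make this only an approximation of a true second render.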
12 Replies
- geekmaster (Protege)
"ziggurat" wrote:
Programming, or even understanding shaders well enough, is beyond me. But I was contemplating the possibility of a shader that would render the picture only once and extrapolate real (not stretched) 3D images from that.
The reason I thought about this is that good graphics and a high framerate are maybe even more important in VR than in a monitor-based game. I'm not saying graphics trump gameplay, but it's at least nice to have good graphics and a good framerate, and rendering the picture twice takes its toll on the framerate.
I googled and found this: http://fulldome.ning.com/forum/topics/stereoscopic-domemaster-images
Or am I mistaken?
That linked page contains a dead link to "Daniel F. Ott Angular fisheye shader", which is here:
http://www.thedott.net/shaders/domeAFL/
Dan Ott's stuff is based on the same source material from Paul Bourke that I was using as a source of ideas, and it is along the lines I have been thinking about for extending my "PTZ Tweening for low-power low-latency head-tracking" thread.
- Ziggurat (Explorer)
Great that you found this useful, geekmaster!
I think in the future we will have some more shortcuts when rendering stereoscopic 3D.
- Harley (Honored Guest)
I think latency will be the most important factor for VR, as most other issues will solve themselves with faster GPU hardware.
Oculus Rift is only a development kit now, and the retail version looks to be at least a year away, and by that time you will be able to go out and purchase a new cheaper 'value' graphics controller that is faster than today's top-of-the-line.
Anyway, check out Timothee Besset's (a.k.a. TTimo, formerly of id Software) es_core low-latency rendering framework:
http://ttimo.typepad.com/blog/2013/05/es_core-an-experimental-framework-for-low-latency-high-fps-multiplayer-games.html
I also recommend reading Massimiliano Di Luca's paper on a new method to measure the end-to-end delay of virtual reality systems:
http://kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/attachments/DiLuca_presence_6128%5B0%5D.pdf
The first step in trying to reduce the end-to-end delay of a VR system is to understand where it comes from. Here, the system is divided into three major components:
Tracking - The tracker captures a geometric property in the physical world, like the position and orientation of the HMD or of the user’s hand.
Processing - The graphic system handles the interaction between the tracking data and the simulation of the laws of projection to produce the image of the virtual element.
Displaying - The image is displayed.
All three of these components contribute to the end-to-end delay.
Some say end-to-end latency needs to be under 20 ms for VR; others, like Oculus VR, say around 50 ms is acceptable.
Regardless of the true latency requirement, you still want to render at 60 fps or more to achieve good perceived motion smoothness. John Carmack and Michael Abrash therefore also argue for 120 Hz+ display refresh, but that is really a separate issue from latency.
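The three-component breakdown above lends itself to a simple back-of-the-envelope budget, and the frame-time arithmetic shows why refresh rate is a separate axis from latency. A sketch (all individual numbers are made-up illustration values, not measurements of any real device):

```python
# Back-of-the-envelope end-to-end latency budget for a VR system,
# following the tracking / processing / display breakdown above.
# Every number here is illustrative, not a measurement.

budget_ms = {
    "tracking":   2.0,   # sensor sampling + transfer to the host
    "processing": 16.7,  # one full frame of simulation + rendering at 60 fps
    "display":    8.0,   # scanout + pixel switching
}

end_to_end = sum(budget_ms.values())
print(f"end-to-end: {end_to_end:.1f} ms")

# Frame time alone shows what a higher refresh rate buys for perceived
# smoothness, independently of where the latency budget is spent:
for hz in (60, 120):
    print(f"{hz} Hz -> {1000.0 / hz:.2f} ms per frame")
```

Note that even a hypothetical zero-latency tracker leaves the processing term at a full frame unless rendering is decoupled from the display, which is exactly what the low-latency techniques linked below attack.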
John Carmack of id Software on latency in VR
http://www.altdevblogaday.com/2013/02/22/latency-mitigation-strategies/
Michael Abrash of Valve Software on latency in VR
http://blogs.valvesoftware.com/abrash/latency-the-sine-qua-non-of-ar-and-vr/
Timothee Besset (a.k.a. TTimo, formerly of id Software) on latency in VR; he also offers a rendering framework for it
http://ttimo.typepad.com/blog/2013/05/es_core-an-experimental-framework-for-low-latency-high-fps-multiplayer-games.html
Kyle Orland of Ars Technica on latency in VR
http://arstechnica.com/gaming/2013/01/how-fast-does-virtual-reality-have-to-be-to-look-like-actual-reality/
Other threads on latency in this forum
viewtopic.php?f=26&t=1880
viewtopic.php?f=20&t=1027
Also check out the upcoming Oculus Latency Tester:
https://www.oculus.com/pre-order/latency-tester/
- guysherman (Honored Guest)
As you're probably well aware, one aspect of current GPUs, especially discrete GPUs connected over the PCI-Express bus, is that draw calls are relatively expensive, so you really want to maximise the amount of geometry you rasterize per draw call. On modern hardware with geometry shaders, it is possible to render the same geometry, from multiple different perspectives, into separate render-target views with a single draw call. I haven't tried it yet, but I read about it in this book: http://www.amazon.com/Introduction-3D-Game-Programming-DirectX/dp/1936420228. The technique described is for rendering cubemaps, but you could presumably adapt it to drawing the two eye views at the same time.
It all depends on which stage of the graphics pipeline is your bottleneck: if you're bottlenecking at rasterization this might not be for you, but if you're bottlenecking at the bus, it would probably help.
- tlopes (Honored Guest)
"guysherman" wrote:
As you're probably well aware, one aspect of current GPUs, especially discrete GPUs connected over the PCI-Express bus, is that draw calls are relatively expensive, so you really want to maximise the amount of geometry you rasterize per draw call. On modern hardware with geometry shaders, it is possible to render the same geometry, from multiple different perspectives, into separate render-target views with a single draw call. I haven't tried it yet, but I read about it in this book: http://www.amazon.com/Introduction-3D-Game-Programming-DirectX/dp/1936420228. The technique described is for rendering cubemaps, but you could presumably adapt it to drawing the two eye views at the same time.
It all depends on which stage of the graphics pipeline is your bottleneck: if you're bottlenecking at rasterization this might not be for you, but if you're bottlenecking at the bus, it would probably help.
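The single-draw-call idea described in the quote can be mimicked outside any graphics API: a geometry shader essentially duplicates each incoming primitive once per view, tags the copy with a render-target index, and transforms it by that view's matrix. A toy Python stand-in for that duplication step (the matrices, vertices, and IPD value are made-up illustration values; a real implementation would do this in HLSL or GLSL on the GPU):

```python
# Toy model of geometry-shader stereo: one "draw call" submits the
# geometry once, and each primitive is duplicated per eye, tagged with a
# render-target index, and transformed by that eye's view matrix.
# Pure-Python stand-in for what the GPU shader stage would do.

def translate_x(dx):
    """4x4 translation matrix along x (row-major, nested lists)."""
    m = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    m[0][3] = dx
    return m

def transform(m, v):
    """Apply a 4x4 matrix to a homogeneous vertex (x, y, z, w)."""
    return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(4))

IPD = 0.064  # interpupillary distance in metres (a typical value)
eye_views = {
    0: translate_x(+IPD / 2),  # render-target index 0: left eye
    1: translate_x(-IPD / 2),  # render-target index 1: right eye
}

def draw_stereo(triangles):
    """One submission of the geometry yields primitives for both targets."""
    out = []
    for tri in triangles:                      # geometry arrives only once
        for target, view in eye_views.items():  # duplicated per view here
            out.append((target, [transform(view, v) for v in tri]))
    return out

tri = [(0.0, 0.0, -2.0, 1.0), (1.0, 0.0, -2.0, 1.0), (0.0, 1.0, -2.0, 1.0)]
prims = draw_stereo([tri])
print(len(prims))  # one input triangle -> two output primitives
```

The saving is exactly what the quote describes: the vertex data and the draw call cross the bus once, and the per-eye duplication happens downstream, so it helps when submission is the bottleneck and costs extra shader work when it is not.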
Carmack mentioned that he experimented with using the geometry shader in this way on his Twitter feed:
https://twitter.com/ID_AA_Carmack/status/268437638078402561
- Teddy00k (Honored Guest)
I did some messing around with geometry shaders, and it does indeed seem to speed up the driver calls without really slowing down the GPU.
I also noticed that the Oculus SDK has a half-implemented version of this in RenderDevice::CreateStereoShader().
Has anyone tried getting this to work? I wonder why the Oculus team didn't put it into their demos?
- Teddy0k (Explorer)
I got this working in the Tuscany demo; it was pretty easy to do.
In that particular demo it actually lost performance (from 220 fps down to 140 fps). That makes some sense: the scene is quite low-poly and simple, so adding a geometry shader stage immediately becomes the bottleneck. Has anyone tried this in a more complex scene to see if it's a win?
- spyro (Expert Protege)
I also read some papers about this. The expected speedup should be quite dramatic (nearly 100%), especially on high-poly scenes:
http://www.academia.edu/929449/Accelerated_stereoscopic_rendering_using_gpu
http://hal-upec-upem.archives-ouvertes.fr/docs/00/73/33/43/PDF/deSorbier_CGAT_2010.pdf
spyro
- Fredz (Explorer)
Unfortunately, the authors themselves admitted in their paper that this technique was not at all suited to video games.
They were using glVertex functions, which have very low performance and were deprecated in OpenGL 3.0 and removed in OpenGL 3.1 (the current version is 4.4).
When they used Vertex Buffer Objects (VBOs), their performance was worse than with standard multi-view rendering.
- cybereality (Grand Champion)
That's disappointing. However, I wonder if the concepts could be applied to modern graphics APIs.