Forum Discussion
faultymoose
13 years ago · Honored Guest
Calculating the Infinite Focal Plane ("skydome")
Forgive me if this has already been solved, but I spent quite a bit of time trying to hunt down information, and ended up running some tests of my own (sans my Rift Devkit as yet). While I'm sure they're not entirely accurate without the hardware to validate my findings, I wanted to share my results and have them reviewed by the community.
I believe the tests are conceptually sound, but feel free to point out any errors I've made.
http://www.booncotter.com/some-preliminary-vr-metrics/
The short of it:
- IRL infinite focal plane == ~745m
- Rift devkit resolution infinite focal plane == ~250-300m
Sources used for the calculations are listed on my blog.
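For reference, the small-angle calculation behind figures like these can be sketched as follows. The IPD and stereoacuity thresholds below are assumed illustrative values, not the blog's exact inputs; the point is just that distance scales as IPD divided by the smallest disparity you can resolve:

```python
import math

ARCSEC = math.pi / (180 * 3600)  # one arcsecond in radians

def stereo_infinity(ipd_m: float, disparity_threshold_rad: float) -> float:
    """Distance beyond which the binocular disparity between the two eyes'
    views falls below the given threshold, i.e. where stereo depth cues
    effectively read as 'infinity'. Uses the small-angle approximation
    disparity ~= IPD / distance."""
    return ipd_m / disparity_threshold_rad

ipd = 0.064  # assumed 64 mm interpupillary distance

# Assumed ~18 arcsec stereoacuity for a sharp-eyed viewer:
d_real = stereo_infinity(ipd, 18 * ARCSEC)
print(f"real-world stereo 'infinity': ~{d_real:.0f} m")

# A display with coarser angular resolution raises the threshold and pulls
# 'infinity' much nearer. Example: a ~45 arcsec effective threshold:
d_rift = stereo_infinity(ipd, 45 * ARCSEC)
print(f"display-limited stereo 'infinity': ~{d_rift:.0f} m")
```

With those assumed thresholds the two distances come out in the same ballpark as the ~745m and ~250-300m figures above, which is only to say the numbers are plausible, not that these were the blog's inputs.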
11 Replies
- ftarnogol (Expert Protege): As I said on your blog: great work, Boon. And thank you for sharing the results!
- faultymoose (Honored Guest): You're very welcome! I was chasing the information mostly out of academic interest myself, but it can probably be useful in circumstances where a player will be focusing at distance for the majority of gameplay - such as a space shooter - by populating the foreground 0-300m range with small particles, debris, and other elements to enhance the sense of depth.
Also, in regards to optimisation, an environment sphere could represent anything beyond the ~300m depth mark (at the Rift's current devkit resolution), with the following caveats:
- Parallax at distances beyond ~300m is an important depth cue, so environment spheres wouldn't work so well if the player has a moderate degree of positional freedom. Spaces in which the player can't move around much would benefit most, with real geometry extending further out the more player movement is possible.
- Any textured environment dome would need to very closely match the fidelity, contrast, perspective, etc. of the real-time environment, again to help the brain process depth from 2D information.
- drash (Heroic Explorer): That's amazing... this is much, much further than I had heard previously (something like 150 feet in real life). Very good to know. Thanks for posting!
- Capyvara (Explorer): Theoretically, if you render the skydome at the same position for both eyes it will be perceived at infinity.
In Unity, for example, you can make two domes, have each one follow the position of one eye, and use culling masks to isolate the rendering.
Or do the perspective offset in the distortion shader and use the stock Unity skybox.
- faultymoose (Honored Guest):
"Capyvara" wrote:
Theoretically, if you render the skydome at the same position for both eyes it will be perceived at infinity.
In Unity for example, you can make two domes, each one follow the position of one eye, use the culling masks to isolate the rendering.
Or do the perspective offset in the distortion shader and use stock unity sky box.
True, but "infinity" according to our brain is at a fixed distance not all that far away. You don't, for example, see clouds or distant mountains or stars in stereoscopic 3D - they all appear at a monoscopic, equidistant point, because at some point the subtle variation between each eye's signal falls below the resolution threshold of the eye.
The same thing happens in rendering, but at an apparently closer distance, because the resolution of the Rift causes the skydomes to converge at a nearer point than they do IRL.
I believe this is correct, but my methodology for calculating that "infinity" point could be wrong. If you can demonstrate my methodology is incorrect I'd be really interested, because there seems to be very little written on the topic!
- Capyvara (Explorer): I don't think your methodology of calculating it is wrong; I just think in practical terms it's safer to simply force the objects you want to appear at "infinity" to have the same render for both eyes.
Of course it's good to know how far out you can perceive the differences, so developers can properly populate their scenes.
- ftarnogol (Expert Protege): Makes sense.
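Capyvara's two-dome trick works because anchoring each dome to its own eye zeroes out binocular disparity, and zero disparity is exactly what "at infinity" means to the vergence system. A minimal geometric sketch of that idea (plain Python rather than Unity code, with an assumed 64 mm IPD):

```python
import math

ARCSEC = math.pi / (180 * 3600)  # one arcsecond in radians

def bearing(eye, point):
    """Horizontal view angle from an eye position to a point (radians).
    2D top-down coordinates: x is lateral, y is forward."""
    return math.atan2(point[0] - eye[0], point[1] - eye[1])

ipd = 0.064  # assumed 64 mm interpupillary distance
left_eye, right_eye = (-ipd / 2, 0.0), (ipd / 2, 0.0)

# One shared dome: a sample point 100 m ahead at the same world position
# for both eyes -> nonzero disparity, so it reads as "100 m away".
point = (0.0, 100.0)
shared_disparity = abs(bearing(left_eye, point) - bearing(right_eye, point))
print(f"shared dome: {shared_disparity / ARCSEC:.0f} arcsec of disparity")

# Per-eye domes: each dome is centred on its own eye, so the sample point
# sits at the same offset from each eye -> identical view directions,
# zero disparity, perceived at infinity.
offset = (0.0, 100.0)
per_eye_disparity = abs(
    bearing(left_eye, (left_eye[0] + offset[0], left_eye[1] + offset[1]))
    - bearing(right_eye, (right_eye[0] + offset[0], right_eye[1] + offset[1]))
)
print(f"per-eye domes: {per_eye_disparity:.10f} rad of disparity")
```

The shared dome shows roughly IPD/distance of disparity; the per-eye domes show exactly zero, which is the effect the two-dome (or shifted-skybox) setup is exploiting.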
- faultymoose (Honored Guest):
"Capyvara" wrote:
I don't think your methodology of calculating it is wrong; I just think in practical terms it's safer to simply force the objects you want to appear at "infinity" to have the same render for both eyes.
Of course it's good to know how far out you can perceive the differences, so developers can properly populate their scenes.
Oh okay, I get you! Yeah, totally :)
This was all just academic - I was interested in the answer and it bothered me that I couldn't find it, so hopefully it comes in useful to somebody!
- tomf (Explorer): Turning pedant mode on for a bit (and this really is a very minor point). Instead of this:
So, according to my calculations, the starry sky appears to be 744.85 meters away for anyone who has perfect visual acuity.
...I'd instead say that an object 745m away is indistinguishable from something at infinity. Since the number of things actually at 745m away is fairly limited, I think the right thing is to say the brain assumes everything at that distance or beyond is at infinity.
In practice, beyond a moderate distance the brain stops using eye vergence to determine depth and instead falls back on parallax: you move your head sideways and the relative motion of objects tells you the depth. You can easily move your head a lot more than the vergence baseline of 64mm, so this technique is useful at massive distances - it is fairly easy to tell the difference between an object at 1000m and one beyond it. Ideally most Rift games should at least use the "head-on-a-stick" neck model, which gives the player a significant amount of parallax.
- nhoobler (Honored Guest):
"tomf" wrote:
In practice, beyond a moderate distance the brain stops using eye vergence to determine depth and instead falls back to parallax. So you move your head sideways and the relative motions of objects tells you the depth. You can easily move your head a lot more than the vergence baseline of 64mm, so this technique is useful at massive distances - it is fairly simple to tell the difference between an object at 1000m and one beyond it. Ideally most Rift games should at least use the "head-on-a-stick" neck model, which gives the player a significant amount of parallax.
Even beyond that, your brain will stop trying to "sense" depth directly, and simply infer it from context and domain knowledge. You don't only have parallax from your own motion; you also have the relative motion of distant objects, and your meat-brain's accumulated understanding of how the world works and thus how to interpret what it's seeing as depth. It's why we've had effective "depth" effects in video games via parallax motion going back to the days of the NES, and why matte-painted backdrops are so surprisingly effective.
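tomf's parallax point can be put into rough numbers. A small sketch with assumed baselines and distances (not anything from the thread's calculations), comparing the fixed eye baseline against a modest head sway when telling apart an object at 1000m from one at 2000m:

```python
import math

ARCSEC = math.pi / (180 * 3600)  # one arcsecond in radians

def parallax_arcsec(baseline_m, distance_m):
    """Small-angle parallax: how far (in arcseconds) an object appears to
    shift when the viewpoint translates sideways by baseline_m."""
    return (baseline_m / distance_m) / ARCSEC

# Assumed numbers: 64 mm vergence baseline vs. a 30 cm head sway,
# discriminating an object at 1000 m from one at 2000 m.
for baseline, label in [(0.064, "vergence, 64 mm"), (0.30, "head sway, 30 cm")]:
    delta = parallax_arcsec(baseline, 1000) - parallax_arcsec(baseline, 2000)
    print(f"{label}: {delta:.1f} arcsec of relative shift")
```

With these assumed values, the 64mm baseline yields only a few arcseconds of relative shift between the two objects (at or below typical stereoacuity), while a 30cm sway yields several times that, which is why head motion keeps working as a depth cue long after vergence has given up.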