Forum Discussion
vaspoul
13 years ago · Honored Guest
Convergence
Hi,
I've been reading the Oculus SDK documentation and the following caught my eye:
"Unlike stereo TVs, rendering inside of the Rift does not require off-axis or asymmetric projection."
Looking at the projection matrix formulation (essentially a pre and a post translation transform), it doesn't look much different to what I'd use for normal 'TV' stereo, e.g. like what is described here:
http://developer.download.nvidia.com/as ... o_Z-SG.pdf
I appreciate that the derivation is different due to the fact that there is a separate, physically translated screen for each eye, rather than a single screen. Is there any more to it than that?
Curiously, there is also no mention of convergence, or focal depth, i.e. the depth at which there is no separation between the left/right images. How is that accounted for?
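For concreteness, the kind of 'TV' off-axis setup I have in mind looks roughly like this (a sketch only; the parameter names are illustrative and not taken from the SDK or the paper):

```cpp
// Rough sketch of conventional "TV" off-axis stereo: each eye is translated
// sideways by +/- eyeSep/2 and gets an asymmetric frustum so that the
// zero-parallax plane sits at 'convergence'. Illustrative only.
#include <cmath>
#include <cstdio>

struct EyeSetup
{
    float viewOffsetX;              // translate the camera by this along its right vector
    float left, right, bottom, top; // asymmetric frustum bounds at the near plane
};

EyeSetup makeTvStereoEye(int eyeSign,        // -1 = left eye, +1 = right eye
                         float fovY, float aspect, float zNear,
                         float eyeSep,       // interocular separation
                         float convergence)  // distance of the zero-parallax plane
{
    EyeSetup e;
    float halfHeight = zNear * std::tan(fovY * 0.5f);
    float shift      = eyeSign * 0.5f * eyeSep * (zNear / convergence);

    e.viewOffsetX = eyeSign * 0.5f * eyeSep;   // move the eye sideways...
    e.left   = -aspect * halfHeight - shift;   // ...and shift the frustum the opposite way
    e.right  =  aspect * halfHeight - shift;
    e.bottom = -halfHeight;
    e.top    =  halfHeight;
    return e;
    // Feed left/right/bottom/top plus zNear/zFar into a glFrustum-style
    // projection. A point at depth == convergence lands on the same screen
    // position for both eyes (zero separation); nearer points get negative
    // parallax, farther ones positive.
}

int main()
{
    // Example: 60 degree vertical FoV, 16:9, eyes 64 mm apart, screen 2 m away.
    EyeSetup l = makeTvStereoEye(-1, 60.0f * 3.14159265f / 180.0f, 16.0f / 9.0f,
                                 0.1f, 0.064f, 2.0f);
    std::printf("left eye: view offset %+.3f, frustum left %+.5f right %+.5f\n",
                l.viewOffsetX, l.left, l.right);
    return 0;
}
```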
It will probably all make sense once I get my kit and start experimenting! :)
Thanks,
Vassilis
8 Replies
- tbowren (Honored Guest): The left and right eye rays are parallel in the 3D scene. In "real life" your eyes would only do this when looking at the horizon. Without eye tracking there is no way to know where to converge the eyes. One reason for the projection matrix adjustment is that the 7-inch screen is too big for each eye to look at the center of its "half" of the screen.
Before I got the SDK I played with all sorts of eye-ray convergence tests and I had all sorts of problems like you are mentioning. Once I made them parallel and offset each by 32 mm, everything looked great.
- vaspoul (Honored Guest): True enough, knowing where to converge (i.e. 'point the eyes') is an interesting problem in a simulated environment, but it's something you routinely have to deal with in games. For our games (in my professional life) I've chosen to converge the eyes on where the player character is (we tend to have a 3rd-person/overhead camera). This felt the most natural for our content (I'm not suggesting this is a general rule). Anything between the player and the camera had negative parallax (i.e. in front of the virtual screen) and anything beyond the player had positive parallax (i.e. behind the virtual screen). Making the eyes parallel would be (like you say) like focusing on the horizon, which would put the entire scene in negative parallax, which would be very uncomfortable. The matrix in the SDK looks like it's missing a *convergenceDepth on the 41 (or 14) element, which means that it's focusing 1 length unit in front of you.
Anyway, I'm doing all this 'blind' so I could just be talking rubbish! :)
I'm in the process of adding anaglyph support to the world & tiny demo to keep myself busy until my kit arrives (I'm in the UK, so expecting a long wait!).
Oh well, I'm off to the beach now :)
- edzieba (Honored Guest): Because the user's eyes are doing all the actual converging, the cameras should be parallel (i.e. converged on infinity) and spaced at the IPD (unless you are doing some effect with avatar size). If you try to converge the cameras, you are essentially rotating the entire world in opposite directions for each of the user's eyes. Artificially converging the cameras is an artefact of the fact that stereo 3D on a flat display has to deal with a small FoV and borders to the image, and marry that with the desire to cut between scenes and use artful shots. Doing this sort of thing in an HMD is a highway to nausea city, and maintaining orthostereo is preferred.
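To make the "parallel cameras, offset by half the IPD" setup above concrete, here is a minimal sketch (illustrative names and numbers only, not the Oculus SDK API):

```cpp
// Minimal sketch of parallel stereo eyes: both eyes share the same
// orientation ("converged at infinity"); only the positions differ,
// offset by half the IPD each way along the head's right vector.
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 eyePosition(const Vec3& headPos, const Vec3& headRight,
                 float ipd, int eyeSign)   // -1 = left, +1 = right
{
    const float half = eyeSign * 0.5f * ipd;
    return { headPos.x + headRight.x * half,
             headPos.y + headRight.y * half,
             headPos.z + headRight.z * half };
}

int main()
{
    const Vec3 head   = { 0.0f, 1.7f, 0.0f };   // head position (metres)
    const Vec3 right  = { 1.0f, 0.0f, 0.0f };   // head's right vector
    const float ipd   = 0.064f;                 // 2 x 32 mm, as mentioned above

    const Vec3 l = eyePosition(head, right, ipd, -1);
    const Vec3 r = eyePosition(head, right, ipd, +1);
    std::printf("left eye x = %+.3f m, right eye x = %+.3f m\n", l.x, r.x);
    // Crucially, both eyes keep the SAME view direction: no toe-in rotation
    // toward a convergence point, which is the "rotating the world in
    // opposite directions" problem described above.
    return 0;
}
```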
- KuraIthys (Honored Guest): Hmmm...
Well, that answers another tangentially related question then...
- vaspoul (Honored Guest): OK, I've worked out the maths now, and the derivation for Oculus is identical to that in the NVIDIA paper. The main difference is that on a TV there is a single screen and on an HMD there is one per eye, but that just amounts to a difference in the fixed (i.e. not depth-dependent) offset.
My confusion arose from the fact that I've always treated the convergence depth as something dynamic / scene-dependent, but it's actually supposed to be the distance between the eyes and the screen.
All is well again! :)
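A quick numeric sanity check of that reading (a sketch with made-up numbers): for a point straight ahead, the 'TV'-style formulation gives an on-screen left/right separation of sep * (1 - convergence / depth), which vanishes exactly at the convergence depth, i.e. at the screen:

```cpp
// Tiny numeric check: with eye separation 'sep' and the zero-parallax plane
// ("screen") at distance 'conv', a point straight ahead at depth d projects
// onto the screen with a left/right separation of sep * (1 - conv / d).
// Negative = in front of the screen, positive = behind it, zero at d == conv.
#include <cstdio>

int main()
{
    const double sep  = 0.064;   // eye separation, metres (illustrative)
    const double conv = 2.0;     // screen / convergence distance, metres

    const double depths[] = { 0.5, 1.0, 2.0, 4.0, 100.0 };
    for (double d : depths)
    {
        const double parallax = sep * (1.0 - conv / d);
        std::printf("depth %6.1f m -> on-screen separation %+7.4f m\n", d, parallax);
    }
    // On an HMD there is a separate screen per eye, so a constant per-eye
    // offset is added, but the depth-dependent part above stays the same.
    return 0;
}
```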
- owenwp (Expert Protege): If you were to think of it in terms of 3D TVs, the "screen" would be infinitely far away and infinitely wide so as to fill your field of view, and any object rendered closer than infinity would be "popping out" from the screen with negative parallax. Nothing ever has positive parallax, because nothing can be farther away than infinity.
Obviously such a TV is impossible to build, so real ones need to converge at an arbitrary distance that generally will not match up with the distance of the viewer, especially due to the low FOV. HMDs have no such design constraints, as they can show a completely independent image to each eye and can collimate the light for infinite focus.
- Yam (Honored Guest): There is a 3rd method ...
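As a small numeric illustration of owenwp's "screen at infinity" point (a sketch, with illustrative numbers): with parallel eyes, the angular disparity of a point straight ahead only falls to zero as its distance goes to infinity:

```cpp
// With parallel eyes separated by 'ipd', a point straight ahead at distance d
// subtends an angular disparity of roughly 2 * atan(ipd / (2 * d)); it only
// approaches zero as d -> infinity, i.e. the "screen" is at infinity and
// everything nearer effectively pops out of it.
#include <cmath>
#include <cstdio>

int main()
{
    const double ipd = 0.064;   // metres (illustrative)
    const double pi  = std::acos(-1.0);

    const double depths[] = { 0.5, 2.0, 10.0, 100.0, 1.0e6 };
    for (double d : depths)
    {
        const double disparityDeg = 2.0 * std::atan(0.5 * ipd / d) * 180.0 / pi;
        std::printf("distance %10.1f m -> disparity %7.4f deg\n", d, disparityDeg);
    }
    return 0;
}
```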