Stereo separation and stereo convergence with the Unity camera and a custom renderer
I have created a distributed ray tracer using a shader in Unity. I would like to render it in stereo and be able to navigate with my Oculus Rift. What I have done so far: I create two identical quads and assign them to the LeftEyeAnchor and RightEyeAnchor in the OVRCameraRig, add a material to each quad that uses the same shader, create a left and a right layer, and set the culling mask of each eye's camera to the matching layer. I made sure the quads are large enough that they fill the FOV when I put the HMD on. In my shader I offset the two cameras ±0.0325 m along the camera's right axis. This gives me stereo, and it looks quite awesome with my real-time ray tracer.

However, I'm not sure I am doing things ideally. I notice that when I enable VR support in Player Settings, the Unity Camera adds stereoSeparation and stereoConvergence options. How are these options used in Unity?

I am a bit surprised and confused by the stereoConvergence option. Currently my left and right cameras share the same axes, i.e. their forward vectors point in the same direction, so they never converge. The default stereoConvergence in Unity is 10, which I take to mean the cameras converge at 10 meters. Is that really what Unity does? Should I be implementing convergence in my shader for a better stereo effect? I would imagine that with a 64 mm IPD, convergence at 10 m would have no significant effect (a toe-in of only atan(0.032 / 10) ≈ 0.18° per eye). I perhaps naively assume that the left and right eye can be treated as two cameras with the same axes but different positions, so they never converge. How could any convergence be assumed without eye tracking?

If anyone has any other suggestions on how I should implement my ray-tracing shader for stereo in Unity, I would be grateful for your comments. Here is my main code, assigned to the OVRCameraRig, for navigation and for setting up the shader each frame:
```csharp
Renderer rendl, rendr;

void Start () {
    // "left" and "right" are the names of the quads assigned to the left and right eye layers
    rendl = GameObject.Find ("left").GetComponent<Renderer> ();
    rendr = GameObject.Find ("right").GetComponent<Renderer> ();
}

void Update () {
    // I use the LeftEyeAnchor for now because its Camera is the same as the
    // RightEyeAnchor's as far as I know
    Transform tran = GameObject.Find ("LeftEyeAnchor").transform;

    // left-hand to right-hand camera transforms?
    tran.Rotate (new Vector3 (0, 180, 0));
    tran.localRotation = new Quaternion (-tran.localRotation.x, tran.localRotation.y,
                                         tran.localRotation.z, -tran.localRotation.w);

    Vector4 up = tran.up;
    Vector4 right = tran.right;
    Vector4 forward = tran.forward;
    //Debug.Log (up + " " + right + " " + forward + " " + tran.position);

    // pass the camera basis to both eye materials
    rendl.material.SetVector ("_vec4up", up);
    rendr.material.SetVector ("_vec4up", up);
    rendl.material.SetVector ("_vec4right", right);
    rendr.material.SetVector ("_vec4right", right);
    rendl.material.SetVector ("_vec4forward", forward);
    rendr.material.SetVector ("_vec4forward", forward);

    // keyboard/gamepad navigation
    float x = Input.GetAxis ("Horizontal");
    float y = Input.GetAxis ("Vertical");
    float speed = 0.01f;
    this.transform.position -= tran.forward * y * speed;
    this.transform.position += tran.right * x * speed;

    // both eyes currently get the same ray origin; the ±0.0325 m offset
    // is applied inside the shader
    rendl.material.SetVector ("_vec4o", this.transform.position);
    rendr.material.SetVector ("_vec4o", this.transform.position);
}
```

Here is a screenshot from the scene view.
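To make the convergence question concrete, the toe-in that a convergence distance implies can be worked out from geometry alone. Below is a rough sketch (not tested in-engine; the names `halfIpd`, `convergence`, `EyeOrigin`, and `EyeForward` are my own, not Unity's internals) of per-eye origins for a parallel rig and the per-eye forward vector a toe-in rig would use:

```csharp
using UnityEngine;

public static class StereoEyes {
    // Parallel rig: each eye keeps the head's forward and is just offset
    // sideways by half the IPD (e.g. halfIpd = 0.032f metres).
    public static Vector3 EyeOrigin (Transform head, float halfIpd, bool leftEye) {
        return head.position + head.right * (leftEye ? -halfIpd : halfIpd);
    }

    // Toe-in rig: rotate each eye's forward so both aim at a point on the
    // view axis `convergence` metres ahead. With a 64 mm IPD and a 10 m
    // convergence distance this is only atan(0.032 / 10) ≈ 0.18° per eye.
    public static Vector3 EyeForward (Transform head, float halfIpd, float convergence, bool leftEye) {
        float angleDeg = Mathf.Atan2 (halfIpd, convergence) * Mathf.Rad2Deg;
        // the left eye yaws toward +right, the right eye toward -right
        float yaw = leftEye ? angleDeg : -angleDeg;
        return Quaternion.AngleAxis (yaw, head.up) * head.forward;
    }
}
```

At 10 m the two variants are nearly indistinguishable, which matches my intuition that convergence at that distance has no significant effect.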
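One alternative I am considering, instead of hard-coding the ±0.0325 m offset inside the shader, is passing a distinct ray origin to each quad's material from C#. A sketch (untested; `halfIpd` and the class name are my own, while `rendl`, `rendr`, and `_vec4o` are the names from the script above):

```csharp
using UnityEngine;

public class PerEyeOrigins : MonoBehaviour {
    public Renderer rendl, rendr;     // the "left" and "right" quad renderers
    public float halfIpd = 0.0325f;   // half the interpupillary distance in metres

    void LateUpdate () {
        Transform head = GameObject.Find ("LeftEyeAnchor").transform;
        Vector3 centre = transform.position;
        // offset each eye's ray origin along the head's right axis; if the
        // shader flips handedness, the sign of the offset may need to match
        rendl.material.SetVector ("_vec4o", centre - head.right * halfIpd);
        rendr.material.SetVector ("_vec4o", centre + head.right * halfIpd);
    }
}
```

This would also make it easy to drive the separation from the Camera's stereoSeparation value rather than a constant.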