stereo separation and stereo convergence with the Unity camera and a custom renderer

zboson
Superstar
I have created a distributed ray tracer using a shader in Unity. I would like to show this in stereo and be able to navigate with my Oculus Rift.

What I have done so far is create two identical quads which I assign to the LeftEyeAnchor and the RightEyeAnchor in the OVRCameraRig. I add a material to each quad, both using the same shader. I create a left and a right layer and set each eye camera's culling mask to the matching layer. I made sure the quads are large enough to fill the FOV when I put the HMD on. In my shader I set the cameras to be ±0.0325 m apart along the camera's right axis. This gives me stereo, and it looks quite awesome with my real-time ray tracer.
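Conceptually, the per-eye ray origin my shader computes comes down to something like this (a simplified sketch written as C# for readability, not my actual shader code; the head position and right axis are the vectors my script below passes in as _vec4o and _vec4right):

    //conceptual sketch (not the actual shader): each eye's ray origin is the
    //head position shifted +-0.0325 m along the camera's right axis
    float halfIpd = 0.0325f;
    Vector3 headPosition = transform.position;                                //what the script passes as _vec4o
    Vector3 cameraRight = GameObject.Find ("LeftEyeAnchor").transform.right;  //what the script passes as _vec4right
    Vector3 leftOrigin  = headPosition - cameraRight * halfIpd;               //ray origin for the left-eye quad
    Vector3 rightOrigin = headPosition + cameraRight * halfIpd;               //ray origin for the right-eye quad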

However, I'm not sure if I am doing things ideally. I notice that when I enable VR support in Player Settings, the Unity Camera adds the options stereoSeparation and stereoConvergence. How are these options used in Unity?

I am a bit surprised and confused by the stereoConvergence option. Currently my left and right cameras have the same axes, e.g. forward points in the same direction, so I don't think they ever converge. The default stereoConvergence in Unity is 10, which I think means they converge at 10 meters? Is this really what Unity does? Should I be implementing convergence in my shader for a better stereo effect? I would imagine that with a 64 mm IPD, convergence at 10 m would have no significant effect. I perhaps naively assume that the left and right eyes can be treated as two cameras with the same axes but different positions, so they never converge. How can any convergence be assumed without eye tracking?
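Just to sanity-check how small that toe-in would be, a quick back-of-the-envelope calculation (my own arithmetic, not anything from the Unity docs):

    //toe-in angle per eye for a 64 mm IPD converging at Unity's default 10 m
    float halfIpd = 0.032f;                 //metres
    float convergenceDistance = 10.0f;      //metres
    float toeInDegrees = Mathf.Atan (halfIpd / convergenceDistance) * Mathf.Rad2Deg;
    //roughly 0.18 degrees per eye, which seems visually negligible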

If anyone has any other suggestion on how I should implement my ray tracing shader for stereo in Unity I would be grateful for your comments.

Here is my main code assigned to the OVRCameraRig for navigation and setting up the shader each frame.

Renderer rendl, rendr;

void Start ()
{
        //"left" and "right" are the names of the quads assigned to the left and right eyes
        rendl = GameObject.Find ("left").GetComponent<Renderer> ();
        rendr = GameObject.Find ("right").GetComponent<Renderer> ();
}

void Update ()
{
        //I use the LeftEyeAnchor for now because its camera is the same as the RightEyeAnchor's as far as I know
        Transform tran = GameObject.Find ("LeftEyeAnchor").transform;

        //convert the left-handed camera transform to the right-handed one the shader expects (?)
        tran.Rotate (new Vector3 (0, 180, 0));
        tran.localRotation = new Quaternion (-tran.localRotation.x, tran.localRotation.y, tran.localRotation.z, -tran.localRotation.w);

        Vector4 up = tran.up;
        Vector4 right = tran.right;
        Vector4 forward = tran.forward;
        Vector4 position = tran.position;
        //Debug.Log (up + " " + right + " " + forward + " " + position);

        //pass the camera basis to both eye materials
        rendl.material.SetVector("_vec4up", up);
        rendr.material.SetVector("_vec4up", up);
        rendl.material.SetVector("_vec4right", right);
        rendr.material.SetVector("_vec4right", right);
        rendl.material.SetVector("_vec4forward", forward);
        rendr.material.SetVector("_vec4forward", forward);

        //keyboard/gamepad navigation
        float x = Input.GetAxis ("Horizontal");
        float y = Input.GetAxis ("Vertical");
        float speed = 0.01f;
        this.transform.position -= tran.forward * y * speed;
        this.transform.position += tran.right * x * speed;

        //ray origin for both eyes (the per-eye offset is applied in the shader)
        rendl.material.SetVector("_vec4o", this.transform.position);
        rendr.material.SetVector("_vec4o", this.transform.position);
}




Here is a screenshot from the Scene view:

[screenshot attachment: y76tqd5my682.png]



zboson
Superstar
Okay, I'll try the Unity forums next.

vrdaveb
Oculus Staff
I notice that when I enable VR support in player settings the Unity Camera adds options stereoSeparation and stereoConvergence. How are these options used in Unity?

These are old options that apply to stereo TVs, not VR (at least, not the Rift or Gear VR). The position, orientation, and FOV of each eye need to closely match the actual values for the user to avoid discomfort. The idea behind "stereo convergence" was to toe the stereo frusta in to match the eye gaze directions, which is actually misguided and causes inconsistent skew at the edges of the projection plane. The correct way to render to a planar stereo display is to point the frusta parallel to each other.
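In other words, the two eye poses are just the head pose shifted sideways by half the IPD each, with no inward rotation. Roughly (an illustrative sketch, not the Oculus SDK's actual code; head and ipd are placeholder names):

    //parallel-frustum stereo: both eyes share the head's rotation, only the position differs
    Vector3 eyeOffset = head.right * (ipd * 0.5f);      //head and ipd are placeholders
    Vector3 leftEyePosition  = head.position - eyeOffset;
    Vector3 rightEyePosition = head.position + eyeOffset;
    Quaternion leftEyeRotation  = head.rotation;        //no toe-in, so the frusta
    Quaternion rightEyeRotation = head.rotation;        //stay parallel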

zboson
Superstar
@vrdaveb

Thank you for your reply. I did not see it until now.

I should probably ask this as a separate question, but is there a way to get the IPD used by the Oculus software in Unity? Currently I hardcode an IPD into my ray tracer, using the values ±0.0325 m.

vrdaveb
Oculus Staff
The simplest way is probably to use
    Vector3.Distance(UnityEngine.VR.InputTracking.GetLocalPosition(UnityEngine.VR.VRNode.LeftEye), UnityEngine.VR.InputTracking.GetLocalPosition(UnityEngine.VR.VRNode.RightEye))

If you are using OVRCameraRig, you can get the scaled value using
    Vector3.Distance(rig.leftEyeAnchor.position, rig.rightEyeAnchor.position)
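
For example, the measured value could be fed to the ray-tracing materials each frame (a sketch; "_ipd" is just an example property name for whatever your shader uses as the eye offset):

    //sketch: pass the tracked IPD to both eye materials every frame
    float ipd = Vector3.Distance (rig.leftEyeAnchor.position, rig.rightEyeAnchor.position);
    rendl.material.SetFloat ("_ipd", ipd);    //"_ipd" is an example name, not an existing property
    rendr.material.SetFloat ("_ipd", ipd);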

Anonymous
Not applicable
Does anyone know how to do this on the Oculus Quest? Since it's rendering in a single pass, it doesn't work with the LeftEyeAnchor / RightEyeAnchor of the OVRCameraRig. I already set it to "Use Per Eye Cameras".
Thanks for your help!


cecarlsen
Honored Guest

It's an old thread, but it seems it has still not been addressed.

 

When getting the Camera.stereoSeparation property of the center eye camera in the OVRCameraRig, it returns 0.022. When you measure the distance between the left and right eye anchors you get something like 0.0683. Why does the OVRCameraRig not update the camera values? It seems very clumsy to measure the distance every Update in case the headset was physically adjusted.