Any upcoming headsets with microOLED, 120 Hz, and adjustable IPD?
I really want to try microOLED, but the options are limited: the BSB has no IPD adjustment, and the AVP is 3.5k with a locked system :D Currently the Quest 3 is the only good option that also has a reasonable weight. Meta, bring microOLED plus passthrough. BSB, add IPD adjustment ffs. Samsung, bring an open alternative to the AVP.

Stereo separation and stereo convergence with the Unity camera and a custom renderer
I have created a distributed ray tracer using a shader in Unity. I would like to display it in stereo and navigate with my Oculus Rift. What I have done so far: I create two identical quads, which I assign to the LeftAnchor and the RightAnchor in the OVRCameraRig. I add a material to each quad that uses the same shader, create a left and a right layer, and assign each eye's culling mask to the corresponding layer. I made sure the quads are large enough that they fill the FOV when I put the HMD on. In my shader I set the cameras +-0.0325 m apart along the right camera axis. This gives me stereo, and it looks quite awesome with my real-time ray tracer.

However, I'm not sure I am doing things ideally. I notice that when I enable VR support in Player Settings, the Unity Camera adds stereoSeparation and stereoConvergence options. How are these options used in Unity? I am a bit surprised and confused by the stereoConvergence option. Currently my left and right cameras share the same axes, e.g. forward points in the same direction, so I don't think they ever converge. The default stereoConvergence in Unity is 10, which I think means they converge at 10 meters? Is this really what Unity does? Should I be implementing convergence in my shader for a better stereo effect? I would imagine that with a 64 mm IPD, convergence at 10 m would have no significant effect. I perhaps naively assume that the left and right eye can be treated as two cameras with the same axes and just a different position, so they never converge. How could any convergence be assumed without eye tracking?

If anyone has any other suggestion on how I should implement my ray-tracing shader for stereo in Unity, I would be grateful for your comments. Here is my main code, assigned to the OVRCameraRig, for navigation and for setting up the shader each frame:
    Renderer rendl, rendr;

    void Start () {
        // "left" and "right" are the names of the quads assigned to the left and right eye
        rendl = GameObject.Find ("left").GetComponent<Renderer> ();
        rendr = GameObject.Find ("right").GetComponent<Renderer> ();
    }

    void Update () {
        // I use the LeftEyeAnchor for now because its camera is the same as the RightEyeAnchor, as far as I know
        Transform tran = GameObject.Find ("LeftEyeAnchor").transform;

        // left-handed to right-handed camera transform?
        tran.Rotate (new Vector3 (0, 180, 0));
        tran.localRotation = new Quaternion (-tran.localRotation.x, tran.localRotation.y,
                                             tran.localRotation.z, -tran.localRotation.w);

        Vector4 up = tran.up;
        Vector4 right = tran.right;
        Vector4 forward = tran.forward;
        Vector4 position = tran.position;
        //Debug.Log (up + " " + right + " " + forward + " " + position);

        rendl.material.SetVector ("_vec4up", up);
        rendr.material.SetVector ("_vec4up", up);
        rendl.material.SetVector ("_vec4right", right);
        rendr.material.SetVector ("_vec4right", right);
        rendl.material.SetVector ("_vec4forward", forward);
        rendr.material.SetVector ("_vec4forward", forward);

        float x = Input.GetAxis ("Horizontal");
        float y = Input.GetAxis ("Vertical");
        float speed = 0.01f;
        this.transform.position -= tran.forward * y * speed;
        this.transform.position += tran.right * x * speed;

        rendl.material.SetVector ("_vec4o", this.transform.position);
        rendr.material.SetVector ("_vec4o", this.transform.position);
    }

Here is a screenshot from the scene view.
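As a sketch of what "separation" and "convergence" would mean for a hand-rolled renderer like the one above: Unity's Camera.stereoSeparation and Camera.stereoConvergence only affect cameras rendered by Unity's own stereo pipeline, not a custom shader, so the equivalent has to be fed to the shader by hand. The snippet below reuses the `tran`, `rendl`, `rendr` and `_vec4*` names from the post's script; the separation and convergence values are assumed for illustration, and the toe-in rotation is optional (the parallel setup the poster already uses is the usual choice for HMDs).

```csharp
// Assumed values; Unity's own defaults are stereoSeparation = 0.022, stereoConvergence = 10.
float separation = 0.065f;   // IPD in metres
float convergence = 10f;     // distance at which the two view directions would cross

Vector3 origin  = tran.position;
Vector3 rightAx = tran.right;
Vector3 forward = tran.forward;

// Parallel cameras: same axes, positions offset by half the separation.
Vector3 leftOrigin  = origin - rightAx * separation * 0.5f;
Vector3 rightOrigin = origin + rightAx * separation * 0.5f;

// Optional toe-in "convergence": yaw each eye's forward vector toward the point
// at `convergence` metres. For 65 mm at 10 m the angle is
// atan(0.0325 / 10) ~= 0.19 degrees, i.e. visually negligible, which supports
// the poster's intuition that parallel cameras are fine here.
float toeIn = Mathf.Atan2 (separation * 0.5f, convergence) * Mathf.Rad2Deg;
Vector3 leftForward  = Quaternion.AngleAxis ( toeIn, tran.up) * forward;
Vector3 rightForward = Quaternion.AngleAxis (-toeIn, tran.up) * forward;

rendl.material.SetVector ("_vec4o", leftOrigin);
rendl.material.SetVector ("_vec4forward", leftForward);
rendr.material.SetVector ("_vec4o", rightOrigin);
rendr.material.SetVector ("_vec4forward", rightForward);
```

With head tracking but no eye tracking, the runtime cannot know where the user is actually converging, which is why parallel (or off-axis, but never toe-in) projections are what HMD SDKs use internally.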
Proper use of "Get User Profile" data. Height, IPD, etc.? (Blueprint)

Hello everyone, the first thing is to enable the Oculus plugin "Online Subsystem Oculus", and then use a "Get User Profile" node, break it, and print those values. But... nope. It returns "true", but the values are still the defaults ("Unknown", 0, 0.1, etc.). What am I missing? Thanks.

Problem with focus when rendering monoscopic (IPD = 0) after SDK 1.3.x
Hello, could somebody help me here? My app rendered with IPD = 0 before 1.3.x, and it worked pretty well: I do NOT shift my cameras according to hmdToEyeViewOffset for each eye, and it rendered fine up to 0.8.0. After upgrading my app to 1.3.x, I no longer get focus on my image. The SDK applies a distortion for me that separates the image on each eye. I assume this happens because the SDK wants to account for the IPD, which my rendering process is NOT doing (again, both cameras are at the same position). It is effectively rendering in mono; however, I need to render each eye separately (because my CONTENT will be stereoscopic). It should not matter, in fact: I am using one half of a texture for each eye and rendering the SAME THING on each side. Is there a way for me to tell the SDK that I WANT IPD = 0? In other words, that my rendering process will not shift the cameras? What am I doing wrong?
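The poster is on the native PC SDK, where the usual fix is to make the eye poses the compositor receives match the un-shifted cameras actually used for rendering (i.e. compute them with a zero per-eye offset). Purely for illustration, and keeping one language for examples on this page, here is a hedged sketch of the same monoscopic idea in Unity C#, assuming a Unity version that exposes Camera.SetStereoViewMatrix; the MonoscopicRig class name is hypothetical, not part of any SDK.

```csharp
using UnityEngine;

// Hypothetical Unity analogue of "tell the runtime I want IPD = 0": override the
// per-eye view matrices so both eyes render from the same centre pose, while the
// app still submits a separate image per eye.
public class MonoscopicRig : MonoBehaviour
{
    Camera cam;

    void Start ()
    {
        cam = GetComponent<Camera> ();
    }

    void LateUpdate ()
    {
        // One centre view matrix for both eyes: the eye separation is effectively zero,
        // so the compositor has no per-eye offset to re-project against.
        Matrix4x4 centreView = cam.worldToCameraMatrix;
        cam.SetStereoViewMatrix (Camera.StereoscopicEye.Left,  centreView);
        cam.SetStereoViewMatrix (Camera.StereoscopicEye.Right, centreView);
    }
}
```

The underlying point either way: in runtime-distortion SDKs (1.3.x onward), the compositor warps each eye's image using the pose you report for it, so rendering from the centre while reporting offset eye poses produces exactly the doubled, out-of-focus image described above.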