(Unity) OVRPlayerController - How to get OVRPlayerController to move with OVRCameraRig
I'm working off the standard OVRPlayerController, which has an OVRCameraRig as its child. This is a game where I need thumbstick locomotion (which the OVRPlayerController provides), but I also need roomscale movement. In other words, when the player moves physically, their in-game avatar should also move. Currently what's happening is that when the player moves physically, the OVRCameraRig moves with them, but the parent OVRPlayerController does not. This is an issue because I need my OVRPlayerController to move with the player at all times for proper collision tracking and targeting by hostile AI. What is the best way to achieve this? I've tried a few ways to make it work, but I'm wondering what the cleanest solution is. I'll also need hand tracking for this game. Perhaps I should simply use the Avatar SDK standard avatar and make it a child of a CharacterController for thumbstick movement? Thanks for the help!
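One common pattern for this (a hedged sketch, not an official Oculus script — the component and field names here are my own) is to consume the headset's drift each frame: move the CharacterController capsule to where the head is, then pull the tracking space back by the same amount so the camera doesn't move twice.

```csharp
using UnityEngine;

// Sketch: keeps the OVRPlayerController's CharacterController centered under
// the HMD. Attach to the OVRPlayerController and wire the fields up in the
// Inspector. Assumes the default OVRCameraRig hierarchy.
public class FollowHead : MonoBehaviour
{
    public Transform trackingSpace;   // OVRCameraRig/TrackingSpace
    public Transform centerEyeAnchor; // OVRCameraRig/TrackingSpace/CenterEyeAnchor
    public CharacterController character;

    void LateUpdate()
    {
        // Head offset from the capsule, flattened onto the ground plane.
        Vector3 delta = centerEyeAnchor.position - transform.position;
        delta.y = 0f;

        // Move the capsule to the head (respects collisions)...
        character.Move(delta);

        // ...then pull the tracking space back so the camera stays put.
        // Note: if Move was blocked by a wall, this pulls the camera back
        // with the capsule, which some games want and some don't.
        trackingSpace.position -= delta;
    }
}
```

This keeps the capsule under the player for physics and AI targeting while leaving thumbstick locomotion untouched, since that still moves the parent transform.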
Interaction SDK Teleport change trigger mapping

I am trying to change the controller mapping for Teleport using the Meta Interaction SDK Locomotion setup. I can see it's set via the Joystick Axis 2D Active State in the Selector and TeleportActiveState, but I would like to use buttons 2 and 4 to trigger Teleport instead. I used OVRButtonActiveState in place of the Axis2DActiveState scripts, but then the arc doesn't render. Has anyone been able to change the teleport trigger to something else? Having it on the joystick causes a lot of undesired teleports, and users are complaining about it.
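As a hedged sketch (assuming the `IActiveState` interface from the `Oculus.Interaction` namespace and `OVRInput` from the Oculus Integration — verify both against your SDK version), a button-driven active state could look like this:

```csharp
using Oculus.Interaction;
using UnityEngine;

// Sketch: reports Active while a chosen controller button is held, for
// wiring into the teleport Selector in place of the joystick-based
// Axis2DActiveState. Button.Two maps to B/Y on Touch controllers.
public class ButtonActiveState : MonoBehaviour, IActiveState
{
    [SerializeField]
    private OVRInput.Button _button = OVRInput.Button.Two;

    public bool Active => OVRInput.Get(_button);
}
```

If the arc still doesn't render after swapping the Selector's state, it's worth checking whether the teleport arc visuals have their own active-state reference that is still pointing at the old Axis2DActiveState.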
Applying Different Post Processing Effects to Left and Right Cameras with OVRCameraRig

Hello, I want to apply different post-processing effects to each eye in my application for the Meta Quest 2, using Unity. My Unity version is 2022.3.7f1, and I am using the latest Oculus Integration asset package, 55.0. As suggested in the documentation, I have replaced my main camera with an OVRCameraRig, and successfully compiled and ran the application. Now, I want to apply different post-processing effects to the left eye and the right eye. I have checked the "Use Per Eye Cameras" checkbox and applied different solid color backgrounds to LeftEyeAnchor and RightEyeAnchor, but it seems that only one of the backgrounds is used, both when running in Unity and when running on the headset. An interesting phenomenon is that in the Unity editor, when I change the background color of *either* camera, it changes the background color in both eyes. The cameras' locations seem to be in the correct places.
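For what it's worth, per-eye cameras are generally only honored under multi-pass stereo rendering; with single-pass/multiview (the usual Quest default) both eyes are rendered from one camera, which would explain both eyes showing one background. Below is a hedged sketch of configuring the two eye cameras directly, using a distinct clear color per eye as a stand-in for any per-eye post-processing; the `Find` paths assume the default OVRCameraRig hierarchy.

```csharp
using UnityEngine;

// Sketch: attach to the OVRCameraRig with "Use Per Eye Cameras" enabled.
// Applies a different background color to each eye camera.
public class PerEyeSetup : MonoBehaviour
{
    void Start()
    {
        Camera left  = transform.Find("TrackingSpace/LeftEyeAnchor").GetComponent<Camera>();
        Camera right = transform.Find("TrackingSpace/RightEyeAnchor").GetComponent<Camera>();

        left.clearFlags  = CameraClearFlags.SolidColor;
        right.clearFlags = CameraClearFlags.SolidColor;
        left.backgroundColor  = Color.red;   // placeholder per-eye effect
        right.backgroundColor = Color.blue;
    }
}
```

If switching the project's stereo rendering mode to Multi Pass is not acceptable for performance reasons, an alternative is a single effect shader that branches on `unity_StereoEyeIndex`.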
Stereo separation and stereo convergence with the Unity camera and a custom renderer

I have created a distributed ray tracer using a shader in Unity. I would like to show this in stereo and be able to navigate with my Oculus Rift. What I have done so far is create two identical quads which I assign to the LeftAnchor and the RightAnchor in the OVRCameraRig. I add a material to each quad which uses the same shader. I create a left and a right layer and assign the culling mask for each eye to each layer. I made sure the quads are large enough that they fill the FOV when I put the HMD on. In my shader I set the cameras to be ±0.0325 m apart along the right camera axis. This gives me stereo, and it looks quite awesome with my real-time ray tracer. However, I'm not sure if I am doing things ideally.

I notice that when I enable VR support in the player settings, the Unity Camera adds the options stereoSeparation and stereoConvergence. How are these options used in Unity? I am a bit surprised and confused by the stereoConvergence option. Currently my left and right cameras share the same axes, e.g. forward points in the same direction, so I don't think they ever converge. The default stereoConvergence in Unity is 10, which I think means they converge at 10 meters? Is this really what Unity does? Should I be implementing convergence in my shader for a better stereo effect? I would imagine that with a 64 mm IPD, convergence at 10 m would have no significant effect. I perhaps naively assume that the left and right eye can be treated as two cameras with the same axes and just a different position, so they never converge. How could any convergence be assumed without eye tracking? If anyone has any other suggestions on how I should implement my ray tracing shader for stereo in Unity, I would be grateful for your comments. Here is my main code, assigned to the OVRCameraRig, for navigation and setting up the shader each frame.
    // "left" and "right" are the names of the quads assigned to the left and right eye
    void Start()
    {
        rendl = GameObject.Find("left").GetComponent<Renderer>();
        rendr = GameObject.Find("right").GetComponent<Renderer>();
    }

    void Update()
    {
        // I use the LeftEyeAnchor for now because the camera is the same as the RightEyeAnchor as far as I know
        Transform tran = GameObject.Find("LeftEyeAnchor").transform;

        // left-hand to right-hand camera transforms?
        tran.Rotate(new Vector3(0, 180, 0));
        tran.localRotation = new Quaternion(-tran.localRotation.x, tran.localRotation.y,
                                            tran.localRotation.z, -tran.localRotation.w);

        Vector4 up = GameObject.Find("LeftEyeAnchor").transform.up;
        Vector4 right = GameObject.Find("LeftEyeAnchor").transform.right;
        Vector4 forward = GameObject.Find("LeftEyeAnchor").transform.forward;
        Vector4 position = GameObject.Find("LeftEyeAnchor").transform.position;
        // Debug.Log(up + " " + right + " " + forward + " " + position);

        rendl.material.SetVector("_vec4up", up);
        rendr.material.SetVector("_vec4up", up);
        rendl.material.SetVector("_vec4right", right);
        rendr.material.SetVector("_vec4right", right);
        rendl.material.SetVector("_vec4forward", forward);
        rendr.material.SetVector("_vec4forward", forward);

        float x = Input.GetAxis("Horizontal");
        float y = Input.GetAxis("Vertical");
        float speed = 0.01f;
        this.transform.position -= GameObject.Find("LeftEyeAnchor").transform.forward * y * speed;
        this.transform.position += GameObject.Find("LeftEyeAnchor").transform.right * x * speed;
        rendl.material.SetVector("_vec4o", this.transform.position);
        rendr.material.SetVector("_vec4o", this.transform.position);
    }

Here is a screenshot from the scene view.
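On the convergence question: stereoConvergence specifies the distance at which the two toed-in view axes cross, so the inward rotation per eye can be derived from the half-IPD and that distance. A small hedged sketch (the class and method names are mine, not Unity API):

```csharp
using UnityEngine;

// Sketch: the toe-in angle each eye would need so that its view axis
// crosses the other eye's axis at a given convergence distance. With
// parallel axes (offset only, as in the shader above), convergence is
// effectively at infinity.
public static class StereoMath
{
    // halfIpd: per-eye offset, e.g. 0.0325 m for a 65 mm IPD.
    // convergenceDistance: where the axes should cross, in meters.
    public static float ToeInDegrees(float halfIpd, float convergenceDistance)
    {
        return Mathf.Atan2(halfIpd, convergenceDistance) * Mathf.Rad2Deg;
    }
}
```

For halfIpd = 0.0325 and a 10 m convergence distance this comes out well under a fifth of a degree, which supports the intuition above that parallel eye axes are a reasonable approximation for HMD rendering.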
Separate Geometry displayed in Left / Right Eye to create Stereo Photos on Quest?

Dear community, we have gathered today to discuss a rather important question: is it possible to render different things in the left and right eye, respectively? Everything works flawlessly in the Editor and over Oculus Link, but the build doesn't show the texture in 3D, only flat. This is my setup: in Unity I put 2 planes at the same location, one with the left-eye texture, one with the right-eye texture. Both are set to different layers. The culling masks of the OVRCameraRig's LeftEyeAnchor / RightEyeAnchor are set to exclude one of the two layers, so the left eye sees plane 1 and the right eye sees plane 2. In the OVRCameraRig settings I already enabled "Use Per Eye Cameras". This works with Oculus Link. Goal: display a stereoscopic photo inside a Unity scene. Problem: the build doesn't work on the Quest. Is that because it renders in a single pass? Any help appreciated! The right solution receives $10 via PayPal ;) Best, Felix
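Single-pass/multiview stereo is indeed the usual culprit here: it renders both eyes from one camera, so the per-eye cameras and their culling masks are ignored in the on-device build, matching the flat result described above. With multi-pass rendering, the layer setup can be expressed roughly like this hedged sketch (layer names "LeftOnly"/"RightOnly" are placeholders):

```csharp
using UnityEngine;

// Sketch: per-eye culling masks for a stereo photo, assuming OVRCameraRig
// with "Use Per Eye Cameras" enabled, multi-pass stereo rendering, and two
// layers created in the project named "LeftOnly" and "RightOnly".
public class StereoPhotoLayers : MonoBehaviour
{
    public Camera leftEye;   // LeftEyeAnchor camera
    public Camera rightEye;  // RightEyeAnchor camera

    void Start()
    {
        int leftLayer  = LayerMask.NameToLayer("LeftOnly");
        int rightLayer = LayerMask.NameToLayer("RightOnly");

        // Each eye culls away the other eye's layer.
        leftEye.cullingMask  &= ~(1 << rightLayer);
        rightEye.cullingMask &= ~(1 << leftLayer);
    }
}
```

If multiview must stay on for performance, an alternative worth exploring is a single plane whose shader samples the left or right half of the photo based on `unity_StereoEyeIndex`.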
How do I get the player controller to follow along with the camera rig?

Using the example player controller provided in the Oculus Integration package, the OVRCameraRig does not update the position of the OVRPlayerController when the user physically moves around. I have tried standard approaches such as updating the position and translating, but nothing has worked. Does anyone know a good way to do this?
How to control the camera offset based on movements from the VR controller?

Hi everyone, we are starting to build our VR health prototype (we have a lot of work ahead, but we are very excited to do it!). Our VR game will be for people with disabilities due to health problems (for example Parkinson's, stroke, etc.), so we need to build it according to this pattern: walkinvrdriver.com/move-rotate-virtual-reality-disabled. On this topic, I found out that we primarily need to control the camera offset based on movements from the controller. Our goal is to implement the same pattern by amplifying the movements (up/down/left/right) or by bringing the destination to the patient through the controllers, because of their limited mobility. For example, if the patient moves the VR controller 1 cm, they will move 10 cm in the game, or the world is pulled towards them, so the patient can move around the game scene with the VR controllers without having to move from their spot. We will build it for the Oculus Quest, and we are thinking of doing it with the OVRCameraRig, so I would appreciate it if someone could share a Unity example that we can use as a reference for implementing movement and rotation for the patients. Would that be possible? Thanks for your time. Best regards, Alejandro
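One way to sketch the amplification idea (hedged: this is an illustrative component of my own, assuming the default OVRCameraRig TrackingSpace and `OVRInput`) is to measure the controller's per-frame movement and offset the whole tracking space by a multiple of it:

```csharp
using UnityEngine;

// Sketch: amplifies the physical motion of the right controller by 'gain',
// so e.g. 1 cm of real hand movement translates the player 10 cm in-game.
// Attach near the OVRCameraRig and assign its TrackingSpace.
public class MotionGain : MonoBehaviour
{
    public Transform trackingSpace; // OVRCameraRig/TrackingSpace
    public float gain = 10f;        // tune per patient

    private Vector3 _lastHandPos;

    void Start()
    {
        _lastHandPos = OVRInput.GetLocalControllerPosition(OVRInput.Controller.RTouch);
    }

    void Update()
    {
        Vector3 handPos = OVRInput.GetLocalControllerPosition(OVRInput.Controller.RTouch);
        Vector3 delta = handPos - _lastHandPos;
        _lastHandPos = handPos;

        // Offset the rig by the amplified movement; (gain - 1) because the
        // controller's own tracked motion already contributes a factor of 1.
        trackingSpace.position += trackingSpace.TransformVector(delta) * (gain - 1f);
    }
}
```

For comfort, large gains typically need smoothing and a dead zone around small tremors, which matters for a Parkinson's audience; that filtering is left out of this sketch.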
It begins to flicker in the right eye at a certain angle

Hello, I am new to programming for the Oculus Quest and am trying to solve one problem: if I look at a certain angle, my right eye starts to flicker a lot. If anyone knows the reason, please help me solve this. Thank you very much in advance. I use Unity 2019.4.12f1.
I'm using OVRCameraRig but it builds like a screen

Hello! Help me! Version: 2019.4.9f1. Even though an OVRCameraRig is added to the scene, when I build, the image cannot be seen in 360 degrees and instead appears like a flat screen. No error occurs, and I can look around in 360 degrees as usual when I build a new scene with just an object placed in it. Please let me know if any settings changes come to mind. Thank you.
Independent Camera Pose Control

Hi there! I have a rather nuanced question and I hope there is an easy answer! I was wondering if there is a way to independently control the left- and right-eye camera poses? And if so, from where? I have the newest Oculus SDK and Unity plugin (1.31 and 1.30.0, respectively) and have been picking at OVRCameraRig.cs, but whenever I make modifications to the anchor points, the cameras don't seem to update. Can pose updates to the cameras be done in UpdateAnchors(), or are the anchors only intended to have things attached to them? I've also tried updating in LateUpdate() like the following, which updates the rotation in the Unity GUI but has no effect on the camera itself:

    private void LateUpdate()
    {
        OVRHaptics.Process();
        var leftEyeAnchor = GameObject.Find("LeftEyeAnchor");
        leftEyeAnchor.transform.localRotation = Quaternion.Euler(0, 90, 0);
    }

I know this must be a rather odd question, because why would anyone want to do something so weird!? But I'm looking into a specific depth cue and need control over these cameras independently. I just need to add small independent rotations to the left and right cameras after they have been transformed into head space (i.e. after the tracker has performed its transform). Is this possible? I've read somewhere that Unity performs the local rotation and translation transforms of the left and right eyes relative to the tracking space.
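One avenue worth exploring (a hedged sketch: the timing and matrix-order details should be verified on device) is to leave the anchors alone and instead override the stereo view matrices, which Unity exposes per eye on the Camera. This applies the extra rotation after tracking, in eye space:

```csharp
using UnityEngine;

// Sketch: adds a small independent yaw to each eye after head tracking by
// overriding the stereo view matrices. Angles are illustrative placeholders.
public class PerEyeRotation : MonoBehaviour
{
    public Camera eyeCamera;           // the camera rendering both eyes
    public float leftYawDegrees = 1f;
    public float rightYawDegrees = -1f;

    void OnPreCull()
    {
        // Restore the freshly tracked matrices first, so the extra rotation
        // doesn't accumulate frame over frame.
        eyeCamera.ResetStereoViewMatrices();
        ApplyYaw(Camera.StereoscopicEye.Left, leftYawDegrees);
        ApplyYaw(Camera.StereoscopicEye.Right, rightYawDegrees);
    }

    void ApplyYaw(Camera.StereoscopicEye eye, float yaw)
    {
        Matrix4x4 view = eyeCamera.GetStereoViewMatrix(eye);
        Matrix4x4 extra = Matrix4x4.Rotate(Quaternion.Euler(0f, yaw, 0f));
        // Pre-multiplying applies the rotation in eye space, after tracking.
        eyeCamera.SetStereoViewMatrix(eye, extra * view);
    }

    void OnDisable()
    {
        if (eyeCamera != null) eyeCamera.ResetStereoViewMatrices();
    }
}
```

This avoids fighting OVRCameraRig, which rewrites the anchor transforms every frame and is likely why direct anchor edits in LateUpdate() appear in the Inspector but never reach the render.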