Forum Discussion

🚨 This forum is archived and read-only. To submit a forum post, please visit our new Developer Forum. 🚨
rkkonrad
Explorer
8 years ago

Independent Camera Pose Control

Hi there! I have a rather nuanced question and I hope there is an easy answer! Is there a way to independently control the left and right eye camera poses, and if so, from where? I have the newest Oculus SDK and Unity Plugin (1.31 and 1.30.0, respectively) and have been picking at OVRCameraRig.cs, but whenever I modify the anchor points the cameras don't seem to update. Can pose updates be done in UpdateAnchors(), or are the anchors only intended to have things attached to them? I've also tried updating in LateUpdate() as follows; it updates the rotation in the Unity GUI but has no effect on the camera itself.

private void LateUpdate()
{
    OVRHaptics.Process();
    // Finding the anchor by name every frame is wasteful; shown here just for brevity.
    var leftEyeAnchor = GameObject.Find("LeftEyeAnchor");
    leftEyeAnchor.transform.localRotation = Quaternion.Euler(0, 90, 0);
}

I know this must be a rather odd question, because why would anyone want to do something so weird? But I'm investigating a specific depth cue and need independent control over these cameras. I just need to add small independent rotations to the left and right cameras after they have been transformed into head space (i.e., after the tracker has applied its transform). Is this possible? I've read that Unity applies the local rotation and translation of the left and right eyes relative to the tracking space.
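To make the goal concrete, here is the kind of thing I'm after. This is an untested sketch that assumes Unity's Camera.SetStereoViewMatrix behaves as documented (it overrides the per-eye view matrix that tracking would otherwise supply); the yaw values are purely hypothetical placeholders:

```csharp
using UnityEngine;

// Sketch only: compose a small extra rotation on top of the view matrices
// Unity computed from head tracking and the IPD offset.
public class PerEyeRotation : MonoBehaviour
{
    // Hypothetical tuning values, in degrees.
    public float leftYaw = 1.0f;
    public float rightYaw = -1.0f;

    private Camera cam;

    void Start()
    {
        cam = GetComponent<Camera>();
    }

    void OnPreCull()
    {
        // Clear last frame's override so Get returns the freshly tracked
        // matrices instead of our previously composed ones.
        cam.ResetStereoViewMatrices();

        Matrix4x4 left = cam.GetStereoViewMatrix(Camera.StereoscopicEye.Left);
        Matrix4x4 right = cam.GetStereoViewMatrix(Camera.StereoscopicEye.Right);

        // Apply the extra per-eye rotation in view space.
        cam.SetStereoViewMatrix(Camera.StereoscopicEye.Left,
            Matrix4x4.Rotate(Quaternion.Euler(0f, leftYaw, 0f)) * left);
        cam.SetStereoViewMatrix(Camera.StereoscopicEye.Right,
            Matrix4x4.Rotate(Quaternion.Euler(0f, rightYaw, 0f)) * right);
    }
}
```

Whether this plays nicely with the Oculus plugin's own camera handling is exactly what I'm unsure about.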

6 Replies

Replies have been turned off for this discussion
  • Hi @imperativity! Thanks for the response. I wasn't aware of that sample framework, but when I went through it in detail I couldn't quite find anything that helps with my problem. Essentially what I'm trying to do is apply a transform to the left and right cameras once their positions and poses have been completely set (even after the IPD transform).

    This is how I understand things to work currently. Unity takes in the tracker information (from UnityEngine.XR.InputTracking) and applies that transform to each camera along with the eye-specific IPD shift to get the left and right eye views. What I need to do is apply a transform after all of this has already been done. Is this possible, or does Unity do all of this under the hood?
  • Did you ever figure this out? I want to do the same thing.
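For anyone who finds this thread later, one direction worth trying is hooking the rig's post-tracking callback instead of LateUpdate(). This is an unverified sketch against the Oculus Unity integration's OVRCameraRig; it assumes the rig exposes the UpdatedAnchors event and the leftEyeAnchor/rightEyeAnchor transforms (recent integration versions do), and that per-eye cameras are enabled so each anchor actually drives its own Camera. The yaw offsets are hypothetical:

```csharp
using UnityEngine;

// Sketch: add a small per-eye rotation right after OVRCameraRig writes the
// tracked poses, so it lands on top of head tracking and the IPD offset.
public class EyeAnchorTweak : MonoBehaviour
{
    public OVRCameraRig rig;        // assign in the Inspector
    public float leftYaw = 1.0f;    // hypothetical small offsets, degrees
    public float rightYaw = -1.0f;

    void OnEnable()  { rig.UpdatedAnchors += ApplyOffsets; }
    void OnDisable() { rig.UpdatedAnchors -= ApplyOffsets; }

    private void ApplyOffsets(OVRCameraRig r)
    {
        // Compose on top of the pose the tracker just wrote this frame.
        r.leftEyeAnchor.localRotation  *= Quaternion.Euler(0f, leftYaw, 0f);
        r.rightEyeAnchor.localRotation *= Quaternion.Euler(0f, rightYaw, 0f);
    }
}
```

Note that with the default single stereo camera, the engine may still override the eye poses at render time; enabling the rig's per-eye camera mode (usePerEyeCameras, where available) is what should make these anchor rotations visible.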