Forum Discussion
ziphnor (Expert Protege) · 11 years ago
Combining Oculus and perception neuron integrations
Hi,
First off, a disclaimer: I am a complete noob when it comes to Unity; I only downloaded Unity to try to set up a simple VR demo combining my DK2 with my Perception Neuron tracking suit. I do work as a software developer professionally, though (mostly in C#), so hopefully I have a chance :)
The Neuron suit has Unity integration, and so far I have managed to insert its prefab Neuron Robot character into a free scene from the Asset Store, and can enjoy watching it mimic my movements. I can also place an OVRCameraRig and get a nice VR view of the robot flailing about. Now comes the tricky part: I would really like the camera to match the head position of the robot. This is tricky for several reasons:
1. I have no idea how to attach the camera to the head :oops:
2. Even if I do manage to attach it, I would expect some sensor fusion to be needed, as both the Neuron suit and the DK2 have their own idea of where my head is. I would expect the DK2 to be much more precise in this regard. A first try could be to use only the rotation from the Rift and rely on the head position supplied by the Neuron suit.
I am not expecting that someone can give me a complete answer here, but I am hoping to get some pointers to resources to look at and approaches that might be worth considering.
For example, would it be easy to use a character intended for first-person use and then attempt to attach the Neuron animation script to it, or am I on the right track with the Neuron Robot?
For those not familiar with Perception Neuron:
https://www.kickstarter.com/projects/16 ... ion-neuron
I am pretty impressed with it so far; it seems to be very low-latency and very precise, even though it cannot supply true positional tracking.
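The "first try" described in point 2 (head position from the Neuron, orientation from the Rift) might be sketched roughly like this in a Unity 5-era script. This is only a sketch, not a tested solution: `neuronHead` is an illustrative field name assumed to be wired to the Neuron character's head bone in the Inspector, and it assumes the Rift's tracking space is aligned with the world.

```csharp
using UnityEngine;
using UnityEngine.VR; // Unity 5.x API; moved to UnityEngine.XR in later versions

// First-pass "sensor fusion": take the head *position* from the Neuron
// skeleton and the head *rotation* from the Rift tracker.
public class HeadPoseMixer : MonoBehaviour
{
    public Transform neuronHead; // illustrative: assign the Neuron head bone

    void LateUpdate()
    {
        // Follow the body-tracked head position...
        transform.position = neuronHead.position;
        // ...but trust the Rift for orientation (assumes tracking space
        // is aligned with world space, a simplification).
        transform.rotation = InputTracking.GetLocalRotation(VRNode.CenterEye);
    }
}
```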
13 Replies
- Anonymous: Just make the main camera a child of the head. Make sure you have "Virtual Reality Supported" checked in the Player settings. If you want to use only orientation tracking, disconnect the tracking camera (the DK2 works without it).
Try this script to disable translation/rotation of your HMD by adding it to MainCamera.
using UnityEngine;
using UnityEngine.VR; // Unity 5.x VR API (UnityEngine.XR in 2017.2+)

/// <summary>
/// Selectively lock axes of an HMD to prevent motion.
/// </summary>
public class HmdLock : MonoBehaviour
{
    public bool rollLock, pitchLock, yawLock;
    public bool xLock, yLock, zLock;

    private Vector3 lastPosn;
    private Quaternion lastRotation;

    void Start()
    {
        lastPosn = InputTracking.GetLocalPosition(VRNode.CenterEye);
        lastRotation = InputTracking.GetLocalRotation(VRNode.CenterEye);
    }

    void Update()
    {
        if (rollLock || pitchLock || yawLock)
        {
            // Undo the body's Y-rotation. Note: this must read
            // eulerAngles.y -- the raw quaternion component rotation.y
            // is not an angle.
            Vector3 rotationChangeEulers = new Vector3(0f, transform.parent.rotation.eulerAngles.y, 0f);
            Vector3 rotationInverse = new Vector3();
            if (rollLock)
                rotationInverse.z = -rotationChangeEulers.z;
            if (pitchLock)
                rotationInverse.x = -rotationChangeEulers.x;
            if (yawLock)
                rotationInverse.y = -rotationChangeEulers.y;
            transform.Rotate(rotationInverse, Space.World);
        }

        if (xLock || yLock || zLock)
        {
            // Cancel the tracked position on the locked axes by offsetting
            // the camera's local position in the opposite direction.
            Vector3 positionChange = InputTracking.GetLocalPosition(VRNode.CenterEye);
            if (!xLock) positionChange.x = 0f;
            if (!yLock) positionChange.y = 0f;
            if (!zLock) positionChange.z = 0f;
            transform.localPosition = -positionChange;
        }

        // Remember this frame's tracked pose.
        lastPosn = InputTracking.GetLocalPosition(VRNode.CenterEye);
        lastRotation = InputTracking.GetLocalRotation(VRNode.CenterEye);
    }

    /// <summary>
    /// Compute the relative rotation between two quaternions by multiplying
    /// 'other' with the conjugate of the reference quaternion.
    /// </summary>
    /// <param name="reference">Rotation relative to which the result is calculated.</param>
    /// <param name="other">The target rotation.</param>
    /// <returns>The rotation that takes 'reference' to 'other'.</returns>
    private static Quaternion RelativeOrientation(Quaternion reference, Quaternion other)
    {
        Quaternion conjugate = new Quaternion(-reference.x, -reference.y, -reference.z, reference.w);
        return other * conjugate;
    }

    public void ResetCorrection()
    {
        transform.localPosition = Vector3.zero;
        transform.localRotation = Quaternion.identity;
    }
}
- ziphnor (Expert Protege): Thanks! That's a much more detailed answer than I was hoping for. Looking forward to trying it out.
- BenKaniobi (Honored Guest): I tried this but somehow I can't get it to work properly. I added the script to the main camera and still get the same result as without it (I added a Debug.Log to the script's Start function to make sure it was really executing). So I tried adding it to the CenterEyeAnchor of the Rift instead, but I still see no change. Any hints on what I'm doing wrong? :?
- vrdaveb (Oculus Staff): Making the camera a child of the head bone is probably going to double-count your head's motion. The head bone's transform is controlled by the Perception Neuron and the camera's transform is controlled by the Rift. You're probably right that the Rift's tracking system will give a more accurate head pose, but we still need to know the relationship between the head and the rest of the body. I would suggest making the camera the root GameObject of your rig and writing a script that moves the body rig each frame so that the head bone's world pose matches the camera's world pose.
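A minimal sketch of that suggestion might look like the following. This is an illustration, not a tested implementation: the script and field names (`AlignBodyToHmd`, `headBone`, `hmdCamera`) are assumptions, with `headBone` assigned to the Neuron character's head bone and `hmdCamera` to the Rift-tracked camera in the Inspector.

```csharp
using UnityEngine;

// Each frame, reposition the body rig so that its head bone's world pose
// coincides with the tracked camera's world pose. Attach this to the
// body rig's root.
public class AlignBodyToHmd : MonoBehaviour
{
    public Transform headBone;  // the Neuron character's head bone
    public Transform hmdCamera; // the Rift-tracked camera

    void LateUpdate() // after the Neuron animation has updated the skeleton
    {
        // Rotate the whole rig by the offset between the head bone's
        // current world rotation and the camera's world rotation.
        Quaternion offset = hmdCamera.rotation * Quaternion.Inverse(headBone.rotation);
        transform.rotation = offset * transform.rotation;

        // Then translate the rig so the head bone lands on the camera.
        transform.position += hmdCamera.position - headBone.position;
    }
}
```

In practice you may want to match only the yaw component of the rotation, so that pitching or rolling your head does not tilt the entire body.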
- Twitchmonkey (Explorer): I'm curious about the use of the Neuron for live performances. Is it currently possible to stream this mocap data from a server to a client? Or is it only possible to capture the mocap information in Unity and then use that for animations? I imagine if this were possible someone would have done it by now, but is it something we could reasonably expect to see in the near future?
- octaspace (Honored Guest): I'm also looking to disable the built-in tracking of the DK2 in Unity 5. Any help would be great!
- vrdaveb (Oculus Staff): Please see viewtopic.php?f=37&t=30078. The best option right now is to overwrite the tracked pose in the OVRCameraRig.UpdatedAnchors event.
- octaspace (Honored Guest):
"vrdaveb" wrote:
Please see viewtopic.php?f=37&t=30078. The best option right now is to overwrite the tracked pose in the OVRCameraRig.UpdatedAnchors event.
Thanks, I already tried to mess around with that part of the code, but I'm still a beginner with C# and had no success. It would be great if you could give me a clue what the code to deactivate the internal trackers should look like. Thanks in advance!
- vrdaveb (Oculus Staff): Try a script like this:
using UnityEngine;

public class TrackingModeChanger : MonoBehaviour
{
    void Awake()
    {
        foreach (var rig in GameObject.FindObjectsOfType<OVRCameraRig>())
            rig.UpdatedAnchors += OnUpdatedAnchors;
    }

    void Update()
    {
        // Hold P to disable the positional tracker.
        OVRManager.tracker.isEnabled = !Input.GetKey(KeyCode.P);
    }

    void OnUpdatedAnchors(OVRCameraRig rig)
    {
        // Hold R to recenter: move the tracking space so the center eye
        // returns to the rig's origin.
        if (Input.GetKey(KeyCode.R))
            rig.trackingSpace.FromOVRPose(rig.centerEyeAnchor.ToOVRPose(true).Inverse(), true);
    }
}
- octaspace (Honored Guest): Thanks a lot! But this is what happens when I attach the script to the OVRCameraRig: when I uncheck "Use Position Tracking", the tracking stutters every second; otherwise the tracking stops for a second when I press "P". :S