Hand Interaction with OVRCameraRig and OVRInteraction, what's going on?

Hey, I'm experimenting with hand tracking for the Quest 2, but I'm a little unsure about how the different hand objects relate to each other. [Referring to the screengrab below...] You can use the basic hands by adding a hand prefab under OVRCameraRigPoseExample > TrackingSpace > LeftHandAnchor/RightHandAnchor. However, to detect poses, you need the OVRInteraction setup (which is parented under OVRCameraRig)... but now I have two places dealing with hands. Also, as in the image, you no longer use a prefab hand under the TrackingSpace anchors, yet it seems I need both of them for poses to work. Does anyone know the purpose of each? The TrackingSpace hands have the OVRHand.cs script, whereas the hands under OVRInteraction use the Hands.cs script. I borrowed this setup from the PoseExamples scene in the Oculus samples folder. Just trying to understand how it works rather than calling it magic...
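To make the question concrete, here's a minimal sketch of how I currently understand the two layers are meant to be queried. The member names are based on the Oculus Integration / Interaction SDK packages as I understand them (OVRHand.IsTracked, GetFingerIsPinching, and the Interaction SDK's GetJointPose / HandJointId), so treat the exact signatures as assumptions on my part:

```csharp
using UnityEngine;
using Oculus.Interaction.Input; // Interaction SDK side (under OVRInteraction)

// Sketch only: probing both hand layers side by side.
// My guess: OVRHand (on the TrackingSpace anchors) exposes the raw
// tracking data, while the Interaction SDK Hand (under OVRInteraction)
// wraps that data for pose detection and interactors.
public class HandLayerProbe : MonoBehaviour
{
    [SerializeField] private OVRHand ovrHand;      // from TrackingSpace > LeftHandAnchor
    [SerializeField] private Hand interactionHand; // from the OVRInteraction hierarchy

    private void Update()
    {
        // Low-level layer: raw tracking state and built-in pinch gesture.
        if (ovrHand != null && ovrHand.IsTracked)
        {
            bool pinching = ovrHand.GetFingerIsPinching(OVRHand.HandFinger.Index);
            Debug.Log($"OVRHand pinching: {pinching}");
        }

        // Interaction SDK layer: joint poses, which the pose recognizers consume.
        if (interactionHand != null &&
            interactionHand.GetJointPose(HandJointId.HandIndexTip, out Pose tipPose))
        {
            Debug.Log($"Interaction SDK index tip at {tipPose.position}");
        }
    }
}
```

If that mental model is right, the TrackingSpace OVRHand is the data source and the OVRInteraction hand is a consumer of it, which would explain why both are needed for poses, but I'd appreciate confirmation.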



Any help wrapping my head around this is much appreciated. Thanks.