Forum Discussion

noirflux
Honored Guest
13 years ago

Rift and Creative Gesture Camera - Hands in VR!

Quick hack: putting an Intel Creative Gesture Camera on a Rift to see if my hands would then work in VR. Not bad results so far; I was worried the Gesture Cam's hand tracking wouldn't work upside-down (normally the camera sits above a monitor, not on top of your head), but it certainly does. It works well enough that I can touch my fingertips together.

The Rift could benefit HUGELY from having some kind of hand tracking, hint hint.

[Image: Rift-Intel_800.png]

[Image: Rift-Katy-1-800.jpg]

3 Replies

  • That looks great. Is the hand tracking code you're using part of the perceptual SDK, or something custom? Also, how are you reconciling the perspective of the camera with the perspective of the Rift? Do you unproject from a camera projection matrix and then reproject into the scene projection after doing a transform?
  • Thanks! This is using the perceptual SDK (and the Rift's), via plugins into "vvvv", which is what I work in. (The plugins were developed by vvvv user Herbst and are available on the vvvv contributions page.)

    All I do here is take the camera-relative hand position data (XYZ) as reported by the SDK, apply the fixed transform for its orientation relative to the Rift, and then apply the Rift orientation transform. Things stay tight, as the hand data is just one transform away from the Rift view; from the user's perspective there is no chain of errors.
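
    A rough sketch of that transform chain in C++ (all names, the mounting matrix, and the offset are illustrative guesses, not actual Perceptual SDK or Rift SDK symbols; measure your own rig):

    ```cpp
    struct Vec3 { float x, y, z; };

    // Apply a 3x3 row-major rotation matrix to a vector.
    static Vec3 rotate(const float R[3][3], Vec3 v) {
        return { R[0][0]*v.x + R[0][1]*v.y + R[0][2]*v.z,
                 R[1][0]*v.x + R[1][1]*v.y + R[1][2]*v.z,
                 R[2][0]*v.x + R[2][1]*v.y + R[2][2]*v.z };
    }

    // Fixed mounting transform: the camera hangs upside-down on the
    // front of the Rift, so flip Y and Z, and offset it from the head
    // pivot. Both values here are guesses for illustration.
    static const float kCamToRift[3][3] = {
        { 1,  0,  0 },
        { 0, -1,  0 },
        { 0,  0, -1 },
    };
    static const Vec3 kCamOffset = { 0.0f, 0.05f, -0.08f };  // metres

    // handCam: hand position in camera coordinates (from the gesture SDK)
    // riftR:   current Rift orientation as a rotation matrix
    // headPos: tracked head position in world space
    Vec3 handToWorld(Vec3 handCam, const float riftR[3][3], Vec3 headPos) {
        Vec3 h = rotate(kCamToRift, handCam);            // camera -> Rift frame
        h = { h.x + kCamOffset.x, h.y + kCamOffset.y, h.z + kCamOffset.z };
        h = rotate(riftR, h);                            // Rift frame -> world
        return { h.x + headPos.x, h.y + headPos.y, h.z + headPos.z };
    }
    ```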

    In practice I'm also doing head tracking for walk-around VR, and even though absolute world-position accuracy is rather loose (it depends on the Rift's orientation accuracy), it still works great for "touching" virtual objects. Now if you are trying to touch a REAL physical object as represented in the Rift VR, then it is a problem, as you do get a chain of errors; I expect the Gesture Cam's depth data will be needed to register the physical object into the VR, but that's work yet to do.
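
    The reason the loose absolute accuracy doesn't hurt: the hand and the view both go through the same Rift orientation, so a contact test only sees the small relative error between them. A minimal proximity check, with the 1 cm slop being a guess:

    ```cpp
    #include <cmath>

    struct Vec3 { float x, y, z; };  // same layout as the sketch above

    // True when the fingertip is within the object's radius (plus slop).
    bool isTouching(Vec3 fingertip, Vec3 objectCentre, float radius) {
        float dx = fingertip.x - objectCentre.x;
        float dy = fingertip.y - objectCentre.y;
        float dz = fingertip.z - objectCentre.z;
        return std::sqrt(dx*dx + dy*dy + dz*dz) < radius + 0.01f;  // ~1 cm slop
    }
    ```
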
  • "noirflux" wrote:
    This is using the perceptual SDK (and the Rift's), via plugins into "vvvv", which is what I work in.


    I've heard of vvvv and looked at its website, but I work primarily in Linux, so I haven't really taken the time to play around with it. I did play with a 'vvvv' clone/toy that was developed using Three.js here: https://github.com/idflood/ThreeNodes.js

    I like the style of visual programming, and I wish I had the time to make something similar for Linux, or that they'd port vvvv.

    "noirflux" wrote:
    In practice I'm also doing head tracking for walk-around VR, and even though absolute world-position accuracy is rather loose (it depends on the Rift's orientation accuracy), it still works great for "touching" virtual objects.


    I have one of these cameras, but from Softkinetic rather than through Intel. My experience is that even though they are only rated for a couple of meters, in indoor environments you often get surfaces from much further away. I've been tempted to do some work on a) real-time environment reconstruction and b) integrating the depth data with the head-tracker data for another layer of sensor fusion, allowing true 6DOF for the Rift and improved yaw correction that wouldn't be based on the magnetic sensor.
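
    The simplest version of that fusion would be a complementary filter: let the gyro carry yaw frame to frame and let an absolute yaw estimate from depth-frame registration slowly bleed off the drift. A sketch, assuming the depth-derived yaw comes from some scan-matching step not shown here:

    ```cpp
    #include <cmath>

    struct YawFilter {
        float yaw = 0.0f;  // fused yaw estimate, radians

        // gyroRate: angular velocity about the vertical axis (rad/s)
        // yawDepth: absolute yaw from depth registration (rad)
        // dt:       frame time (s)
        // alpha:    blend factor; close to 1 means trust the gyro short-term
        void update(float gyroRate, float yawDepth, float dt, float alpha = 0.98f) {
            float predicted = yaw + gyroRate * dt;   // dead-reckon with the gyro
            // Wrap the correction so it takes the short way around the circle.
            float err = std::atan2(std::sin(yawDepth - predicted),
                                   std::cos(yawDepth - predicted));
            yaw = predicted + (1.0f - alpha) * err;  // slow drift correction
        }
    };
    ```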

    "noirflux" wrote:
    Now if you are trying to touch a REAL physical object as represented in the Rift VR, then it is a problem, as you do get a chain of errors; I expect the Gesture Cam's depth data will be needed to register the physical object into the VR, but that's work yet to do.


    That's going to be tricky. From the Softkinetic forums and my own experience, I learned these cameras tend to have a high bias that can change each time you initialize the camera; i.e., something that was reported at 30 cm away before is now reported at 20 cm, even though its actual position is 35 cm. It implies you need a registration mechanism to calculate the bias every single time you start your application (a sketch of that appears at the end of this post). I suppose it would provide an entertaining opportunity to attach something like this to the Rift:



    I have some experience with this problem as well. Though I haven't been using the depth camera with the Rift much yet, I have used a simple color webcam with a wide-angle lens attached to provide a kind of passthrough-vision application when I used the Rift as part of a costume at PAX. It would be handy to have some open-source code for calibrating and correcting such real-world imagery, either depth- or color-based.
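
    For the bias problem above, the per-session registration could be as simple as averaging raw readings of a target held at a known distance when the application starts. A sketch, with the procedure and all names made up for illustration:

    ```cpp
    #include <numeric>
    #include <vector>

    struct DepthBias {
        float bias = 0.0f;  // metres, added to every raw reading

        // samples:   raw depth readings (m) of a target actually at knownDist
        // knownDist: measured distance to the calibration target (m)
        void calibrate(const std::vector<float>& samples, float knownDist) {
            float mean = std::accumulate(samples.begin(), samples.end(), 0.0f)
                       / samples.size();
            bias = knownDist - mean;  // e.g. reported 0.20 m, actual 0.35 m -> +0.15 m
        }

        float correct(float rawDepth) const { return rawDepth + bias; }
    };
    ```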