I'm not sure I'll have any hand-gesture-based UI by the time PAX rolls around. I'm still leaning towards the Raspberry Pi for driving the headset, which rules out using the DepthSense camera to recognize gestures. Now that the Leap Motion is shipping, I've ordered one to see how easy it will be to integrate.
The DepthSense's color camera is not the best: it has poor resolution and performs badly in low light. This strikes me as ironic, since the depth-sensing hardware actually performs better the lower the ambient light level. I've decided to go ahead with another webcam, and picked up a Logitech C920 from the local Fry's. It performs very well in daylight or at night, maintaining a high framerate regardless.
In order to keep a reasonable field of view, I also purchased the fisheye lens listed above. The lens has a small magnetic ring around its base, so it will attach to any ferrous surface.
The webcam didn't have an appreciable amount of metal in it, so I disassembled it and epoxied a metal washer to the front surrounding the webcam lens:
The resulting mechanism works well. In fact, the webcam's distortion is almost the exact inverse of that required by the Rift, so I'm able to render the images directly to the screen without a distortion pass and still see reasonably well.
The painter's tape adhering the webcam to the Rift is a temporary solution.
38 days to go.
The DepthSense 325 is a combination color/depth camera using time-of-flight technology, which allows extremely short-range interaction. I'm planning on attaching it to the Rift itself as the basis for pass-through vision rendered on the HMD, possibly with the addition of a webcam with a fisheye lens. I'd like to get a gesture-based UI working as well, but I doubt I'll have time before the event.
Currently the Raspberry Pi seems to be the best option for driving the display, since I can use OpenCV for reading the camera and GLES for rendering to the HMD. The only drawback is that the camera has no ARM drivers; while the color stream works automatically via the UVC framework, trying to access the depth stream just hangs. Another alternative is using one of my ZBox devices, but they'd likely draw much more power, so I don't know if that's practical.
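For reference, the capture side of that pipeline is tiny; a minimal sketch, assuming OpenCV's V4L2/UVC backend finds the color stream at device index 0 (the GLES texture upload and fullscreen-quad rendering are out of scope here):

```cpp
#include <opencv2/opencv.hpp>

// Minimal capture loop on the Pi: OpenCV opens the camera through the
// V4L2/UVC backend, which is why the color stream "just works" while the
// depth stream (which needs a vendor driver) does not. Device index 0 is
// an assumption; in the real pipeline each frame would be uploaded as a
// GLES texture and drawn to the HMD.
int main() {
    cv::VideoCapture cap(0);
    if (!cap.isOpened())
        return 1; // no UVC device found

    cv::Mat frame;
    while (cap.read(frame)) {
        // ...upload frame.data to a texture and draw a fullscreen quad...
        if (cv::waitKey(1) == 27) // Esc to quit
            break;
    }
    return 0;
}
```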
I'm also probably going to add at least one more webcam to the design, possibly mounted somewhere other than the headset, and use the head-tracking to merge the two viewpoints into a single view on the headset. If I follow that path I'll probably attempt to use magnets integrated into the outfit at known locations to give the HMD a better frame of reference and prevent drift.
I'll update this as the costume comes together more. 53 days to go (I'm counting towards PAX Dev).
Finally got this done for the third day of PAX and it was a pretty big hit.
"geekmaster" wrote: I hope to see the video, and the source code too.
No video yet, but there's a post on it at Rifty Business which includes a picture and a link to the source, which I'm including here as well. Nothing really that interesting since I'm not using the head tracker and not really doing any distortion.
Hi there, I am new at developing for the Oculus Rift. I am interested in getting this to work for me. I have created a blank project in VS C++ 2010, and I have placed the header files in the header folder and the source files in the source folder. However, when I try to compile it all, I get errors like this one: fatal error C1083: Cannot open include file: 'EGL/egl.h': No such file or directory
I am wondering if I am doing something wrong, or if I need to install some things. If you can help me out, that would be great. Thanks!
The code was written for the Raspberry Pi, which uses OpenGL ES and EGL for GL window creation. If you want to run it on a Windows machine you'll need to replace EGL with something else, such as GLFW, and possibly port the code from GL ES to desktop OpenGL.
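To give an idea of what that swap looks like, here's a sketch of the GLFW side of it. The window size matches the DK1's 1280x800 panel; the title and loop body are placeholders, and any GLES-only calls in the original source would still need porting to desktop OpenGL:

```cpp
#include <GLFW/glfw3.h>
#include <cstdio>

// Sketch of replacing the Pi's EGL setup with GLFW on a desktop machine:
// GLFW handles the window and GL context creation that EGL did on the Pi.
int main() {
    if (!glfwInit()) {
        fprintf(stderr, "glfwInit failed\n");
        return 1;
    }
    // 1280x800 matches the Rift DK1 panel; the title is arbitrary.
    GLFWwindow* window = glfwCreateWindow(1280, 800, "Rift passthrough", nullptr, nullptr);
    if (!window) {
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(window);

    while (!glfwWindowShouldClose(window)) {
        // ...render the camera frame here...
        glfwSwapBuffers(window);
        glfwPollEvents();
    }
    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}
```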