
Rift full-body avatar with Kinect, Razer Hydra, and PS Move

Ttakala
Honored Guest
Over one month ago we released the downloadable TurboTuscany demo, which supports the Oculus Rift together with Kinect, Razer Hydra, or PS Move. Since then we have released a new video demonstrating full-body avatar control using the Kinect + PS Move combination:



Below I summarize what we learned while developing the TurboTuscany demo. Some of our findings are significant, while others are common knowledge if you have developed for Razer Hydra, Kinect, or PS Move before.

Latency of the devices we used, from lowest to highest:
Oculus Rift, Razer Hydra, PS Move, Kinect

Body tracking with Kinect has easily noticeable lag and plenty of jitter, and the tracking fails often. Nevertheless, Kinect adds a lot to the immersion and is fun to play around with.

Of all the positional head tracking methods available in our TurboTuscany demo, PS Move is the best compromise: a big tracking volume (almost as big as Kinect's) and accurate tracking (though not as accurate as Razer Hydra). Therefore the best experience of our demo is achieved with Oculus Rift + Kinect + PS Move. Occlusion of the Move controller from the PS Eye's view is a problem for positional tracking though (not for rotational tracking).

The second best head tracking is achieved with the combination of Oculus Rift, Kinect, and Razer Hydra. This comes with the added cumbersomeness of having to wear a Hydra controller around the waist.

My personal opinion is that VR systems with a virtual body should track the user's head, hands, and forward direction (chest/waist) separately. This way the user can look in one direction while pointing a hand-held tool or weapon in another and walking in a third. In the TurboTuscany demo we achieve this with the combination of Oculus Rift, Kinect, and Hydra/Move; the sketch below illustrates the idea.
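
To illustrate the decoupling, here is a minimal Unity C# sketch. The head/hand/chest transforms and how they receive their data are hypothetical placeholders, not our actual RUIS code:

    // Minimal sketch: look, aim, and walk directions driven by three separate trackers.
    using UnityEngine;

    public class DecoupledLocomotion : MonoBehaviour
    {
        public Transform head;   // e.g. Rift orientation + a positional tracker
        public Transform hand;   // e.g. Hydra or PS Move
        public Transform chest;  // e.g. Kinect torso joint
        public float speed = 2f;

        void Update()
        {
            // Walking direction follows the chest yaw, not the gaze or the hand.
            Vector3 forward = Vector3.ProjectOnPlane(chest.forward, Vector3.up).normalized;
            Vector3 right = Vector3.Cross(Vector3.up, forward);
            Vector3 move = forward * Input.GetAxis("Vertical") + right * Input.GetAxis("Horizontal");
            transform.position += move * speed * Time.deltaTime;

            // The camera follows 'head' and a weapon would follow 'hand',
            // so all three directions stay independent.
        }
    }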

Latency requirements for positional head tracking

The relatively low latency of Razer Hydra's position tracking should be low enough for many HMD use cases. When viewing nearby objects, however, the Hydra's latency becomes apparent as you move your head. Unless STEM has some new optimization tricks, it will most likely have a different (higher?) latency than the Hydra because it's wireless.

If the head position tracking latency is less than or equal to that of Oculus Rift's rotational tracking, it should be good enough for most HMD applications. Since this is not a scientific paper, I won't cite the earlier research that suggests latency requirements in milliseconds.

Because we had positional head tracking set up to track the point between the eyes, we first set Oculus Rift's "Eye Center Position" to (0,0,0); this parameter determines a small translation that follows the orientation of the Rift. But we found out that the latency of our positional head tracking was apparent when moving the head close (<0.5 meters) to objects, even with Razer Hydra. Therefore we ended up setting "Eye Center Position" to the default (0, 0.15, 0.09), and viewing close objects while moving became much more natural. Thus, our positional head tracking has a "virtual" component that follows the Rift's orientation.
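
In code the idea is roughly the following Unity C# sketch (the input fields are hypothetical placeholders; the offset is the default value mentioned above):

    // Sketch of the "virtual" eye-offset component: a simple neck model
    // whose offset rotates with the Rift's low-latency orientation.
    using UnityEngine;

    public class EyeCenterOffset : MonoBehaviour
    {
        // Default Rift "Eye Center Position": 15 cm up, 9 cm forward of the neck pivot.
        public Vector3 eyeCenterPosition = new Vector3(0f, 0.15f, 0.09f);

        public Vector3 trackedHeadPosition;  // from Hydra/Move/Kinect (hypothetical input)
        public Quaternion riftOrientation;   // from the Rift's rotational tracking

        void LateUpdate()
        {
            // Small head rotations produce immediate positional parallax
            // even when the positional tracker itself lags.
            transform.position = trackedHeadPosition + riftOrientation * eyeCenterPosition;
            transform.rotation = riftOrientation;
        }
    }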

Kamus
Protege
Thank you for sharing your findings!
I've yet to try your demo; I haven't gotten around to installing those Kinect drivers on my PC.
The PrioVR stuff looks like the best solution for full body tracking once it comes out. Are you a backer of that project?
It'd be awesome if your demo had support for it when it comes out.
I think that in the end, the most practical solution will be a much improved version of the Kinect doing the body tracking. It would probably require more than one device to make a fully working model of your body, but it's probably the easiest and most realistic way to track our movements (you could basically "3D scan" yourself into the game).

I wonder if it's even possible to get the latency down to where we need it, though, along with working out the other shortcomings you mention. But if the new Kinect is any indication, it might just be a matter of time before we can get ultra-high-resolution models with negligible latency.

Also, I have another question for you: how hard was it to implement all those different methods? With all the sensors coming out, such as the STEM and PrioVR (along with the other stuff you are already using), it would be nice to know how hard it was, so people can have realistic expectations about support for their sensors of choice in any given game.

Ttakala
Honored Guest
Good points Kamus!

"Kamus" wrote:
Thank you for sharing your findings!
I've yet to try your demo; I haven't gotten around to installing those Kinect drivers on my PC.
The PrioVR stuff looks like the best solution for full body tracking once it comes out. Are you a backer of that project?
It'd be awesome if your demo had support for it when it comes out.


I'm not a backer of PrioVR, at least not yet. It looks like an affordable version of the XSENS MVN tracking system. I'll probably wait and see whether STEM or PrioVR comes out on top.
We created our TurboTuscany demo with RUIS, our virtual reality add-on for Unity 3D. We'll add support for other devices in the future, and developers can use RUIS to create their own immersive VR applications. It remains to be seen which devices will be supported, however.

"Kamus" wrote:

I think that in the end, the most practical solution will be a much improved version of the Kinect doing the body tracking. It would probably require more than one device to make a fully working model of your body, but it's probably the easiest and most realistic way to track our movements (you could basically "3D scan" yourself into the game).

I wonder if it's even possible to get the latency down to where we need it, though, along with working out the other shortcomings you mention. But if the new Kinect is any indication, it might just be a matter of time before we can get ultra-high-resolution models with negligible latency.


I agree; Kinect is the best solution also in the sense that you don't have to wear a lot of sensors. I can't imagine many gamers being willing to spend several minutes putting on 17 sensors before starting a game.

Depth camera solutions (like Kinect) will always have more latency than wearable sensors, because there are more steps in between: 1) depth-image generation, 2) segmentation of the image into background and users, and 3) pose estimation. Steps 2 and 3 take the most time, because the algorithms tend to be sophisticated and there is a large number of possible outcomes. I don't know if the latency can be pushed below VR requirements in the near future without implementing the algorithms in depth camera hardware. That being said, I look forward to seeing Kinect 2's performance.

"Kamus" wrote:
Also, I have another question for you: how hard was it to implement all those different methods? With all the sensors coming out, such as the STEM and PrioVR (along with the other stuff you are already using), it would be nice to know how hard it was, so people can have realistic expectations about support for their sensors of choice in any given game.


Are you talking about just head tracking or full-body controlled avatar?

Kamus
Protege
"Ttakala" wrote:


Are you talking about just head tracking or full-body controlled avatar?


Full-body controlled avatar 😄

Ttakala
Honored Guest
"Kamus" wrote:

Full-body controlled avatar 😄


We created our RUIS for Unity toolkit so that it would be simple to implement full-body tracked avatars whose body parts can be blended with canned animation clips using Unity's Mecanim system. Right now our toolkit's full-body animation features are supported only via Kinect, though.

If you are not using a helpful toolkit such as ours, it does take work to create a full-body tracked avatar in a game engine. With PrioVR it should be relatively simple, since their system provides rotations for all body joints. I suspect that in their videos they used Kinect to get the root bone position, as they need some external tracking system for that. With STEM it will be hard for an inexperienced developer to animate the whole body, unless the STEM programming library provides inverse kinematics for the untracked joints.
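
As a rough illustration, driving an avatar from a rotations-only suit plus an external root tracker boils down to something like this Unity C# sketch (the tracking inputs are hypothetical, not any vendor's actual API):

    // Sketch: apply per-joint rotations from a suit and a root position from e.g. Kinect.
    using System.Collections.Generic;
    using UnityEngine;

    public class TrackedAvatar : MonoBehaviour
    {
        public Transform root;  // pelvis/root bone of the rig

        // Hypothetical per-frame tracking data: absolute joint orientations.
        public Dictionary<Transform, Quaternion> jointRotations =
            new Dictionary<Transform, Quaternion>();
        public Vector3 rootPosition;  // IMU suits don't measure position, so it comes from elsewhere

        void LateUpdate()
        {
            root.position = rootPosition;
            foreach (var joint in jointRotations)
                joint.Key.rotation = joint.Value;
        }
    }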

gallantpigeon
Honored Guest
Would it be possible to perform a bit of sensor fusion with the Oculus accelerometer and the Kinect to get goodish head tracking? In theory the latency from the accelerometer would be low enough to keep head tracking immersive, but the position would drift over time due to accumulating sensor errors. Perhaps you could smoothly accelerate and then decelerate the predicted head position derived from the Oculus accelerometer towards the high-latency but drift-free position tracked by the Kinect whenever a large discrepancy is observed (the discrepancy threshold helping to filter out the Kinect jitter).

I guess my point is that the head need not be tracked accurately, only roughly, over large spaces... as long as the perception of movement is reflected in the digital space with minimal lag, it should give the illusion that your head and the avatar's head are exactly in sync, even though the Kinect's tracking will probably only get the actual position accurate to within 1 ft.

Ttakala
Honored Guest
"gallantpigeon" wrote:
Would it be possible to perform a bit of sensor fusion with the Oculus accelerometer and the Kinect to get goodish head tracking? In theory the latency from the accelerometer would be low enough to keep head tracking immersive, but the position would drift over time due to accumulating sensor errors. Perhaps you could smoothly accelerate and then decelerate the predicted head position derived from the Oculus accelerometer towards the high-latency but drift-free position tracked by the Kinect whenever a large discrepancy is observed (the discrepancy threshold helping to filter out the Kinect jitter).

I guess my point is that the head need not be tracked accurately, only roughly, over large spaces... as long as the perception of movement is reflected in the digital space with minimal lag, it should give the illusion that your head and the avatar's head are exactly in sync, even though the Kinect's tracking will probably only get the actual position accurate to within 1 ft.


It's possible, but it is difficult to get noticeable improvements. I think it would take considerable sensor fusion effort to improve Kinect head tracking with an IMU (accelerometers + gyros), probably requiring machine learning as part of the implementation. You would also need a really good-quality IMU.

The problem is that using only an IMU for positional tracking results in unusable data (even over short distances): the tracked 3D position will not just drift, it will fly away like a butterfly in a random direction as soon as you move the sensor. That's my experience with the current generation of cheap MEMS IMUs and simple data fusion schemes, anyway.
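
That said, the correction scheme you describe is essentially a complementary filter. A minimal Unity C# sketch of the idea (the input fields and gain are assumptions; a real implementation would also have to correct the velocity estimate and handle Kinect dropouts):

    // Sketch: dead-reckon position from the IMU, bleed drift off toward Kinect.
    using UnityEngine;

    public class HeadPositionFusion : MonoBehaviour
    {
        public Vector3 imuAcceleration;  // gravity-compensated, world frame (hypothetical input)
        public Vector3 kinectPosition;   // high latency and jittery, but drift-free
        [Range(0f, 1f)] public float correctionGain = 0.02f;  // small: trust the IMU short-term

        Vector3 position;
        Vector3 velocity;

        void FixedUpdate()
        {
            // Low-latency prediction by integrating the IMU...
            velocity += imuAcceleration * Time.fixedDeltaTime;
            position += velocity * Time.fixedDeltaTime;

            // ...then pull the accumulated drift toward the Kinect estimate.
            position = Vector3.Lerp(position, kinectPosition, correctionGain);
            transform.position = position;
        }
    }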

d4n1
Honored Guest
This looks interesting; I placed an order for the Rift a couple of days ago, so until it arrives I have some time to start researching some interesting things I would like to try. Thankfully my basement is quite big, so hopefully I won't have to worry about slamming into a wall. I've been looking at the STEM System from Sixense (http://sixense.com/wireless), so I can't wait for that to get underway. Apparently it will be "released" in 2014, but it will probably be wrapped with some Razer tags and green goop.

Ttakala
Honored Guest
"d4n1" wrote:
This looks interesting; I placed an order for the Rift a couple of days ago, so until it arrives I have some time to start researching some interesting things I would like to try. Thankfully my basement is quite big, so hopefully I won't have to worry about slamming into a wall. I've been looking at the STEM System from Sixense (http://sixense.com/wireless), so I can't wait for that to get underway. Apparently it will be "released" in 2014, but it will probably be wrapped with some Razer tags and green goop.


I'm looking forward to getting my hands on STEM as well. Hopefully we can use it to replace the PS Move controllers in our system.

needsloomis
Honored Guest
That's pretty awesome, although a little silly with a Move controller attached to your head 😃

Have you considered AR tracking with the Hydra and a fast webcam like the PS3 Eye? Affix an AR target sticker to the front of the Rift, and maybe stick a simple dot on each shoulder. It might be more accessible to the end user.