Forum Discussion
sven (Protege)
13 years ago
Positional tracking calibration
I'm still hopeful about the idea of approximating positional tracking in limited-mobility scenarios (e.g. the user sitting in a chair) with the accelerometers inside the Rift tracker.
I think it may be worthwhile to capture real positional data recorded by a different system (Razer Hydra, MS Kinect or PS Move) and correlate it with the readings from the sensor.
Do you think this is worthwhile? Has anyone tried it?
74 Replies
- geekmaster (Protege)
"Tgaud" wrote:
The best is radio-frequency position tracking:
four small receivers on your desktop, and an emitter in the Oculus Rift, which "pings" every receiver.
From the time difference between the receptions at each receiver, it can tell precisely where the emitter is in the room.
It works in all conditions and has every possible advantage and precision.
If you are going to mount it in your desktop, then why not just use a wired solution? Wireless has to deal with radio interference and bandwidth issues, and probably transmitter licensing issues as well, and certainly FCC (and other agency) certification if you plan to sell it.
Using the built-in hardware is basically free (certainly not the rather high cost of the RF tracking gear that you suggest). If you really need all that accuracy for a specific purpose, great. But for gaming? I don't think so. Especially for a wide general audience who just spent all their spare funds on a Rift...
For Mom and Pop gaming, I think that the methods I suggested are sufficient. The "best" is great and all, especially if you have a research grant to pay for it...
Oh, and your absolute statement is not true... In fact, your "market-droid" phraseology sounds like you have an agenda here -- do you sell such things or something? Perhaps a disclaimer would be in order...
- geekmaster (Protege)
"Tgaud" wrote:
if you have 1 mm of error when moving 30 cm, nothing is observable;
then, if you play for 4 hours, you can assume you'll have moved 3 km, so the 1 mm error becomes a 1 meter error.
Where do you get those numbers?
There are a lot of people and a lot of applications that would LOVE to have only a 1 meter error after a 4-hour gap in their GPS data. In fact, a 1 meter error just during the time it takes to drive through a tunnel would be pretty handy. Even the military would like that kind of accuracy without having to stick atomic clocks into their positioning systems to adjust for IMU errors during GPS dropouts.
Do you actually know what you are talking about? I would love to see some references (such as URLs) that support some of your claims...
FYI, your comparisons above differ by an order of magnitude in their error rates...
- Tgaud (Honored Guest)
"geekmaster" wrote:
"Tgaud" wrote:
The best is radio-frequency position tracking:
four small receivers on your desktop, and an emitter in the Oculus Rift, which "pings" every receiver.
From the time difference between the receptions at each receiver, it can tell precisely where the emitter is in the room.
It works in all conditions and has every possible advantage and precision.
If you are going to mount it in your desktop, then why not just use a wired solution? Wireless has to deal with radio interference and bandwidth issues, and probably transmitter licensing issues as well, and certainly FCC (and other agency) certification if you plan to sell it.
Using the built-in hardware is basically free (certainly not the rather high cost of the RF tracking gear that you suggest). If you really need all that accuracy for a specific purpose, great. But for gaming? I don't think so. Especially for a wide general audience who just spent all their spare funds on a Rift...
For Mom and Pop gaming, I think that the methods I suggested are sufficient. The "best" is great and all, especially if you have a research grant to pay for it...
Oh, and your absolute statement is not true... In fact, your "market-droid" phraseology sounds like you have an agenda here -- do you sell such things or something? Perhaps a disclaimer would be in order...
No, the frequency is different. It's the same frequency used in medical radio imaging, so there's no compatibility problem.
And the point is to be able to move all over your room and be located precisely.
Look at this video:
https://www.youtube.com/watch?feature=player_embedded&v=mYyFUQbWC1E
There is also a topic here:
viewtopic.php?f=25&t=787&p=11215&hilit=radio#p11215
- Tgaud (Honored Guest)
"geekmaster" wrote:
"Tgaud" wrote:
if you have 1 mm of error when moving 30 cm, nothing is observable;
then, if you play for 4 hours, you can assume you'll have moved 3 km, so the 1 mm error becomes a 1 meter error.
Where do you get those numbers?
There are a lot of people and a lot of applications that would LOVE to have only a 1 meter error after a 4-hour gap in their GPS data. In fact, a 1 meter error just during the time it takes to drive through a tunnel would be pretty handy. Even the military would like that kind of accuracy without having to stick atomic clocks into their positioning systems to adjust for IMU errors during GPS dropouts.
Do you actually know what you are talking about? I would love to see some references (such as URLs) that support some of your claims...
FYI, your comparisons above differ by an order of magnitude in their error rates...
Well, I take it from this message:
That's all well and good, but unless you actually test it, you'll never know.
As for accelerometer based position tracking...
I tried that out once, a long time ago, and while I'm sure there is room for improvement, let me tell you, those test results were not encouraging.
The results of trying to track position with an accelerometer were such that after about 3-4 seconds, the reported position was on the order of several meters away from the actual position.
Left to run at that rate, after about a minute the reported position could easily be several hundred meters off.
Now, I'm sure you could code something better thought out than what I was doing back then, but the drift is huge.
Using an accelerometer as a PRIMARY data source for position tracking is a really bad idea.
At best it can give you a bit of supplementary data, but if you're thinking it can be the primary source, and you use the camera (or whatever else there is) to correct it, you're being way too optimistic about just how huge the error is for acceleration data.
Remember, you are trying to determine a position in space, starting from accelerometer data:
You first have the error in the reported acceleration.
You then need to integrate the acceleration to get the velocity,
then integrate again to find the position.
Each integration step compounds the error: a constant acceleration error grows linearly in the velocity estimate and quadratically in the position estimate.
Not only that, but the error is cumulative. So, if the original error in reported acceleration is +-0.1 m/s^2, then the potential error in reported velocity is at least +-0.2 m/s, and it keeps growing: for a velocity calculated from 3 values it's +-0.3 m/s, for 10 values it's +-1 m/s -- and at this point you don't even have a position value yet.
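The compounding described above can be sketched numerically. This is my own illustration, not code from the thread: it assumes a constant accelerometer bias (using the +-0.1 m/s^2 figure quoted above) and shows how double integration turns it into quadratic position drift.

```python
import numpy as np

def position_drift(bias, dt, steps):
    """Double-integrate a constant accelerometer bias (m/s^2).

    A constant bias b produces a velocity error of b*t and a position
    error of 0.5*b*t^2 -- linear growth, then quadratic growth.
    """
    t = np.arange(1, steps + 1) * dt
    vel_error = bias * t               # first integration: velocity
    pos_error = 0.5 * bias * t ** 2    # second integration: position
    return vel_error, pos_error

# A 0.1 m/s^2 bias sampled at 100 Hz for one minute:
vel, pos = position_drift(0.1, dt=0.01, steps=6000)
print(pos[399], pos[-1])  # ~0.8 m after 4 s, ~180 m after 60 s
```

Even this tiny bias puts the estimate nearly a metre off within a few seconds and hundreds of metres off within a minute, which matches the experience described above.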
Trying to use acceleration data as your core positional input, in short, is a horrible idea.
If you're going to do this at all, it would be far safer to do the reverse:
start from known positional data (provided by a camera of some sort, or whatever other means), then use acceleration data to predict the motion between position updates.
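A toy sketch of that hybrid scheme (my own illustration with made-up numbers, not anything from the thread): absolute camera fixes arrive at a low rate and clamp the drift, and the accelerometer only dead-reckons during the short gaps between fixes. For simplicity this resets velocity to zero at each fix; a real system would estimate velocity from successive fixes.

```python
def fused_track(camera_fixes, accel_samples, dt):
    """Dead-reckon with accelerometer samples between absolute camera fixes.

    camera_fixes: dict mapping sample index -> absolute position (1-D here).
    accel_samples: one acceleration reading per IMU sample (m/s^2).
    Inertial drift can only accumulate for the short interval between fixes.
    """
    pos, vel = 0.0, 0.0
    trajectory = []
    for i, a in enumerate(accel_samples):
        if i in camera_fixes:
            pos, vel = camera_fixes[i], 0.0   # absolute update clamps drift
        else:
            vel += a * dt                     # inertial prediction
            pos += vel * dt
        trajectory.append(pos)
    return trajectory

# Stationary user, biased accelerometer (0.1 m/s^2), 1 kHz IMU,
# camera fix every 33 samples (~30 fps): drift stays tiny.
traj = fused_track({0: 0.0, 33: 0.0, 66: 0.0}, [0.1] * 100, dt=0.001)
print(max(abs(p) for p in traj))  # well under a millimetre
```

The same bias that produced metres of error in seconds on its own now never gets more than a fraction of a millimetre away from truth, because the integration window is capped at one camera frame.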
However, how much you'd gain from doing this is questionable, and closely related to the camera's framerate and other issues. The framerate of the camera determines how much time you'd have to spend performing acceleration-based tracking, which in turn figures into how much uncertainty you will have to work with.
The uncertainty for anything beyond trivial amounts of time becomes a major issue, though; if it didn't, there would be a lot of position-tracking applications around already, because accelerometers are everywhere these days.
from KuraIthys in this topic: viewtopic.php?f=20&t=767&p=9060&hilit=integrate#p9060
- geekmaster (Protege)
"Tgaud" wrote:
...
from KuraIthys in this topic : viewtopic.php?f=20&t=767&p=9060&hilit=integrate#p9060
Look on the next page of that same topic, and I explained there why and how such constrained accelerometer-based tracking is possible:
viewtopic.php?f=20&t=767&start=20#p9079
There are also newer "free walking" methods that map out a building, learning where the corridors and doors are from averaged data, and constraining your accelerometer position results to those allowed paths. You could consider it a sort of large-scale, room- or building-sized "gesture recognition" system.
And the important thing is that people are actually DOING this stuff, and it works, so all the explanations about how you cannot make it work do not make it impossible.
- edzieba (Honored Guest)
You can constrain walking movements because human passive-dynamic walking can be modeled well as a pair of pendulums (we naturally like to walk in the most efficient manner). Additionally, for much of each step the sensor is held stationary on the ground, allowing tracking to be reduced to a calculable path between fixed points, with these points strung together using absolute orientation data to estimate the path. It breaks down if you try to run, sidestep, crawl, etc.
Seated head-movement is NOT comparable to walking movement. Without a way to periodically set the sensor to a known position, you CANNOT make assumptions about where the head is in order to correct the massive drift inherent in MEMS accelerometers (and gyros, due to sensor fusion to remove the g vector), and the head does not move in regular and calculable motions.
With existing hardware, positional tracking with purely inertial measurement is simply not viable outside very specific scenarios involving constrained motion. The hardware to do inertial navigation over long timeframes ('long' here being above a few seconds) is not cheap, compact or commercially available (and definitely covered under ITAR).
For INS to be a viable tracking option, new hardware needs to become cheaply available (e.g. on-chip ring-laser gyros), or existing hardware must be used to augment an absolute positioning method.
- geekmaster (Protege)
For typical gamer use, you can make an assumption about where the AVERAGE position is (sitting upright or standing upright). There will be multiple stable positions where the user stops moving his head to get a clear view (no motion blur), such as upright, or leaning back over the hips (a tipping point), or leaning forward with head above knees (another tipping point). The center of the outer range of motion can be used as a recalibration point, and so can all the other stable "not moving" positions when they are detected.
Essentially, you just need gesture (body posture) recognition that can be learned over the range of motion. Each time the head (mostly) stops moving (very) near one of these points, set the velocity to zero and clamp the position to that known location.
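As a rough sketch of that idea (my own illustration; the thresholds and the set of learned rest postures are invented for the example), a zero-velocity update against learned postures could look like:

```python
def zupt(pos, vel, accel_mag, rest_points,
         still_thresh=0.05, snap_radius=0.03):
    """Zero-velocity update against learned rest postures (1-D sketch).

    pos, vel:     current drifting estimates (m, m/s)
    accel_mag:    gravity-compensated acceleration magnitude (m/s^2)
    rest_points:  learned stable head positions (e.g. upright, leaned forward)
    When the head is (nearly) still and close to a known posture, zero the
    velocity and clamp the position to that posture.
    """
    if accel_mag < still_thresh:                      # head has stopped
        nearest = min(rest_points, key=lambda r: abs(pos - r))
        if abs(pos - nearest) < snap_radius:          # near a learned posture
            return nearest, 0.0                       # clamp and zero velocity
    return pos, vel                                   # otherwise keep integrating

# Estimate has drifted 2 cm from the upright (0.0) posture, head still:
print(zupt(0.02, 0.01, 0.01, [0.0, 0.3]))  # -> (0.0, 0.0)
```

Each clamp discards the accumulated double-integration error, so drift only has the interval between two "head still" moments in which to grow.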
Although this is a stretch from the documents I have read, I believe in this idea, and my beliefs carry a lot of power... I share my ideas so that others may use them too.
- Rabbit (Honored Guest)
If you're sitting down and moving about a level using a gamepad, then position drift wouldn't be a major concern, would it? Your position is exactly where you are in the game (as long as your player height is kept OK and centred around standing eye level).
I suspect OP is only talking about accelerometer readings to get it reacting nicely to those small movements like when you lean forward or shift sideways a little. And when I say only, I mean the range of movement, I think it would add a tremendous amount to the sense of immersion if these little shifts in perspective were able to be added in. Definitely worthwhile if we can get something from the existing sensors in the dev kit.
Best example I can think of is the TED talk by Johnny Lee where he does it with a Wii remote (IR tracking): http://www.ted.com/talks/johnny_lee_demos_wii_remote_hacks.html at around 3:30. The difference really jumps out.
- geekmaster (Protege)
"Rabbit" wrote:
... I suspect OP is only talking about accelerometer readings to get it reacting nicely to those small movements like when you lean forward or shift sideways a little. And when I say only, I mean the range of movement, I think it would add a tremendous amount to the sense of immersion if these little shifts in perspective were able to be added in. Definitely worthwhile if we can get something from the existing sensors in the dev kit. ...
That is EXACTLY what I was talking about (in full agreement with the OP). The OP wants to know IF it can be done. I say it CAN be done. Others here say it cannot be done because they tried it and failed. I have provided references and links to others who have solved this problem well enough for our purposes, in a constrained environment. And yet, there are persistent doubters, just like in other threads where this was discussed.
I agree that seated motion is not the same as walking (with a foot-mounted accelerometer). But that constrains the range of motion even more. The skeleton is anchored to the buttocks, which are attached to a chair. That works in our benefit. Even standing is not a problem, because you are limited by your need to maintain balance. We just need to adapt to those limits.
As I mentioned in various posts, I already have some rudimentary constrained positional tracking working. I have some tuning to do, and I need to integrate it into some sort of an app (perhaps the Tuscany demo)... I do not know why people keep disagreeing about the possibility or practicality of things that I have already done, or even things that I already know how to do, for that matter...
- edzieba (Honored Guest)
"geekmaster" wrote:
For typical gamer use, you can make an assumption about where the AVERAGE position is (sitting upright or standing upright). There will be multiple places where the user stops moving his head to get a clear view (no motion blur), such as upright, or leaning back over the hips (a tipping point), or leaning forward with head above knees (another tipping point).
The problem is that accelerometer drift is massive and rapid (see the previously posted Google Tech Talk: over 8 metres of drift within seconds!). You can't wait until you can make a reasonable guess that the user's head is in a certain position; you need to reset to a rigidly known location rapidly and regularly.
You could potentially train users to regularly move their head upright and level and press a button to recalibrate, but that's hardly a user-friendly solution. You could add an automatic outside-in camera tracker or a magnetic tracker that watches for the head to be in a certain location and resets the drift, but at that point you may as well just be using your absolute tracking system instead.
"geekmaster" wrote:
I agree that seated motion is not the same as walking (with a foot-mounted accelerometer). But that constrains the range of motion even more. The skeleton is anchored to the buttocks, which are attached to a chair. That works in our benefit. Even standing is not a problem, because you are limited by your need to maintain balance. We just need to adapt to those limits.
A seated user is not constrained. The range of movement you can perform even without slightly lifting out of the chair is enormous. Movements are not regular or rigidly pathed, are not repeating, and while there is a natural zero position, it has no distinctive characteristics that would let it be distinguished from IMU data alone (unlike the foot-mounted case, where all gyros read zero and the accelerometers read only a stationary g-vector in a known direction), because you cannot keep your head perfectly stationary and do not naturally return your head to centre between movements. You need an additional external input to provide the reset data, and that means either manual resetting or an absolute tracking system (making the inertial system largely redundant).
"geekmaster" wrote:
I have provided references and links to others who have solved this problem well enough for our purposes
With military-grade and/or custom-manufactured hardware.
Yes, free-moving inertial navigation can be done, and has been done for as long as viable ICBMs have been deployed. It is possible. What is not possible is doing so with commercially available MEMS devices at a price affordable to someone outside a government research lab or large university.