Forum Discussion
Blueshock
12 years ago · Honored Guest
Hydra Calibration
Hi all,
Just interested in you guys' experience with implementing the Hydra to work alongside your Rift and how you are currently handling the calibration when you can only see the virtual world.
Currently I have an initial calibration of arms stretched straight forward and then to the sides to get the length of the player's arms and the position of the player in relation to the base station (which I then map to the avatar's dimensions). However, I haven't seen anyone do this on any videos so thought maybe I was missing a trick?
Another thought was to mount the base station underneath the player's seat, kind of like the Holodeck guys, but without the need to wear a helmet...!
Anyway, if anyone is playing with the Rift and Hydra, would be good to talk (I'm currently using Unity).
Cheers
17 Replies
- KuraIthys · Honored Guest
I had some thoughts on the matter, but since I can't get hold of either a Rift OR a Hydra right now, it's a bit difficult to say.
(I have both on order, but the Rift is... Well, you know that if you read these forums, and the hydra is on back-order. At least, for the country I'm in it is...)
I don't know quite enough about the hydra to say with certainty what you need to do with it in terms of calibration, and without one to test it's all speculation.
However, based on reading a small bit about how it functions, this process came to mind:
(I've stated this elsewhere, but I might as well repeat it.)
Ideally you'd have some kind of human-shaped avatar demonstrate the poses while the calibration is in progress, just so people are clear on what to do. But that aside, I came up with the following:
(Whether this works or not is something I cannot verify just yet.)
While holding the controllers in your hands, do the following:
(pull the triggers or something when you think you've got the positions right.)
1. Place the controllers near your bellybutton.
2. Place your hands down by your side
3. Place your hands touching your shoulders.
(4. optionally, depending on what you're doing, place them against the VR display, but this isn't calibration of the controllers so much as giving you an idea of where the HMD is in relation to the controllers)
The reasons for this choice are a combination of things. Firstly, almost everyone has a bellybutton, and can find its approximate location without needing to see it.
On top of this, the bellybutton is located physically close to a human body's centre of mass, so it gives you a good reference location for where the body would be in space.
By placing your hands against your shoulders, you can determine several things. Again, the calibration is aided by the fact that most people can find their shoulders without needing to be able to see them.
The distance between your shoulders, combined with the location of your bellybutton gives a reasonable approximation of how large your torso is. It also tells you how broad your shoulders are.
And given that, if you're holding your body still, arm movements extend out from the shoulder, the shoulder positions combined with the positions taken with your hands down by your sides let you calculate a reasonable approximation of how long a person's arms are.
(On top of that, though not incredibly accurate, the location of the bellybutton also serves as a hint as to where the elbows are)
This seems, in theory, like a pretty useful set of information to have, and all three locations are ones that should be reasonably simple to find without being able to see.
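Under those assumptions, the measurements could be derived roughly like this (a sketch only; the function names and coordinate conventions are hypothetical, assuming each pose yields an (x, y, z) position per controller):

```python
# Sketch: deriving rough body measurements from the three calibration poses
# described above. Positions are hypothetical (x, y, z) tuples reported by
# the Hydra in the base station's coordinate frame.
import math

def dist(a, b):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def body_measurements(belly, left_side, right_side, left_shoulder, right_shoulder):
    """Approximate torso and arm dimensions from recorded controller positions."""
    shoulder_width = dist(left_shoulder, right_shoulder)
    # Torso height: bellybutton to the midpoint between the shoulders.
    shoulder_mid = tuple((l + r) / 2 for l, r in zip(left_shoulder, right_shoulder))
    torso_height = dist(belly, shoulder_mid)
    # Arm length: shoulder to hand while the arms hang down at the sides.
    arm_length = (dist(left_shoulder, left_side) + dist(right_shoulder, right_side)) / 2
    return shoulder_width, torso_height, arm_length
```

These numbers would then be enough to scale an avatar's torso and arms to the player.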
However, without any testing I don't know if there's anything I'm missing. (For instance, do you need to know anything about the location of the base station, or would such a calibration process already provide enough information? It all rather depends on precisely how the tracking system actually works.)
- edzieba · Honored Guest
A bonus for the first to implement "heads, shoulders, knees and toes" as a calibration method!
- drash · Heroic Explorer
KuraIthys, thanks for your thoughts. That's a very interesting way to look at it, to use the Hydra to build up an idea of what the player's body looks like.
I did play with a Hydra-enabled scene in Unity a while back (I forget which one now) that made you move your right hand all the way to the upper left corner of your visible screen (and vice versa for the other hand), and some other similar stuff. That approach to calibration seemed to be screen-oriented, so I'm wondering if anything like that needs to be combined with what you outlined above.
"KuraIthys" wrote:
However, without any testing I don't know if there's anything I'm missing. (For instance, do you need to know anything about the location of the base-station, or would such a calibration process already provide enough information? - It all rather depends on precisely how the tracking system actually works.)
I know that the Hydra stops mapping 1:1 (it gets pretty warped) if you get your hands pretty close to the base station (within a foot), but I admit I haven't updated to use Sixense's Hydra drivers, so I'm wondering if any of that changes from driver to driver -- anyone know?
- KuraIthys · Honored Guest
"edzieba" wrote:
A bonus for the first to implement "heads, shoulders, knees and toes" as a calibration method!
Lol. That's definitely an amusing thought. I think you'd run into a few headaches doing that, but it's certainly easy to remember... XD
- KuraIthys · Honored Guest
I've been digging into the Sixense SDK, and it's... not exactly what I'd call well documented.
(It doesn't provide much in the way of technical specifications or information about how the system works, for instance. It doesn't even mention basic operating principles.)
It does however throw up a few interesting points.
The Hydra is the only commercially produced device so far, but the SDK mentions several features that relate to the original development models and not to the Hydra.
Here are some things the development models (and thus, potential future hardware) could do:
- Up to 4 controllers per base station (the Hydra has 2)
- Up to 4 base stations, with the SDK capable of being extended to allow more. However, base stations have a range of about 10 meters, and must be using different frequencies to avoid interference. (The hydra is on a fixed, non-adjustable frequency, so multiple units will interfere.)
- RGB control of the base station lighting. about 64 shades were possible with the original dev kit models. (Again, the hydra cannot do this - it glows in a single colour only)
- Wireless controllers. (Again, the hydra omits these).
- Rumble feature. (Another omission in the hydra...)
Looks like a lot of features originally designed into the Sixense system were removed from the Hydra... to cut costs, I guess.
But moving back to the actual topic, the SDK mentions several things of note in relation to calibration of a hydra.
First, there is an optional correction function built into the hardware, with adjustable parameters.
This appears to be necessary because the behaviour at a distance from the base station is different from that at close range.
The distances at which it switches models, and the transition zone between them, are adjustable in the SDK.
I don't think this is generally something to worry about unless you get peculiar errors though.
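Purely as illustration (the SDK's actual correction math is undocumented, so this is a guess at the general shape, not the real algorithm), a near/far model with an adjustable transition zone might blend like this:

```python
# Illustrative only: how a near-range and a far-range position estimate
# might be blended across an adjustable transition zone. The limits and
# the linear blend are assumptions, not the SDK's actual parameters.
def blended_position(near_est, far_est, distance, near_limit=0.5, far_limit=1.0):
    """Linearly blend two (x, y, z) estimates inside the transition zone."""
    if distance <= near_limit:
        return near_est                      # fully in the near-range model
    if distance >= far_limit:
        return far_est                       # fully in the far-range model
    t = (distance - near_limit) / (far_limit - near_limit)  # 0..1 across zone
    return tuple(n + (f - n) * t for n, f in zip(near_est, far_est))
```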
Second, the SDK includes an explicit calibration function, but notes that it is deprecated, and unnecessary on units where the base station includes stands in which you can place the controllers.
(Which explains why you don't see people bothering with it.)
However, assuming the base station automatic calibration fails, here is what you would need to do:
(keep in mind that the calibration routine turns on automatically when you place the controllers in their dock on the base station, and automatically performs both steps. Calibration remains valid as long as the system isn't turned off at some point along the way.)
Point both controllers in the direction of the base station, and pull the triggers individually in a specific order.
(or some other button). The reason for doing the controllers one at a time is that the Hydra cannot tell left and right apart. You need to inform the system somehow of which is the left-hand controller and which is the right.
You do not have to be particularly accurate with your pointing. You only need to ensure that you are pointing more towards the base station than away from it.
The reason for the calibration step is also worth mentioning, because it's not quite what you might expect.
Basically, whatever is being used to determine the controllers' positions is symmetrical: the system has two potential locations for a controller at all times, and cannot actually determine on its own which is correct.
If you moved a controller below the axis of the base station without taking this into account, the entire coordinate system would be turned upside down.
The calibration routine exploits the fact that if the controller is pointing towards the base station, one of these two reported poses will be pointing towards the base station, and one will be pointing away from it.
The one pointing towards it is the correct position, while the other is not.
Once calibrated, the device infers which of the two positions is correct based on how probable the change is. (Because the two reported positions are directly opposed relative to the base station, it is highly improbable that you would move from one position to the other in the time between position updates; thus, whichever position is closer can be assumed to be the correct one. This is also why the system would need recalibration if the power to it was turned off at any point while it was in use.)
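The two inference steps described above (pointing at the base station during calibration, then proximity to the last known position afterwards) could be sketched like this -- not the SDK's code, just the logic as described:

```python
# Sketch of the two disambiguation steps described above. The tracker
# reports two mirror-image candidate positions at all times; names and
# vector representations here are hypothetical.

def initial_pick(candidate_a, candidate_b, forward_a, forward_b):
    """At calibration time the controller points at the base station (the
    origin), so pick the candidate whose forward vector faces the origin."""
    def toward_origin(pos, fwd):
        to_origin = tuple(-p for p in pos)                  # vector pos -> origin
        return sum(f, ) if False else sum(f * t for f, t in zip(fwd, to_origin))
    a_score = toward_origin(candidate_a, forward_a)
    b_score = toward_origin(candidate_b, forward_b)
    return candidate_a if a_score > b_score else candidate_b

def runtime_pick(candidate_a, candidate_b, last_known):
    """After calibration: the correct candidate is whichever lies closer to
    the last known position, since jumping to the mirror image between two
    position updates is wildly improbable."""
    da = sum((a - l) ** 2 for a, l in zip(candidate_a, last_known))
    db = sum((b - l) ** 2 for b, l in zip(candidate_b, last_known))
    return candidate_a if da <= db else candidate_b
```

This also makes clear why a power interruption forces recalibration: with no trusted last known position, `runtime_pick` has nothing to anchor on.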
Basically, that explains about as much as there is to know about calibrating the hydra. (and why it actually isn't a required step in most games that use it.)
I'm guessing more elaborate calibration might help the accuracy, but it would also be pretty unwieldy. (Mapping the relation between physical position and virtual position would probably improve accuracy a lot, but not only is that difficult to measure, it'd probably change every time the base station is moved.)
- drash · Heroic Explorer
"KuraIthys" wrote:
and pull the triggers individually in a specific order.
(or some other button). The reason for doing the controllers one at a time is that the Hydra cannot tell left and right apart. You need to inform the system somehow of which is the left-hand controller and which is the right.
Oh wow, that's really great to know -- I've been reliably putting my Hydra controllers into the right spots on the base station so far, but if you do put them back backwards, software should automatically figure out that you've switched them!
I've been working with the Hydra plugin for Unity for a while now. While it does not automatically switch hands if you switch controllers (EDIT: Apparently the new official Sixense Hydra plugin in the Unity Asset Store does detect which hand is which), it does seem to have a pretty reliable coordinate system built around the base station being at the origin. The Hydra reports its absolute positioning (relative to the base station) using some unknown unit of measure, so based on suggestions in this thread I now have a simple calibration routine at the start of my current project that goes like this:
-------------
** Hydra Calibration **
Close one eye during calibration, starting now. (because I'm not mapping this text onto a 3D plane yet)
Stay seated in your intended playing position.
Ensure the Hydra base station is straight ahead.
Press button 1 on either hand to continue.
** Step 1 **
Hold both arms straight out in front of you,
move your arms together with thumbs touching,
slightly raise your hands to eye-level,
and then press button 1 on either hand.
** Step 2 **
Bring your elbows to your sides,
touch your left shoulder with your left hand,
touch your right shoulder with your right hand,
and then press button 1 on either hand.
** Step 3 **
Hold both arms straight out to the sides,
then press button 1 on either hand.
-------------
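A sketch of how the positions recorded at steps like these could become the per-axis sensitivity values mentioned above (hypothetical names only; assumes raw positions come back as (x, y, z) tuples in Hydra units and that the player's arm span is roughly known):

```python
# Hypothetical sketch: turning positions recorded during the calibration
# steps into per-axis scale factors that map raw Hydra units to world units.
# This is NOT drash's actual code, just the arithmetic the text describes.
def sensitivities(front_left, front_right, side_left, side_right, arm_span_m=1.7):
    # Step 3 (arms straight out to the sides) spans roughly the player's
    # arm span along the x axis.
    raw_span_x = abs(side_right[0] - side_left[0])
    sens_x = arm_span_m / raw_span_x
    # Step 1 (arms straight forward) reaches roughly half an arm span
    # along the z axis.
    raw_depth_z = (abs(front_left[2]) + abs(front_right[2])) / 2
    sens_z = (arm_span_m / 2) / raw_depth_z
    # No vertical pose yet, so average x and z for y, as described above.
    sens_y = (sens_x + sens_z) / 2
    return sens_x, sens_y, sens_z
```

A hands-straight-up step would replace that `sens_y` average with a real y-axis reading, which is exactly the trade-off discussed below.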
At each step I record the positions that the Hydra controllers report, and at the end I use them to figure out the appropriate "sensitivity" values to stick into the Hydra controller plugin (to convert from Hydra units to my Unity units). I thought I was going to have to calculate offsets so that the virtual hands show up where they would in real life, but it's already close enough that I don't have to do anything else after setting the sensitivity values. As a result, I'm considering ditching Step 2 (touch shoulders) and replacing it with a step that has the user stick their hands straight up into the air to get an accurate y-axis sensitivity reading, but for now I'm just averaging the x-axis and z-axis sensitivity readings to get that.
- KuraIthys · Honored Guest
That seems like a reasonable process.
I would however make one suggestion. Because I'm left-handed, I tend to pick up on things like this more than average, but even ignoring that: the Hydra (in its normal intended use) is a two-handed controller, yet the controller you hold in each hand appears to be identical.
Ergonomically then, if you're using the buttons, they'll be on opposite sides of each hand.
Since the controller has buttons 1 and 3 on the left, and 2 and 4 on the right, the following issue comes up when using the controllers:
Buttons 1 and 3 are easier to reach when using a controller right-handed. Buttons 2 and 4 are easier to reach left-handed.
A reasonable conclusion then, is that if both hands perform the same kind of function for the same button, between the left and right hand you should swap the functions of button 1 and 2, and 3 and 4.
(so for left-hand use, button 2 takes on the function of button 1, and button 4 takes on the function of button 3).
This is more natural than forcing an awkward move with your left hand.
Also, for calibration purposes, it has the benefit that you can infer with moderate reliability from which button was pressed, which hand a controller is in.
(Buttons 2 and 4 being easier to use left-handed, and 1 and 3 easier to use right-handed.)
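That swap, and the handedness hint, could be sketched as (hypothetical function names; the button numbering is as described above):

```python
# Sketch of the suggested left-hand remap: on the left controller,
# button 2 takes on the function of button 1 (and vice versa), and
# button 4 takes on the function of button 3 (and vice versa).
LEFT_HAND_REMAP = {1: 2, 2: 1, 3: 4, 4: 3}

def logical_button(physical_button, is_left_hand):
    """Map a physical button press to its logical function for this hand."""
    return LEFT_HAND_REMAP[physical_button] if is_left_hand else physical_button

def guess_hand(physical_button):
    """During calibration, infer the hand from which button was pressed:
    buttons 2 and 4 are easier to reach left-handed, 1 and 3 right-handed."""
    return "left" if physical_button in (2, 4) else "right"
```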
Just something to think about... XD
- drash · Heroic Explorer
GREAT suggestions. I will have to start thinking about left-handedness for everything I do from now on.
- Blueshock · Honored Guest
Great stuff and very useful info. I feel a little stupid now for working with the raw data and only then finding the Sixense asset in the Unity Asset Store... oh well, it was a good learning experience at least!
- jjoudrey · Honored Guest
I know this solution won't work for everyone, but here's how I "calibrate" my Hydra input.
I also have a Kinect unit. When the app launches, the calibration routine waits for the user's hands to be more than 20 cm apart (which they almost always are). It then calculates the midpoint between the Kinect hands and the Hydra hands, and uses this to determine how far away (and at what level) the Hydra base station is compared to the Kinect. Then it uses the vector from the right hand to the left in both the Kinect and Hydra to determine their relative orientations.
This allows me to place a virtual Hydra base station in my game world such that all the outputs of this base station line up with the real player position.
This is probably a bit much for just calibrating a hydra, but for those of us building our ultimate VR experience, it does allow for a quick & painless calibration routine.
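A sketch of that alignment, reduced to a yaw-about-y rotation plus translation for brevity (hypothetical names; not jjoudrey's actual code, and it assumes both systems report hands as (x, y, z) tuples):

```python
# Sketch: align the Hydra's frame to the Kinect's using one simultaneous
# reading of both hands in each system, as described above. Yaw is found
# from the hand-to-hand vectors; translation from the midpoints.
import math

def align(kinect_l, kinect_r, hydra_l, hydra_r):
    mid_k = tuple((a + b) / 2 for a, b in zip(kinect_l, kinect_r))
    mid_h = tuple((a + b) / 2 for a, b in zip(hydra_l, hydra_r))
    # Yaw difference between the right-to-left hand vectors, about the
    # vertical (y) axis, using only the horizontal (x, z) components.
    vk = (kinect_l[0] - kinect_r[0], kinect_l[2] - kinect_r[2])
    vh = (hydra_l[0] - hydra_r[0], hydra_l[2] - hydra_r[2])
    yaw = math.atan2(vk[1], vk[0]) - math.atan2(vh[1], vh[0])
    # Rotate the Hydra midpoint by yaw, then translate so midpoints coincide.
    rx = mid_h[0] * math.cos(yaw) - mid_h[2] * math.sin(yaw)
    rz = mid_h[0] * math.sin(yaw) + mid_h[2] * math.cos(yaw)
    offset = (mid_k[0] - rx, mid_k[1] - mid_h[1], mid_k[2] - rz)
    return yaw, offset
```

Applying `yaw` and `offset` to every subsequent Hydra reading then places the virtual base station where the real one sits relative to the player.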
This method will quickly highlight the difference between the Hydra hand position and the Kinect hand position as you move around. There are four things I'm considering now, and I haven't made up my mind which to pursue.
1. Recalibrate every frame; it's computationally cheap after all, but the animation could jump if the calibration changes too much in a single frame.
2. Keep the initial calibration permanently; it shouldn't degrade over time, and the animation would be consistent, but inaccurate in some positions.
3. Continue recalibrating but blend the new values in, causing a smooth "lag" to the calibration as you move but at least it'd be smooth and accurate after it settles.
4. I know this is the right option, but I don't know if I have time now: move the Kinect hand and the Hydra throughout the potential play space and sample the error at many positions and orientations, to calculate a calibration field that might give accurate results in all cases (depending on the nature of the Hydra's inaccuracy).
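Option 3 above amounts to a per-frame lerp of the calibration toward the freshest measurement, something like this sketch (hypothetical; the blend rate is an assumed tuning parameter):

```python
# Sketch of option 3: blend each fresh calibration toward the current one,
# so corrections settle smoothly instead of jumping within a single frame.
def blend_calibration(current, fresh, rate=0.05):
    """Move `current` a fraction `rate` of the way toward `fresh` per frame.
    Small rates lag more but jitter less; rate=1.0 is option 1 (instant),
    rate=0.0 is option 2 (frozen initial calibration)."""
    return tuple(c + (f - c) * rate for c, f in zip(current, fresh))
```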