Forum Discussion
Rock2000
12 years ago, Honored Guest
Newbie questions
I'm playing around with the Oculus SDK and had some questions. Is there any documentation besides the short SDK_Overview doc where I can find more info, instead of bothering people with basic questions?
1) What units are used for the positional values returned from the Oculus, like the pose position? Meters?
2) With the DK1, how are the values of the pose Position determined, since it doesn't have the DK2 camera. I kind of thought the Position was a placeholder that would contain zeros for DK1, but would return the real values from DK2. But they are not zeros for DK1.
3) Do the eye Position values include the IPD offsets, or is IPD a separate positioning that I need to apply manually?
Thanks
Rock
6 Replies
- cybereality (Grand Champion): 1) Meters, I believe.
2) Position in DK1 is estimated using a head/neck model.
3) The eye positions should be handled by the SDK; see the samples for details.
- Rock2000 (Honored Guest): But in the samples, it looks like the eye's ViewAdjust value is applied by the sample's code and not hidden in the SDK. Am I reading it wrong?
I'm actually doing the rendering in a separate application, and I'm just trying to grab that rendered frame and render it to the Oculus as a single OpenGL texture across the whole screen. Something isn't looking right in stereo, and I'm not sure if the SDK is doing something with my textures (besides the distortion) or if I have an error in my rendering app.
Thanks
- Ybalrid (Expert Protege): You have a "position" vector in the pose from the SDK (alongside the orientation), and the ViewAdjust vector, which apparently takes care of applying the correct translation to match the user's IPD.
I ran into strange behaviour with that ViewAdjust vector: it was reversed between the cameras. The left-eye camera ended up where the right-eye camera should be, and vice versa. (I was querying the correct eye; I double-checked.)
So I take the opposite of the ViewAdjust vector. Maybe it represents the translation of the world needed to get the correct point of view (I know I should check the documentation, but the only reliable documentation I found was "read the SDK source code", so I'm a little too lazy to do it ^^). In that case negating it is the normal thing to do, since I move a "camera" object in my engine. I don't really know how it works with plain OpenGL. (Spatial relativity? :mrgreen:)
I don't know if your problem is related to that, but in my case it was. Check the X component of the ViewAdjust vector, just to see. The axis points to the right, so normally the right camera has a positive X value and the left a negative one (speaking of the spatial position of the camera).
- Rock2000 (Honored Guest): Interesting. Yeah, the ViewAdjust does seem backwards from what I expected, but maybe it's as you say.
My problem is that if I try to use any ViewAdjust in my rendering app, then on the Oculus the two images are not lined up correctly. If I use a 0 view adjust and render each frame, then the images align correctly. I'm having a hard time determining what the problem is.
Thanks
- jherico (Adventurer):
"Ybalrid" wrote:
I ran into strange behaviour with that ViewAdjust vector: it was reversed between the cameras. The left-eye camera ended up where the right-eye camera should be, and vice versa. (I was querying the correct eye; I double-checked.)
It's not that they're reversed so much as that you have to understand which coordinate system they're in. Some of the values provided by the SDK are in 'camera coordinates', meaning they represent the actual position of the user; the pose information is like this, making it suitable for applying to a 'player position' matrix. The ViewAdjust members are in 'view coordinates', making them suitable for applying to a modelview matrix. 'View coordinates' are the inverse of 'camera coordinates', which for a translation vector means being multiplied by -1.
- Ybalrid (Expert Protege):
"jherico" wrote:
"Ybalrid" wrote:
I ran into strange behaviour with that ViewAdjust vector: it was reversed between the cameras. The left-eye camera ended up where the right-eye camera should be, and vice versa. (I was querying the correct eye; I double-checked.)
It's not that they're reversed so much as that you have to understand which coordinate system they're in. Some of the values provided by the SDK are in 'camera coordinates', meaning they represent the actual position of the user; the pose information is like this, making it suitable for applying to a 'player position' matrix. The ViewAdjust members are in 'view coordinates', making them suitable for applying to a modelview matrix. 'View coordinates' are the inverse of 'camera coordinates', which for a translation vector means being multiplied by -1.
I've done that. Now I understand why I had that result, thanks :)