Forum Discussion
seac02developer · Honored Guest
12 years ago
head movement issue
I'm trying to implement head tracking in my Oculus-based OpenSG (an OpenGL graphics engine) application.
I noticed that head movements are too fast when I'm looking forward, and tracking stops working once I turn my head more than 60-70 degrees away from the forward position.
What I'm doing is setting my camera FOV as suggested by the SDK and creating two cameras at -OVR_DEFAULT_IPD/2 and OVR_DEFAULT_IPD/2 distance along the x axis.
This is how I set up my cameras:
//other boring stuff
//setting camera beacon
leftM.setTranslate(Vec3f(-(OVR_DEFAULT_IPD)/2,0,20));
rightM.setTranslate(Vec3f((OVR_DEFAULT_IPD)/2,0,20));
//code
//Setting cameras
beginEditCP(leftCamera);
leftCamera->setBeacon(leftCamBeacon);
leftCamera->setFov(OculusVR->getLeftEyeFov());
leftCamera->setNear(0.3);
leftCamera->setFar(100);
endEditCP(leftCamera);
//same stuff for the other one
beginEditCP(rightCamera);
rightCamera->setBeacon(rightCamBeacon);
rightCamera->setFov(OculusVR->getRightEyeFov());
rightCamera->setNear(0.3);
rightCamera->setFar(100);
endEditCP(rightCamera);
where the FOV is calculated as:
float Oculus::getLeftEyeFov()
{
    return (atan(HMDDesc.DefaultEyeFov[0].UpTan) + atan(HMDDesc.DefaultEyeFov[0].DownTan));
    //return (EyeRenderDesc[0].Desc.Fov);
}

float Oculus::getRightEyeFov()
{
    return (atan(HMDDesc.DefaultEyeFov[1].UpTan) + atan(HMDDesc.DefaultEyeFov[1].DownTan));
    //return (EyeRenderDesc[1].Desc.Fov);
}
and this is my display loop:
ovrFrameTiming hmdFrameTiming = ovrHmd_BeginFrame(OculusVR->HMD, 0);
Posef movePose = ovrHmd_GetSensorState(OculusVR->HMD, hmdFrameTiming.ScanoutMidpointSeconds).Predicted.Pose;
for (int eyeIndex = 0; eyeIndex < ovrEye_Count; eyeIndex++)
{
    OculusVR->eye = OculusVR->HMDDesc.EyeRenderOrder[eyeIndex];
    OculusVR->eyeRenderPose[OculusVR->eye] = ovrHmd_BeginEyeRender(OculusVR->HMD, OculusVR->eye);

    OVR::Matrix4f l_ProjectionMatrix = ovrMatrix4f_Projection(OculusVR->EyeRenderDesc[OculusVR->eye].Desc.Fov, 0.3f, 100.0f, true);
    OVR::Quatf l_Orientation = OVR::Quatf(OculusVR->eyeRenderPose[OculusVR->eye].Orientation);
    OVR::Matrix4f l_ModelViewMatrix = OVR::Matrix4f(l_Orientation.Inverted());
    OVR::Quatf culo = movePose.Orientation;

    // Pass matrices on to OpenGL...
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMultMatrixf(&(l_ProjectionMatrix.Transposed().M[0][0]));
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // Translate for the specific eye based on IPD...
    glTranslatef(OculusVR->EyeRenderDesc[OculusVR->eye].ViewAdjust.x, OculusVR->EyeRenderDesc[OculusVR->eye].ViewAdjust.y, OculusVR->EyeRenderDesc[OculusVR->eye].ViewAdjust.z);
    glMultMatrixf(&(l_ModelViewMatrix.Transposed().M[0][0]));

    OSG::Quaternion quat = OVRtoOSGquat(culo);
    quat.invert();
    beginEditCP(scene);
    m.setIdentity();
    m.setRotate(quat);
    transCore->setMatrix(m);
    endEditCP(scene);

    ovrHmd_EndEyeRender(OculusVR->HMD, OculusVR->eye, OculusVR->eyeRenderPose[OculusVR->eye], &EyeTexture[OculusVR->eye].Texture);
}
glDisable(GL_CULL_FACE);
glDisable(GL_DEPTH_TEST);
ovrHmd_EndFrame(OculusVR->HMD);
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
glUseProgramTROIA(0);
}
where transCore is the Transformation node placed at the root of the scenegraph.
Where is my mistake?
2 Replies
- jherico · Adventurer
Posef movePose = ovrHmd_GetSensorState(OculusVR->HMD, hmdFrameTiming.ScanoutMidpointSeconds).Predicted.Pose;
...
OculusVR->eyeRenderPose[OculusVR->eye] = ovrHmd_BeginEyeRender(OculusVR->HMD, OculusVR->eye);
You only need to apply one of these poses, not both. If you dig into the ovrHmd_BeginEyeRender function you'll see that it eventually calls ovrHmd_GetSensorState() and returns the predicted pose from there, so calling it yourself explicitly is unnecessary. By applying both transformations you're basically doubling the speed at which you rotate.
This is actually news to me, since I had completely overlooked the return value of ovrHmd_BeginEyeRender, and I'll need to update my examples. It's a shame that OVR doesn't provide any explicit OpenGL guidance.

- seac02developer · Honored Guest

Thank you jherico. Even though I forgot to delete one of the two lines, in the end I used just one of the two tracked poses and still had this problem. Basically, what I'm doing is taking the inverse of the quaternion OculusVR->eyeRenderPose[OculusVR->eye].Orientation (or movePose.Orientation) and applying that inverse to the transformation node at the top of my scenegraph. What is strange is that all parameters (vertical FOV and IPD) are taken directly from the SDK.
Where can I find your examples?