Forum Discussion
ckoeber
12 years ago · Honored Guest
Reading Sensor data with GLM and OpenGL ...
Hello,
So I have my application ported to the Rift using the SDK and I can read sensor data fine.
It's calculating the sensor data and converting it over to GLM that is giving me problems.
I looked at some examples, but I am not clear on how to incorporate the position and orientation from the device into the world coordinates of where the person is.
Thank you for your time.
Here is my current wacky code:
ovrEyeType l_Eye = (*MainRiftDeviceDesc).EyeRenderOrder[l_EyeIndex];
ovrPosef l_EyePose = ovrHmd_BeginEyeRender((*MainRiftDevice), l_Eye);
glViewport(MainEyeRenderDesc[l_Eye].Desc.RenderViewport.Pos.x,
    MainEyeRenderDesc[l_Eye].Desc.RenderViewport.Pos.y,
    MainEyeRenderDesc[l_Eye].Desc.RenderViewport.Size.w,
    MainEyeRenderDesc[l_Eye].Desc.RenderViewport.Size.h);
/*
How can I convert this to a glm::mat4???
*/
OVR::Matrix4f l_ProjectionMatrix = ovrMatrix4f_Projection(
    MainEyeRenderDesc[l_Eye].Desc.Fov, 0.3f, 10000.0f, true);
/*
End Question.
*/
glm::quat CurrentOrientation = glm::quat((l_EyePose.Orientation.x * -1.0f),
    (l_EyePose.Orientation.y * -1.0f), (l_EyePose.Orientation.z * -1.0f), l_EyePose.Orientation.w);
glm::vec3 CurrentEulerAngles = glm::eulerAngles(CurrentOrientation);
CurrentEulerAngles.x += CurrentCameraViewingSettings.Pitch;
CurrentEulerAngles.y += CurrentCameraViewingSettings.Yaw;
CurrentEulerAngles.z += CurrentCameraViewingSettings.Roll;
DirectionOfWhereCameraIsFacing = glm::normalize(CurrentEulerAngles);
PositionOfEyesOfPerson += CameraPositionDelta;
CenterOfWhatIsBeingLookedAt = PositionOfEyesOfPerson + DirectionOfWhereCameraIsFacing * 1.0f;
/*
This is to account for the user's mouse
*/
CurrentCameraViewingSettings.Yaw *= 0.5f;
CurrentCameraViewingSettings.Pitch *= 0.5f;
CameraPositionDelta = CameraPositionDelta * 0.8f;
View = glm::lookAt(PositionOfEyesOfPerson, CenterOfWhatIsBeingLookedAt, DirectionOfUpForPerson);
Model = glm::mat4(1.0f);
ModelViewProjectionMatrix = Projection * View * Model;
ProjectionViewMatrix = Projection * View;
8 Replies
- cybereality (Grand Champion): You should be able to access the underlying data structure to convert it to the format you need.
For example: l_ProjectionMatrix.M[0][0], l_ProjectionMatrix.M[0][1], ...
I found some code on StackOverflow that looks like it should allow you to create a glm::mat4 from an array:
float aaa[16] = {
1, 2, 3, 4,
5, 6, 7, 8,
9, 10, 11, 12,
13, 14, 15, 16
};
glm::mat4 bbb;
memcpy( glm::value_ptr( bbb ), aaa, sizeof( aaa ) );
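One thing to watch: OVR::Matrix4f is row-major while glm::mat4 is column-major, so a straight memcpy like that will hand GLM the transpose of the Rift matrix. A small helper along these lines (an untested sketch; ToGlm is just a placeholder name) takes care of it:
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp> // glm::make_mat4

// Untested sketch: read the 16 floats, then transpose to go from
// OVR's row-major layout to GLM's column-major layout.
glm::mat4 ToGlm(const OVR::Matrix4f & m)
{
    return glm::transpose(glm::make_mat4(&m.M[0][0]));
}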
Hope that helps.
- ckoeber (Honored Guest): Thanks, I'll give that a shot and report back.
- jherico (Adventurer): There is code in my library to convert the primary Oculus C API math types to their GLM equivalents.
Specifically:
static inline glm::mat4 fromOvr(const ovrMatrix4f & om) {
    return glm::transpose(glm::make_mat4(&om.M[0][0]));
}
static inline glm::vec3 fromOvr(const ovrVector3f & ov) {
    return glm::make_vec3(&ov.x);
}
static inline glm::uvec2 fromOvr(const ovrSizei & ov) {
    return glm::uvec2(ov.w, ov.h);
}
static inline glm::quat fromOvr(const ovrQuatf & oq) {
    return glm::make_quat(&oq.x);
}
static inline glm::mat4 fromOvr(const ovrPosef & op) {
    return glm::mat4_cast(fromOvr(op.Orientation)) * glm::translate(glm::mat4(), fromOvr(op.Position));
}
- ckoeber (Honored Guest): Thanks for the help. I guess my main problem is really theoretical; I am not exactly sure how to alter my working code for basic walkthroughs to incorporate the Rift.
Here is my working code that DOESN'T use the Rift.
I want to maintain the control that a person has with the mouse over the camera, so I am not sure how to incorporate the Rift's sensor data on top of that.
The CurrentCameraViewingSettings.Yaw and Pitch values are altered when the mouse is moved by the user and then slowly drop to zero.
I'd love to incorporate the position data as well.
I guess math isn't my strong suit...
void GlMaintenance::FastUpdateCamera() {
    DirectionOfWhereCameraIsFacing = glm::normalize(CenterOfWhatIsBeingLookedAt - PositionOfEyesOfPerson);
    switch (CurrentCameraMode) {
    case (ModeOfCamera::ORTHOGONAL):
        Projection = glm::ortho(CurrentOrthoParameters.LeftPlane, CurrentOrthoParameters.RightPlane,
            CurrentOrthoParameters.BottomPlane, CurrentOrthoParameters.TopPlane);
        break;
    case (ModeOfCamera::PERSPECTIVE):
    default:
        CameraAxis = glm::cross(DirectionOfWhereCameraIsFacing, DirectionOfUpForPerson);
        CameraQuatPitch = glm::angleAxis(CurrentCameraViewingSettings.Pitch, CameraAxis);
        CameraQuatYaw = glm::angleAxis(CurrentCameraViewingSettings.Yaw, DirectionOfUpForPerson);
        CameraQuatRoll = glm::angleAxis(CurrentCameraViewingSettings.Roll, CameraAxis);
        CameraQuatBothPitchAndYaw = glm::cross(CameraQuatPitch, CameraQuatYaw);
        CameraQuatBothPitchAndYaw = glm::normalize(CameraQuatBothPitchAndYaw);
        DirectionOfWhereCameraIsFacing = glm::rotate(CameraQuatBothPitchAndYaw, DirectionOfWhereCameraIsFacing);
        PositionOfEyesOfPerson += CameraPositionDelta;
        CenterOfWhatIsBeingLookedAt = PositionOfEyesOfPerson + DirectionOfWhereCameraIsFacing * 1.0f;
        // Damp the mouse-driven angles and the position delta each frame.
        if (CameraProcessingThreadStarted == true) {
            CurrentCameraViewingSettings.Yaw *= 0.7999f;
            CurrentCameraViewingSettings.Pitch *= 0.7999f;
            CameraPositionDelta = CameraPositionDelta * 0.90f;
        }
        else {
            CurrentCameraViewingSettings.Yaw *= 0.5f;
            CurrentCameraViewingSettings.Pitch *= 0.5f;
            CameraPositionDelta = CameraPositionDelta * 0.8f;
        }
        break;
    }
    View = glm::lookAt(PositionOfEyesOfPerson, CenterOfWhatIsBeingLookedAt, DirectionOfUpForPerson);
    ModelViewProjectionMatrix = Projection * View * Model;
    ProjectionViewMatrix = Projection * View;
}
- ckoeber (Honored Guest): OK, I finally have position and orientation working with the Rift. I will ask a separate question on quality, but at least the basics are working!
Thank you for all of your help, folks.
Much appreciated.
- DoZo1971 (Explorer): Cameras are hard.
My "big" insight was to separate concerns. Don't try to work on the ModelView matrix too soon directly. I have some "leading" member variables in my classes that are updated first. For the Oculus camera that would be the <yaw, pitch, roll> vector and the position. Those variables are updated via methods MoveYaw(), SetYaw(), MoveFront(), MoveBack(), etc. Then at the end the ModelView matrix is created based on these and passed on to OpenGL. In your case, when you want both the mouse and the sensor to influence the orientation you can not avoid saving the "user" <yaw, pitch, roll> vector separately (otherwise this information will get lost after the Oculus orientation is incorporated). Then I would guess that you only have to multiply the two (the ModelView matrix from the Rift and the user one) at the end. Before working on your camera architecture you could start with your basic (no Oculus) code and just multiply the modelview matrix from that with the modelview matrix deducted from the Oculus.
By the way: in your code I see a combined ModelViewProjectionMatrix. Why is that? The Projection and the ModelView matrix are treated as separate entities by OpenGL.
Thanks,
Daniel
- ckoeber (Honored Guest):
"DoZo1971" wrote:
Cameras are hard. My "big" insight was to separate concerns. [...]
Thank you. What math did you use to get the direction vector from the orientation quaternion?
I want to compare with mine.
I will also post updated code shortly so that folks can benefit while incorporating your suggestions.
- DoZo1971 (Explorer): Well, I fill the yaw, pitch, and roll directly from the sensor:
void
c_OculusRift::GetYawPitchRoll(float* p_Yaw, float* p_Pitch, float* p_Roll)
{
    if (m_Hmd)
    {
        OVR::Quatf l_Orientation = OVR::Quatf(m_EyePose.Orientation);
        l_Orientation.GetEulerAngles<OVR::Axis_Y, OVR::Axis_X, OVR::Axis_Z>(p_Yaw, p_Pitch, p_Roll);
    }
    else
    {
        *p_Yaw = 0.0f;
        *p_Pitch = 0.0f;
        *p_Roll = 0.0f;
    }
}
The caller then feeds the result to the camera (note the radians-to-degrees conversion and the negated roll):
l_CurrentCamera->SetYawPitchRoll(l_Yaw*MY_RADTODEG, l_Pitch*MY_RADTODEG, -l_Roll*MY_RADTODEG);
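For the GLM side of the thread, the same extraction might look roughly like this (a sketch; glm::eulerAngles returns radians and uses a different decomposition order than OVR's GetEulerAngles<OVR::Axis_Y, OVR::Axis_X, OVR::Axis_Z>, so the angles won't match one-for-one):
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp> // glm::quat, glm::eulerAngles

// Sketch only: extract Euler angles from an orientation quaternion
// with GLM; glm::eulerAngles returns (pitch, yaw, roll) in radians.
void GetYawPitchRollGlm(const glm::quat & q, float* p_Yaw, float* p_Pitch, float* p_Roll)
{
    glm::vec3 l_Angles = glm::eulerAngles(q);
    *p_Pitch = l_Angles.x;
    *p_Yaw   = l_Angles.y;
    *p_Roll  = l_Angles.z;
}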
Inside l_CurrentCamera, whenever the yaw, pitch, and/or roll is changed (or the position for that matter) I update the ModelView matrix (note I don't use GLM, but my own matrix class):
void
c_CameraQuake::CalculateModelView(void)
{
    c_Matrix4f l_Matrix; // Auto fill with Identity...
    l_Matrix.Rotate(+m_Roll.GetValue(), 0.0f, 0.0f, 1.0f);
    l_Matrix.Rotate(-m_Pitch.GetValue(), 1.0f, 0.0f, 0.0f);
    l_Matrix.Rotate(-m_Yaw.GetValue(), 0.0f, 1.0f, 0.0f);
    if (GetOrientationModeModel()->GetChoice() == e_Zup) l_Matrix.Rotate(-90.0f, 1.0f, 0.0f, 0.0f);
    l_Matrix.Translate(-GetPositionModel()->GetX(), -GetPositionModel()->GetY(), -GetPositionModel()->GetZ());
    GetModelViewModel()->SetData(l_Matrix);
}
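The GLM equivalent of that construction would look roughly like this (a sketch with hypothetical names; recent GLM versions expect radians, hence glm::radians() around the degree values used above):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::rotate, glm::translate

// Sketch only: same rotation order and signs as CalculateModelView
// above, minus the Z-up special case.
glm::mat4 CalculateModelViewGlm(float p_Yaw, float p_Pitch, float p_Roll, const glm::vec3 & p_Position)
{
    glm::mat4 l_Matrix(1.0f); // identity
    l_Matrix = glm::rotate(l_Matrix, glm::radians(+p_Roll),  glm::vec3(0.0f, 0.0f, 1.0f));
    l_Matrix = glm::rotate(l_Matrix, glm::radians(-p_Pitch), glm::vec3(1.0f, 0.0f, 0.0f));
    l_Matrix = glm::rotate(l_Matrix, glm::radians(-p_Yaw),   glm::vec3(0.0f, 1.0f, 0.0f));
    l_Matrix = glm::translate(l_Matrix, -p_Position);
    return l_Matrix;
}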
Then, since the ModelView matrix is always correct I can move forward (for instance) in the direction of what you call the direction vector and just update the position:
void
c_CameraCartesian::MoveFront(float p_Distance)
{
    c_Matrix3f l_Inverse = GetModelViewModel()->Transpose().ToMatrix3f();
    m_Position.AddValue(l_Inverse * c_Vector3f(0.0f, 0.0f, -p_Distance));
}
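In GLM the same trick reads roughly like this (a sketch; the upper-left 3x3 of a rigid modelview matrix is a pure rotation, so its transpose is its inverse):
#include <glm/glm.hpp>

// Sketch only: move the camera forward along the current view direction.
void MoveFrontGlm(const glm::mat4 & modelView, glm::vec3 & position, float p_Distance)
{
    glm::mat3 l_InverseRotation = glm::transpose(glm::mat3(modelView));
    position += l_InverseRotation * glm::vec3(0.0f, 0.0f, -p_Distance);
}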
Note the hierarchy of camera classes. I've made a clear fundamental distinction between a camera that orbits a fixed viewing point outside of itself and a camera that rotates about itself, like in an FPS game. Both have their uses. The Oculus camera I have implemented is of the latter type (c_CameraOculusRift is a c_CameraQuake is a c_CameraCartesian (is a c_Camera)).
Thanks,
Daniel