Forum Discussion
ckoeber
12 years ago · Honored Guest
Reading Sensor data with GLM and OpenGL ...
Hello. So I have my application ported to the Rift using the SDK, and I can read sensor data fine. It's calculating the sensor data and converting it over to GLM that is giving me problems. I...
DoZo1971
12 years ago · Explorer
Cameras are hard.
My "big" insight was to separate concerns. Don't try to work on the ModelView matrix directly too soon. I have some "leading" member variables in my classes that are updated first. For the Oculus camera those would be the <yaw, pitch, roll> vector and the position. Those variables are updated via methods like MoveYaw(), SetYaw(), MoveFront(), MoveBack(), etc. Then, at the end, the ModelView matrix is created from these and passed on to OpenGL. In your case, since you want both the mouse and the sensor to influence the orientation, you cannot avoid storing the "user" <yaw, pitch, roll> vector separately (otherwise that information is lost once the Oculus orientation is incorporated). Then I would guess you only have to multiply the two matrices (the one from the Rift and the user one) at the end. Before reworking your camera architecture, you could start with your basic (no Oculus) code and just multiply its modelview matrix with the modelview matrix derived from the Oculus.
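A minimal sketch of that idea in C++, with a hand-rolled column-major Mat4 standing in for glm::mat4 so the snippet is self-contained (in real code you would use glm::yawPitchRoll() or glm::rotate() instead of the rotY() helper; the simplification to yaw-only is mine, not from the original post):

```cpp
#include <cmath>

// Column-major 4x4 matrix, same layout as GLM/OpenGL.
struct Mat4 { float m[16]; };

Mat4 identity() {
    Mat4 r{};
    r.m[0] = r.m[5] = r.m[10] = r.m[15] = 1.0f;
    return r;
}

// r = a * b, column-major: element (row, col) lives at m[col*4 + row].
Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int c = 0; c < 4; ++c)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                r.m[c * 4 + row] += a.m[k * 4 + row] * b.m[c * 4 + k];
    return r;
}

// Rotation about Y (yaw); glm::rotate(angle, vec3(0,1,0)) equivalent.
Mat4 rotY(float a) {
    Mat4 r = identity();
    r.m[0] =  std::cos(a); r.m[8]  = std::sin(a);
    r.m[2] = -std::sin(a); r.m[10] = std::cos(a);
    return r;
}

struct Camera {
    // "Leading" variables, updated first -- reduced to yaw for brevity.
    float userYaw   = 0.0f;  // driven by the mouse
    float sensorYaw = 0.0f;  // driven by the Rift sensor

    void MoveYaw(float d) { userYaw += d; }  // update the leading variable only

    // Only at the end of the frame is a matrix built: sensor orientation
    // composed with user orientation, then handed to OpenGL.
    Mat4 Orientation() const {
        return mul(rotY(sensorYaw), rotY(userYaw));
    }
};
```

Because the user yaw is kept as its own variable, folding in the sensor reading each frame never destroys the mouse-controlled part of the orientation.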
By the way, in your code I see a combined "ProjectionModelViewMatrix". Why is that? The Projection and the ModelView matrix are treated as separate entities by OpenGL.
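To illustrate keeping the two separate: the projection matrix depends only on the lens (FOV, aspect, near/far) and the view matrix only on the camera pose, so they are best built independently and combined only when uploading to the shader. A self-contained sketch of a glm::perspective-style matrix (column-major; parameter values in the comments are just example assumptions):

```cpp
#include <cmath>

// Column-major 4x4, same layout as GLM/OpenGL.
struct Mat4 { float m[16]; };

// Equivalent of glm::perspective(fovy, aspect, zNear, zFar):
// builds ONLY the projection; the camera pose stays in a separate
// view matrix, and the two are multiplied once per frame.
Mat4 perspective(float fovy, float aspect, float zNear, float zFar) {
    float f = 1.0f / std::tan(fovy / 2.0f);
    Mat4 r{};
    r.m[0]  = f / aspect;
    r.m[5]  = f;
    r.m[10] = (zFar + zNear) / (zNear - zFar);
    r.m[11] = -1.0f;
    r.m[14] = 2.0f * zFar * zNear / (zNear - zFar);
    return r;
}
```

Keeping them apart matters here: per-eye rendering on the Rift swaps the projection (different per-eye frustum) while the lighting math in the shader typically wants the ModelView alone, which a pre-combined ProjectionModelView matrix can no longer provide.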
Thanks,
Daniel