Forum Discussion
hesham
13 years ago · Protege
OpenGL Stereo Camera Setup and Convergence
Edit: I think that my baseline needs to change to get rid of the 0.5 up front, but everything else is still broken:
float const baselength = near_ * std::tan(M_PI * (fov_/2.0 / 180.0));
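For reference, a quick numeric check of that change with the values used in the call further down (fov_ = 89, near_ = 0.005): the half-height of a symmetric frustum at the near plane should be near_ * tan(fov_ / 2), so

    near_ * tan(44.5 deg)      ~= 0.0049   (the corrected line above)
    0.5 * near_ * tan(89 deg)  ~= 0.14     (the original line, roughly 29x too large)

which would explain the extreme zoom.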
I've been stuck for the past few days trying to get my cameras set up. I finally understand why I need to put all the eye-related transforms in the Projection transform and not the ModelView transform, but now everything seems to be either ultra-zoomed in or out. The other question I have is related to the application of the Rift orientation matrix, which I apply before rendering everything else, then push a matrix to render the world inside that. It seems like even if I look straight down I can still see things that are directly ahead, stretched out to the top of the monitor like I'm in a fun-house mirror. It seems like my FOV isn't really the 90 degrees that I thought. Here are some of the relevant snippets of code in case anyone has suggestions.
Here is the projection code. What's odd is that modifying the near plane causes the zoom effect: larger values zoom out, smaller ones zoom in. If the formulas were right, shouldn't it maintain the same aspect ratio and zoom regardless of the near and far plane values? The math looks right to me, but for some reason it makes a huge difference. Also, I can't pick an IOD that moves the two rendered images to match the lens IPD on the Rift, so nothing converges! At a distance the convergence seems OK, but as I walk closer in the VR world the separation widens until it's unviewable. The walking and looking transforms are in the second code snippet.
void stereoscopicPerspective(float fov_, float aspect, float near_, float far_, float stereo_offset, bool isLeft)
{
    // frustum half-height at the near plane (this is the line the edit above refers to)
    float const baselength = 0.5 * near_ * std::tan(M_PI * (fov_ / 180.0));
    if (isLeft) {
        glFrustum(-aspect * baselength - stereo_offset, aspect * baselength - stereo_offset, -baselength, baselength, near_, far_);
    } else {
        glFrustum(-aspect * baselength + stereo_offset, aspect * baselength + stereo_offset, -baselength, baselength, near_, far_);
    }
}
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
double IOD = 0.03;//0.5/2.0;
stereoscopicPerspective(89.0f, textureWidth/textureHeight, 0.005f, 1000.0f, IOD, isLeft);
double eyeOffset = (isLeft) ? -IOD : IOD;
glTranslatef(eyeOffset, 0.0f, 0.0f);
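For comparison, a minimal symmetric-frustum sketch: with stereo_offset at zero, the call above should reduce to what gluPerspective builds for the same parameters, which is one way to sanity-check the half-height formula.

// A symmetric frustum equivalent to gluPerspective(fov_, aspect, near_, far_):
// the half-height at the near plane is near_ * tan(fov_ / 2). Because left/right/top/bottom
// all scale with near_, changing the near plane alone should not change the apparent zoom;
// if it does, the baselength formula above is off.
float const top = near_ * std::tan(M_PI * fov_ / 360.0);   // tan(fov_ / 2), fov_ in degrees
glFrustum(-aspect * top, aspect * top, -top, top, near_, far_);
// ...which should give the same image as:
// gluPerspective(fov_, aspect, near_, far_);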
Here is my ModelView code. It might be an effect of the bad projection matrix, but it always seems like the world is pivoting weirdly around my head orientation. It isn't like I'm looking around; everything is sort of rotating, and when looking down I can still see the stuff that was directly in front of me, just stretched out to infinity at the top of the monitor with an extreme perspective effect.
double *orientation = getRiftOrientation();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glPushMatrix();
{
    // head orientation
    glMultMatrixd(orientation);
    glRotated(loc.getXRotation(), 1, 0, 0);
    glRotated(loc.getYRotation(), 0, 1, 0);
    // translate world down so you have some height
    glTranslated(0, -0.5, 0);
    renderSkybox();
    // walking position
    glTranslated(loc.getXPosition(), loc.getYPosition(), loc.getZPosition());
    glPushMatrix();
    {
        // rest of rendering code
9 Replies
- hesham (Protege): Note: this is my update from the MTBS3D forums, where I got some help, in case it's useful for anyone else doing OpenGL Rift development. I would love any comments on why this works, or whether it's wrong in any way.
You guys will love this :) All of it works now. tbowren mentioning the translation matrix multiplied by the projection implied to me that the translation needed to happen first. Here is my final, rock-solid tracking :) The key thing to note is that I'm doing the IOD translation first, followed by the projection matrix, followed by the Rift rotation. When rotating I was getting shear, and I fixed that by dividing the physical screen width by 2 (I was using 1280/800 instead of 640/800). One thing I will say is that it is very hard to get perfect focus because of the size of the pixels. I can read the text in ibex when up really close, but there is only so much you can do when a letter is many physical pixels wide :) Either way, thanks everyone! If anyone has more comments or questions, please let me know or ask here. I will say that using a VR desktop can be painful at the current resolution.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// 1) IOD translation first (one eye gets +IOD, the other -IOD)
glTranslated((i2 == 0) ? IOD : -IOD, 0, 0);
// 2) then the projection -- the per-eye aspect ratio uses half the physical screen width
gluPerspective(90.0f, width / 2.0 / height, 0.01f, 1000.0f);
// 3) then the Rift head orientation
glMultMatrixd(orientation);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
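One way to read why this ordering works: OpenGL post-multiplies each call onto the current matrix, so after the three calls above the projection stack holds, roughly,

    P_eff = Translate(+/-IOD, 0, 0) * Perspective(90, width/2/height, near, far) * RiftOrientation

so vertices are rotated by the head orientation first, then projected, and only then shifted horizontally. That last shift happens in post-projection (clip) space, sliding each eye's finished image sideways toward its lens center by a fixed amount instead of moving the camera through the world, which would explain why the convergence no longer drifts as you walk around.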
- cybereality (Grand Champion): Great that you got this working. Sorry I couldn't have been of more help.
- hesham (Protege): Thanks cyber!
For anyone who wants to read the other thread on this, you can find it at http://www.mtbs3d.com/phpBB/viewtopic.php?f=140&t=17108&p=118370.
- Lakritze (Honored Guest): Thank you so much, hesham, for bringing this problem up here and actually solving it!
The Oculus documentation teaches you about shifting the projection center to coincide with the center of the lens, but if you try to do this in old-style OpenGL it's not that obvious what to do.
Anyway, here is my code snippet, showing where to put the translation for the projection center and where to put the IOD translation:
void ApplyLeftFrustum()
{
    // Set the Projection Matrix
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glTranslatef(ProjectionCenterOffset, 0.0f, 0.0f);
    gluPerspective(FOV, AspectRatio, NearClippingDistance, FarClippingDistance);

    // Displace the world to the right by half of the IOD
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(IOD / 2, 0.0f, 0.0f);
}

void ApplyRightFrustum()
{
    // Set the Projection Matrix
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glTranslatef(-ProjectionCenterOffset, 0.0f, 0.0f);
    gluPerspective(FOV, AspectRatio, NearClippingDistance, FarClippingDistance);

    // Displace the world to the left by half of the IOD
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(-IOD / 2, 0.0f, 0.0f);
}
I hope this helps someone, sometime. For information on how to calculate the values for ProjectionCenterOffset, FOV, etc., take a look at the Oculus SDK documentation.
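As a rough sketch of that calculation (not verbatim from the SDK docs; the hmdInfo variable below is hypothetical, and the field names assume the HMDInfo struct from the 0.2.x SDK):

// Each eye sees half the physical screen, so its viewport center sits HScreenSize / 4 from the
// screen edge, while the lens center sits LensSeparationDistance / 2 from the middle of the
// screen. The difference, expressed in the [-1, 1] range of the per-eye viewport, is the offset.
float viewCenter             = hmdInfo.HScreenSize * 0.25f;
float eyeProjectionShift     = viewCenter - hmdInfo.LensSeparationDistance * 0.5f;
float ProjectionCenterOffset = 4.0f * eyeProjectionShift / hmdInfo.HScreenSize;

// Vertical FOV from the physical screen height and the eye-to-screen distance (this ignores the
// distortion scale factor the SDK also folds in); gluPerspective expects degrees.
float halfScreenDistance = hmdInfo.VScreenSize * 0.5f;
float FOV = 2.0f * std::atan(halfScreenDistance / hmdInfo.EyeToScreenDistance) * 180.0f / float(M_PI);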
- scottnykl (Honored Guest): Loosely related, w.r.t. the perspective projection matrix computations within the Oculus SDK: I've noticed the SDK in 0.2.5c doesn't compute the perspective projection matrix according to the canonical OpenGL perspective projection matrix. When integrating the Oculus into my OpenGL engine, I had to scale the perspective matrix element at [row 2, column 3] by a factor of 2.0.
I've actually written a blog article about this at http://nykl.net/?page_id=175. Hopefully someone will find it useful. Perhaps others have a justification for why the perspective projection matrix leaves out many of the "correct" computations. I assume the Oculus developers expect far - near ~= 0; however, this is not always a good assumption when rendering vast distances (for flight simulators, for example). A more detailed explanation is given on my blog.
A similar problem arose for orthographic projections, but I haven't yet written up my solution to that, although I have it working.
- bluenote (Explorer):
"scottnykl" wrote:
I've noticed the SDK in 0.2.5c doesn't compute the perspective projection matrix according to the canonical OpenGL Perspective Projection Matrix. When integrating the Oculus into my OpenGL engine, I had to scale the perspective matrix element at [row 2, column 3] by a factor of 2.0.
I've actually written a blog article about this at http://nykl.net/?page_id=175 . Hopefully someone will find it useful. Perhaps others have a justification for why the perspective projection matrix leaves out many of the "correct" computations. I assume that the Oculus developers would believe far - near ~= 0; however, this is not always a good assumption when rendering vast distances (for flight simulators, for example). A more detailed explanation is given at my blog.
Good point, I was confused by this as well. However, I did switch back and forth between a factor of 1.0 (as in the Oculus documentation) and a factor of 2.0 (as in the OpenGL documentation), and I can't really tell which is right or wrong :-/ To me, a factor of 2.0 just seems to increase the world scale, but I simply can't say what the right scale is... So far, I haven't found the time to go through the math and check what the formally correct solution is. Oh, and by the way, following your argument, matrix element M[2][2] is also "wrong", right? The Oculus documentation only has zFar in the numerator, whereas OpenGL has (zFar + zNear). Did you change this matrix element accordingly? OK, since zNear << zFar this should be almost negligible... An explanation from the Oculus devs on this would be great.
- jherico (Adventurer):
"bluenote" wrote:
Good point, I was confused by this as well.
It's pretty trivial to just construct your own projection matrix instead of relying on the Oculus SDK computation:
glm::uvec2 eyeSize(ovrHmdInfo.HResolution / 2, ovrHmdInfo.VResolution);
OVR::Util::Render::StereoConfig ovrStereoConfig;
ovrStereoConfig.SetHMDInfo(ovrHmdInfo);
gl::Stacks::projection().top() =
    glm::perspective(ovrStereoConfig.GetYFOVRadians(),
                     aspect(eyeSize), 0.01f, 1000.0f);
It also means you have direct control over the near and far clipping planes, as opposed to the projection matrix you get out of the SDK.
- bluenote (Explorer):
"jherico" wrote:
"bluenote" wrote:
Good point, I was confused by this as well.
It's pretty trivial to just construct your own projection matrix instead of relying on the Oculus SDK computation:
I know, and that is exactly what I already do (I'm developing in Scala, so I rewrote those parts of GLM). The essential question is: why does Equation 5 in the SDK documentation differ from this equation (and from pretty much every other reference on projection matrices)? They have the same goal, converting the camera frustum to clip coordinates, so there should not be a difference. So either Equation 5 is wrong, as scottnykl suggests, or there is a reason why the equation must differ from a canonical projection matrix when using the Rift?
- glenf (Explorer): Hi all, I just discovered this same problem, where the frustum creation is different in the SDK (in particular, in CreateProjection() in OVR_Stereo.cpp) compared to the usual OpenGL (and GLM) calculation.
If you "do the math", you'll find that the Oculus SDK way is like DirectX, where Normalized Device Coordinates range from 0 (near) to 1 (far) in Z. In OpenGL, the NDC range is -1 (near) to 1 (far). That's the only difference between these two calculations:
projection.M[2][2] = -handednessScale * zFar / (zNear - zFar);
projection.M[2][3] = (zFar * zNear) / (zNear - zFar);
vs:
// "Corrected" code (for OpenGL, not DirectX!)
projection.M[2][2] = -handednessScale * (zFar + zNear) / (zNear - zFar);
projection.M[2][3] = (2.0 * zFar * zNear) / (zNear - zFar);
If you use these two projection matrices to project a homogeneous point (0, 0, -zNear, 1) (then divide by w), in the first case you'll get a projected z coordinate of 0; in the second case, -1. Projecting a point at (0, 0, -zFar, 1) in both cases results in a z NDC of +1, as expected. What does this mean, in practical terms? Not sure -- I guess it means you're "wasting" some of the depth buffer resolution for very near objects?
Glen.
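A quick standalone check of those NDC ranges (the zNear/zFar values here are arbitrary, and the right-handed third-row terms are written out directly rather than via handednessScale):

#include <cstdio>

// For a right-handed eye space, a point in front of the camera sits at z_eye = -d, and the
// projection's third row gives z_clip = A * z_eye + B, with w_clip = d (from the -1 in row 3).
double ndcZ(double A, double B, double d) { return (A * -d + B) / d; }

int main() {
    const double n = 0.01, f = 1000.0;

    // SDK-style (DirectX-like) third row: depth maps to 0..1
    const double A_dx = f / (n - f),          B_dx = (f * n) / (n - f);
    // OpenGL-style third row: depth maps to -1..1
    const double A_gl = (f + n) / (n - f),    B_gl = (2.0 * f * n) / (n - f);

    std::printf("near plane: dx=%+.3f  gl=%+.3f\n", ndcZ(A_dx, B_dx, n), ndcZ(A_gl, B_gl, n)); //  0 vs -1
    std::printf("far plane:  dx=%+.3f  gl=%+.3f\n", ndcZ(A_dx, B_dx, f), ndcZ(A_gl, B_gl, f)); // +1 vs +1
    return 0;
}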