
Basic Stereo Separation Problem (Custom C++/OpenGL 3.3)

inDigiNeous
Honored Guest
Hello!

I am in the process of developing a recursive geometry generator with C++ and OpenGL (a kind of port of http://GeoKone.NET).

Pretty new to OpenGL programming and the Oculus Rift, so any help would be appreciated!
I have a custom C++ project using GLFW, GLM and modern OpenGL 3.3.

I have got stereo rendering working: I render the scene to a texture, setting the glViewport and a different view matrix for each eye (getting the adjustment values from the Oculus SDK StereoConfig class), then render this texture as a fullscreen quad on the screen.

Here is what I have currently displayed:



Now, this has some proper stereo separation applied. If I look at this cross-eyed I can see the stereo effect there, and when it is moving, it looks like the geometry is going forward and backward along the Z-axis.

But when I look at this fullscreen with the Oculus DK1, the stereo separation is clearly off: both geometries that should be in the center are offset too far to the right and left, and neither the center geometries nor the grid line up between the two eyes.

Here is the code I use to render:

The setupCamera function, which should set up the viewport and the view & projection matrices:

void GeokoneController::setupCamera(OVR::Util::Render::StereoEye eye) {
    OVR::Util::Render::StereoConfig *stereoConfig = _oculus->getStereoConfig();
    OVR::Util::Render::StereoEyeParams params = stereoConfig->GetEyeRenderParams(eye);

    // Set the viewport for this eye
    glViewport(params.VP.x, params.VP.y, params.VP.w, params.VP.h);

    // Get the projection center offset, IPD and vertical FOV
    float projCenterOffset = stereoConfig->GetProjectionCenterOffset();
    float ipd = stereoConfig->GetIPD();
    float yfov = stereoConfig->GetYFOVRadians();

    // The offsets applied to the projection and view matrices
    // for each eye, to achieve the stereo effect
    glm::vec3 projectionOffset(-projCenterOffset / 2.0f, 0.0f, 0.0f);
    glm::vec3 viewOffset(-ipd / 2.0f, 0.0f, 0.0f);

    // Negate the left-eye versions
    if (eye == OVR::Util::Render::StereoEye_Left) {
        viewOffset *= -1.0f;
        projectionOffset *= -1.0f;
    }

    // The view matrix is translated based on the eye offset
    _view.top() = _view.top() * glm::translate(glm::mat4(1.0f), viewOffset);

    // The projection matrix is calculated from the FOV and viewport aspect ratio
    _projection.top() = glm::perspective(yfov, _viewportAspectRatio, PSIOculus::ZNEAR, PSIOculus::ZFAR);

    // If we are distorting the views, the projection also needs the offset
    if (_renderMode == RENDER_MODE_STEREO_DISTORT) {
        _projection.top() = _projection.top() * glm::translate(glm::mat4(1.0f), projectionOffset);
    }
}


The Scene Drawing:

void GeokoneController::drawScene() {
    PolyForm *poly;

    // Translate the scene back along the Z axis to compose the view matrix
    glm::vec3 translation = glm::vec3(0.0f, 0.0f, -1.0f);
    glm::mat4 view_translation = glm::translate(glm::mat4(1.0f), translation);
    _view.top() = _view.top() * view_translation;

    // Draw the horizon grid
    drawGrid();

    // Draw all the polyforms into the framebuffer texture
    // (this renders to the first attachment, i.e. the color buffer texture)
    glUseProgram(_poly_prog);
    for (int i = 0; i < _container->getNumPolyForms(); i++) {
        poly = _container->getPoly(i);

        glBindVertexArray(poly->getVao());
        drawPolyRecursion(poly);
        glBindVertexArray(0);
    }
}


The main Rendering Loop:

// Render to our framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, _framebuffer_id);

glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// Draw left viewpoint
_projection.push(_projection.top());
_view.push(_view.top());
setupCamera(OVR::Util::Render::StereoEye_Left);
drawScene();
_view.pop();
_projection.pop();

// Draw right viewpoint
_projection.push(_projection.top());
_view.push(_view.top());
setupCamera(OVR::Util::Render::StereoEye_Right);
drawScene();
_view.pop();
_projection.pop();

// Render to the screen
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, window_width, window_height); // Render to the whole window, from the lower left corner to the upper right

// Clear the screen
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// Render the buffer texture
glUseProgram(_fs_quad_prog);

// Draw the fullscreen quad containing the texture
glBindVertexArray(_fs_quad_vao);
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindVertexArray(0);

_video->flip();


Now, I haven't set up any distortion or roll/pitch/yaw support yet; I would like to get the basic stereo rendering working first. I don't understand why this doesn't work. Shouldn't the view matrix calculation in the setupCamera method be correct?

As I understand it, the view matrix just needs to be translated by IPD / 2.0 for each eye. Is that correct?
And the projection matrix only needs adjusting if stereo distortion is used, which I am not doing yet.
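To make my assumption concrete, here is a minimal sketch of what I think the view-matrix part should do (the function name is hypothetical, and the 0.064 m IPD below is just a typical default used as a stand-in for GetIPD()):

```cpp
#include <cassert>
#include <cmath>

// Per-eye X translation for the view matrix: each eye camera sits
// half the interpupillary distance to the side of the center camera.
// Because the view matrix is the inverse of the camera transform,
// translating the view by +halfIPD moves the left camera to the left.
inline float eyeViewOffsetX(bool leftEye, float ipd) {
    const float halfIPD = ipd / 2.0f;
    return leftEye ? halfIPD : -halfIPD;
}
```

In GLM terms this would be the glm::translate(glm::mat4(1.0f), glm::vec3(eyeViewOffsetX(leftEye, ipd), 0.0f, 0.0f)) factor multiplied into the view matrix.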

So how come the separation is incorrect? What am I missing here? Could it be the roll/pitch/yaw?
I have been looking at sample code from http://rifty-business.blogspot.fi/2013/09/a-complete-cross-platform-oculus-rift.html (a great resource!) and trying to figure it out, but I just don't get it.

rupy
Honored Guest
That's my guess, only one way to find out!
"It's like Homeworld in first person." Disable Aero and vSync for a completely simulator sickness free experience with 2xHz FPS. Keep the config utility open for tracking to work.

inDigiNeous
Honored Guest
Yay! Got it working.
What a feeling after learning C++, modern OpenGL, GLSL and everything related to that for the past 4 months to actually see proper stereoscopic rendering running inside the Rift 🙂

Thanks to jherico, rupy and LianDM for helping out! Really appreciate it, been banging my head against the wall with this for a while.

What I did was replace my setupCamera function with essentially the code from the Oculus SDK PDF documentation that explains the stereo separation:


void GeokoneController::setupCamera(OVR::Util::Render::StereoEye eye) {
    OVR::HMDInfo *hmd = _oculus->getHMDInfo();
    OVR::Util::Render::StereoConfig *stereoConfig = _oculus->getStereoConfig();
    OVR::Util::Render::StereoEyeParams params = stereoConfig->GetEyeRenderParams(eye);

    float aspectRatio = float(hmd->HResolution * 0.5f) / float(hmd->VResolution);
    float halfScreenDistance = (hmd->VScreenSize / 2.0f);
    float yfov = 2.0f * atan(halfScreenDistance / hmd->EyeToScreenDistance);

    float viewCenter = hmd->HScreenSize * 0.25f;
    float eyeProjectionShift = viewCenter - hmd->LensSeparationDistance * 0.5f;
    float projectionCenterOffset = 4.0f * eyeProjectionShift / hmd->HScreenSize;
    float halfIPD = hmd->InterpupillaryDistance / 2.0f;

    // Set the viewport for this eye
    glViewport(params.VP.x, params.VP.y, params.VP.w, params.VP.h);

    // The offsets applied to the projection and view matrices
    // for each eye, to achieve the stereo effect
    glm::vec3 projectionOffset = glm::vec3(-projectionCenterOffset, 0.0f, 0.0f);
    glm::vec3 viewOffset = glm::vec3(-halfIPD, 0.0f, 0.0f);

    // Negate the left-eye versions
    if (eye == OVR::Util::Render::StereoEye_Left) {
        projectionOffset *= -1.0f;
        viewOffset *= -1.0f;
    }

    // The projection matrix is translated by the lens offset
    glm::mat4 projCenter = glm::perspective(yfov, aspectRatio, PSIOculus::ZNEAR, PSIOculus::ZFAR);
    glm::mat4 projTranslation = glm::translate(glm::mat4(1.0f), projectionOffset);
    _projection.top() = projTranslation * projCenter;

    // The view matrix is translated by half the IPD
    glm::mat4 viewTranslation = glm::translate(glm::mat4(1.0f), viewOffset);
    _view.top() = viewTranslation * _view.top();
}


And this worked.

The first time I read this code I had no idea what it was doing; after writing here and thinking about the process, I now actually understand what is happening there and can follow the matrix transformations too.

I guess jherico's example code did something behind the classes that I didn't notice. Big thanks still to jherico; your examples and blog have been really helpful!

jherico
Adventurer
"inDigiNeous" wrote:

So I am missing this offsetting the left and right sides towards the centre still, although the view matrix IPD and thus the scene rendering for both eyes are correct, they are just placed in the wrong position on the screen.


Not quite. We've actually removed that chapter from the latest version of the book. The issue is that we were trying to get across the importance of the lens offset from the center of the (half of the) screen it was looking at. Unfortunately, doing so with 2D images and then doing it differently with 3D images once we got to that section of the book simply made the point of chapter 4 confusing.

If you look at the example code you'll see this:


float lensOffset = 1.0f - (2.0f * ovrHmdInfo.LensSeparationDistance / ovrHmdInfo.HScreenSize);

for_each_eye([&](StereoEye eye){
    float eyeLensOffset = (eye == LEFT ? -lensOffset : lensOffset);
    projections[eye] = glm::ortho(
        -1.0f + eyeLensOffset, 1.0f + eyeLensOffset,
        -1.0f / eyeAspect, 1.0f / eyeAspect);
});


First off, the calculation of lensOffset should be producing the same value you'd get from GetProjectionCenterOffset(). Second, the resulting projection matrix is essentially the same as if you took an identity matrix and translated it by eyeLensOffset on the X axis.
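In fact the two formulas are algebraically identical: 4 * (H/4 - LSD/2) / H simplifies to 1 - 2 * LSD / H. A quick check, written as standalone functions (the names here are mine, not from either codebase):

```cpp
#include <cassert>
#include <cmath>

// Offset computed the way the SDK docs do it, via the per-eye view center.
inline float offsetViaViewCenter(float hScreenSize, float lensSeparation) {
    float viewCenter = hScreenSize * 0.25f;
    float eyeProjectionShift = viewCenter - lensSeparation * 0.5f;
    return 4.0f * eyeProjectionShift / hScreenSize;
}

// Offset computed the way the book's example code does it.
inline float offsetViaLensSeparation(float hScreenSize, float lensSeparation) {
    return 1.0f - (2.0f * lensSeparation / hScreenSize);
}
```

Both functions return the same value for any screen size and lens separation, which is why the example code and GetProjectionCenterOffset() agree.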

The important thing to understand is that there are two matrix manipulations, but they don't really have anything to do with one another.

The projection matrix manipulation has to do with correcting for the design of the headset (the lenses not being centered over their respective screen halves). The modelview matrix manipulation has to do with providing a (slightly) different point of view for each eye in order to get stereo effects. Seeing double in the Rift almost always means there's something wrong with the projection matrix setup. Messing up the modelview matrix would only cause you to see double if it was wrong by orders of magnitude, and even then it would only happen for items close enough to the user to provide parallax.

If you're seeing double in the Rift, you should fix it by doing NO per-eye modelview transformation, and making sure that you've got the projection matrix right. You won't get any depth in your scene, but that's fine, because until you get the projection matrix right, you won't be able to see depth anyway. Once you no longer see double when using the exact same modelview matrix on each eye, you can re-enable the per-eye modelview offset and verify you're getting the correct sensation of depth.
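That debugging recipe can be sketched as a single switch that zeroes out only the per-eye view offset while leaving the lens-correction offset in place (the struct and names below are hypothetical, just to illustrate the idea):

```cpp
#include <cassert>

struct EyeOffsets {
    float viewX; // per-eye view (modelview) translation: gives stereo depth
    float projX; // per-eye projection translation: corrects for the lenses
};

// With stereoDepth disabled, both eyes render the identical image;
// any doubling still visible then comes from the projection setup.
// The projection offset is always applied, since the scene is
// unviewable without it anyway.
inline EyeOffsets makeEyeOffsets(bool leftEye, float halfIPD,
                                 float projCenterOffset, bool stereoDepth) {
    float sign = leftEye ? 1.0f : -1.0f;
    EyeOffsets o;
    o.viewX = stereoDepth ? sign * halfIPD : 0.0f;
    o.projX = sign * projCenterOffset;
    return o;
}
```

Once the flattened image fuses correctly, flip stereoDepth back on and verify the sensation of depth returns.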

I've got a whole blog post on how to use a rendering of a small cube to verify that you've got both matrices correct. Might prove useful.
Brad Davis - Developer for High Fidelity Co-author of Oculus Rift in Action

jherico
Adventurer
"inDigiNeous" wrote:
Yay! Got it working.

congrats!

"inDigiNeous" wrote:

float projectionCenterOffset = 4.0f * eyeProjectionShift / hmd->HScreenSize;


You should find that the final value of projectionCenterOffset is ~0.15, and is the same as the value that you get from the OVR function that returns the projection center offset.
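Plugging in the DK1's published screen constants (HScreenSize 0.14976 m, LensSeparationDistance 0.0635 m; these are spec-sheet values rather than numbers read from a device) bears this out:

```cpp
#include <cassert>
#include <cmath>

// projectionCenterOffset from DK1 physical constants, following the
// same steps as the SDK documentation code above.
inline float dk1ProjectionCenterOffset() {
    const float hScreenSize = 0.14976f;    // DK1 horizontal screen size (m)
    const float lensSeparation = 0.0635f;  // DK1 lens separation (m)
    float viewCenter = hScreenSize * 0.25f;                        // 0.03744
    float eyeProjectionShift = viewCenter - lensSeparation * 0.5f; // 0.00569
    return 4.0f * eyeProjectionShift / hScreenSize;                // ~0.152
}
```

That comes out to roughly 0.152, matching the ~0.15 figure.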

inDigiNeous
Honored Guest
"jherico" wrote:
"inDigiNeous" wrote:
Yay! Got it working.

congrats!

"inDigiNeous" wrote:

float projectionCenterOffset = 4.0f * eyeProjectionShift / hmd->HScreenSize;


You should find that the final value of projectionCenterOffset is ~0.15, and is the same as the value that you get from the OVR function that returns the projection center offset.

Thank you for the clarification. I have now simplified my code.