Basic Stereo Separation Problem (Custom C++/OpenGL 3.3)

inDigiNeous
Honored Guest
Hello!

I am in the process of developing a recursive geometry generator with C++ and OpenGL (a port, of sorts, of http://GeoKone.NET).

I'm pretty new to OpenGL programming and the Oculus Rift, so any help would be appreciated!
I have a custom C++ project using GLFW, GLM and modern OpenGL 3.3.

I have stereo rendering working: I render the scene to a texture, setting the viewport (glViewport) and a different view matrix for each eye, taking the adjustment values from the Oculus SDK's StereoConfig class, and then render that texture as a fullscreen quad on the screen.
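
For context, the post never shows how the render target (referenced below as _framebuffer_id) is created. A minimal sketch of the kind of GL 3.3 render-to-texture setup this implies; all names and sizes here are assumptions, and a current GL context is assumed:

// Hypothetical FBO setup: a color texture plus a depth renderbuffer.
const int width = 1280, height = 800;  // e.g. the DK1's full panel

GLuint fbo, colorTex, depthRbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// Color attachment: the texture later drawn as a fullscreen quad
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

// Depth attachment, so the glClear(GL_DEPTH_BUFFER_BIT) in the render
// loop below actually has something to clear
glGenRenderbuffers(1, &depthRbo);
glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRbo);

assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);
glBindFramebuffer(GL_FRAMEBUFFER, 0);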

Here is what I have currently displayed:

[screenshot: the current side-by-side stereo output]

Now, this has some stereo separation applied: if I look at it cross-eyed I can see the stereo effect there, and when it is moving, the geometry looks like it is going forward and backward along the Z axis.

But when I look at this fullscreen with the Oculus DK1, the stereo separation is clearly off: both geometries that should be in the center are offset too far to the right and left, and neither the centered geometries nor the grid match up between the eyes.

Here is the code I use to render:

The setupCamera function, which should set up the viewport and the view and projection matrices:

void GeokoneController::setupCamera(OVR::Util::Render::StereoEye eye) {
    OVR::Util::Render::StereoConfig *stereoConfig = _oculus->getStereoConfig();
    OVR::Util::Render::StereoEyeParams params = stereoConfig->GetEyeRenderParams(eye);

    // Set the viewport to this eye's half of the render target
    glViewport(params.VP.x, params.VP.y, params.VP.w, params.VP.h);

    // Get the projection center offset, IPD and vertical FOV from the SDK
    float projCenterOffset = stereoConfig->GetProjectionCenterOffset();
    float ipd = stereoConfig->GetIPD();
    float yfov = stereoConfig->GetYFOVRadians();

    // The offsets for the projection and view matrices of each eye,
    // to achieve the stereo effect
    glm::vec3 projectionOffset(-projCenterOffset / 2.0f, 0, 0);
    glm::vec3 viewOffset = glm::vec3(-ipd / 2.0f, 0, 0);
    // Negate the offsets for the left eye
    if (eye == OVR::Util::Render::StereoEye_Left) {
        viewOffset *= -1.0f;
        projectionOffset *= -1.0f;
    }

    // The view matrix is translated based on the eye offset
    _view.top() = _view.top() * glm::translate(glm::mat4(1.0f), viewOffset);

    // The projection matrix is calculated from the FOV and the viewport
    // aspect ratio. (yfov is in radians, so the GLM build must expect
    // radians here, e.g. via GLM_FORCE_RADIANS on older GLM versions.)
    _projection.top() = glm::perspective(yfov, _viewportAspectRatio, PSIOculus::ZNEAR, PSIOculus::ZFAR);

    // If we are distorting the views, the projection also needs to be
    // adjusted with the offset
    if (_renderMode == RENDER_MODE_STEREO_DISTORT) {
        _projection.top() = _projection.top() * glm::translate(glm::mat4(1.0f), projectionOffset);
    }
}


The Scene Drawing:

void GeokoneController::drawScene() {
    PolyForm *poly;

    glm::mat4 view_translation;
    glm::vec3 translation = glm::vec3(0.0f);
    translation.z = -1.0f;

    // Translate the scene one unit away from the camera
    view_translation = glm::translate(glm::mat4(1.0f), translation);

    // Compose the translation into the view matrix
    _view.top() = _view.top() * view_translation;

    // Draw the horizon grid
    drawGrid();

    // Draw all the polyforms. This renders into the framebuffer's
    // first color attachment, i.e. the texture / color buffer.
    glUseProgram(_poly_prog);
    for (int i = 0; i < _container->getNumPolyForms(); i++) {
        poly = _container->getPoly(i);

        glBindVertexArray(poly->getVao());
        drawPolyRecursion(poly);
        glBindVertexArray(0);
    }
}


The main Rendering Loop:

// Render to our framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, _framebuffer_id);

glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// Draw left viewpoint
_projection.push(_projection.top());
_view.push(_view.top());
setupCamera(OVR::Util::Render::StereoEye_Left);
drawScene();
_view.pop();
_projection.pop();

// Draw right viewpoint
_projection.push(_projection.top());
_view.push(_view.top());
setupCamera(OVR::Util::Render::StereoEye_Right);
drawScene();
_view.pop();
_projection.pop();

// Render to the screen
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, window_width, window_height); // Render to the whole window, from the lower left to the upper right corner

// Clear the screen
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// Render the buffer texture
glUseProgram(_fs_quad_prog);

// Draw the fullscreen quad containing the texture
glBindVertexArray(_fs_quad_vao);
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindVertexArray(0);

_video->flip();


Now, I haven't set up distortion or roll/pitch/yaw support yet; I would like to get the basic stereo rendering working first. I don't understand why this doesn't work. The view matrix calculation in the setupCamera method should be correct, shouldn't it?

As I understand it, the view matrix just needs to be offset by IPD / 2.0 for each eye; is that correct?
And the projection matrix only needs adjusting if stereo distortion is used, which I am not doing yet.

So how come the separation is incorrect? What am I missing here? Could it be the roll/pitch/yaw?
I have been looking at the sample code from http://rifty-business.blogspot.fi/2013/09/a-complete-cross-platform-oculus-rift.html (a great resource!) and trying to figure it out, but I just don't get it.
14 REPLIES

LiamDM
Explorer
I believe it MAY (not 100% sure, as I haven't done much experimenting with code similar to yours) be something to do with the glm::translate you are using and how it handles matrices. It may be a problem with the difference between vectors and vertices, and with how you are trying to use vertices in your matrix instead of vectors; check this out:
http://stackoverflow.com/questions/18106287/glm-translate-matrix-does-not-translate-the-vector

If not, it may be because you need to integrate shaders into your code; you can see a problem caused by this here:
https://groups.google.com/forum/#!topic/coin3d-discuss/WH-pqpFahNQ

If all else fails, I recommend you try again with new sample code! Good luck!
Again, this may not be the problem, but it's always good to know some of the common "pitfalls" people find themselves in when coding for the Oculus Rift.

Also, check this out, it is by the same guy you have the sample code from, it may give you some ideas:
https://github.com/OculusRiftInAction/OculusRiftInAction/blob/master/source/Example_4_4_Display2dSte...
https://github.com/OculusRiftInAction/OculusRiftInAction/tree/master/source
Liam DM Software Engineer
LiamDM@SHELL ~ $ sudo apt-get install GoodCodingPractice 
 ERROR: This package is not compatible with your system

inDigiNeous
Honored Guest
"LiamDM" wrote:
I believe it MAY (not 100% sure, as I haven't done much experimenting with code similar to yours) be something to do with the glm::translate you are using and how it handles matrices. It may be a problem with the difference between vectors and vertices, and with how you are trying to use vertices in your matrix instead of vectors; check this out:
http://stackoverflow.com/questions/18106287/glm-translate-matrix-does-not-translate-the-vector

Thank you for the links and suggestions, LiamDM, I will try these out! I'll write back if I find the solution.
I'll probably have to start from a clean slate and do a new test project to see what is going on exactly.

inDigiNeous
Honored Guest
I have double-checked that glm::translate actually translates the views, but maybe there is something subtle I am missing; maybe it's translating with some erroneous offset, or something else.


_view.top() left
-----------------------------
1 0 0 0.032
0 1 0 0
0 0 1 0
0 0 0 1
_view.top() right
-----------------------------
1 0 0 -0.032
0 1 0 0
0 0 1 0
0 0 0 1
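
Those values look plausible (0.032 is half of the default 64 mm IPD, in meters). For reference, a quick way to reproduce this kind of dump, assuming GLM's string_cast extension; note that GLM stores matrices column-major, so the translation sits in the fourth column:

#define GLM_ENABLE_EXPERIMENTAL  // needed for gtx headers on newer GLM
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtx/string_cast.hpp>
#include <iostream>

// Sanity check that glm::translate produces the expected per-eye offsets
int main() {
    glm::mat4 left  = glm::translate(glm::mat4(1.0f), glm::vec3( 0.032f, 0.0f, 0.0f));
    glm::mat4 right = glm::translate(glm::mat4(1.0f), glm::vec3(-0.032f, 0.0f, 0.0f));
    std::cout << glm::to_string(left)  << std::endl;
    std::cout << glm::to_string(right) << std::endl;
    return 0;
}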

jherico
Adventurer
"inDigiNeous" wrote:

// The matrixes for offsetting the projection and view
// for each eye, to achieve stereo effect
glm::vec3 projectionOffset(-projCenterOffset / 2.0f, 0, 0);



The projectionCenterOffset should not be divided by 2. The IPD is the 'distance between the eyes', so each eye gets half of it to properly set up the stereo separation. However, the projection center offset is the distance between the viewport center and the lens axis, in normalized device coordinates; there's no reason to divide it. It should have a value of ~0.15.

Sorry for misleading you. I guess I'll have to go and fix my blog post.
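
Applied to the setupCamera code above, the correction would look something like this (a sketch, not jherico's exact code):

// Split only the IPD between the eyes; apply the projection center
// offset whole (~0.15 in NDC for the DK1)
glm::vec3 projectionOffset(-projCenterOffset, 0, 0);  // no division by 2
glm::vec3 viewOffset(-ipd / 2.0f, 0, 0);              // half per eye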
Brad Davis - Developer for High Fidelity, co-author of Oculus Rift in Action

inDigiNeous
Honored Guest
"jherico" wrote:

The projectionCenterOffset should not be divided by 2. The IPD is the 'distance between the eyes', so each eye gets half of it to properly set up the stereo separation. However, the projection center offset is the distance between the viewport center and the lens axis, in normalized device coordinates; there's no reason to divide it. It should have a value of ~0.15.

Sorry for misleading you. I guess I'll have to go and fix my blog post.


Thanks for answering, jherico! You have some great examples; they have helped me a lot.

But your examples also divide it, e.g. in Example_2_4_HelloRift.cpp:


// The projection offset and lens offset are both in normalized
// device coordinates, i.e. [-1, 1] on both the X and Y axis
glm::vec3 projectionOffsetVector =
    glm::vec3(ovrStereoConfig.GetProjectionCenterOffset() / 2.0f, 0, 0);
eyeArgs[LEFT].projectionOffset =
    glm::translate(glm::mat4(), projectionOffsetVector);
eyeArgs[RIGHT].projectionOffset =
    glm::translate(glm::mat4(), -projectionOffsetVector);


Here GetProjectionCenterOffset() is divided by 2.0f; isn't this correct?

rupy
Honored Guest
The snowflakes should align in the lenses; you can see directly from your image, which is symmetrical, that this won't work.

[image: rupy's example render]

It's interesting that there are no official pixel metrics for the lens-cap plastic from Oculus. I render two 720x720 squares, offset 40 pixels from the top toward the vertical center, and in the middle they are scissored so they don't overlap, i.e. cropped 80 pixels from the center on each eye. Dunno if this is "correct", but it works!

This means 64 mm is 560 pixels, if I calculate correctly; can anyone official from Oculus confirm?

Hm, what is the DPI of the DK1?

Also, are you supposed to move the centers of the eyes if the IPD is different from 6.4 cm? It's silly that so many fundamental questions have gone unanswered for almost a year!

Edit: looking closer at this, I think it's more like 740x740 with a 100-pixel crop... why can't Oculus answer this?!
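
As a rough cross-check of these numbers (an editorial back-of-the-envelope calculation, using the DK1 panel metrics the SDK itself reports: HScreenSize = 0.14976 m at a horizontal resolution of 1280):

// Pixels per meter on the DK1 panel, and what that implies for a
// 64 mm IPD and for the panel's pixel density
float pxPerMeter = 1280.0f / 0.14976f;   // ~8547 px/m
float ipdPx      = 0.064f * pxPerMeter;  // a 64 mm IPD spans ~547 px
float dpi        = pxPerMeter * 0.0254f; // ~217 DPI

So roughly 547 px, close to (but a bit under) the 560 px estimated above.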
"It's like Homeworld in first person." Disable Aero and vSync for a completely simulator sickness free experience with 2xHz FPS. Keep the config utility open for tracking to work.

jherico
Adventurer
"inDigiNeous" wrote:
"jherico" wrote:

The projectionCenterOffset should not be divided by 2. The IPD is the 'distance between the eyes' so each eye gets half of it to properly set up stereo separation. However, the projection center offset is the distance between the viewport center and the lens axis in noramlized device coordinates. There's no reason to divide it. It should have a value of ~0.15.

Sorry for misleading you. I guess I'll have to go and fix my blog post.


Thanks for answering Jericho, you have some great examples, they have helped me a lot!

But in your examples, it also has eg. in Example_2_4_HelloRift.cpp:


// The projection offset and lens offset are both in normalized
// device coordinates, i.e. [-1, 1] on both the X and Y axis
glm::vec3 projectionOffsetVector =
glm::vec3(ovrStereoConfig.GetProjectionCenterOffset() / 2.0f, 0, 0);
eyeArgs[LEFT].projectionOffset =
glm::translate(glm::mat4(), projectionOffsetVector);
eyeArgs[RIGHT].projectionOffset =
glm::translate(glm::mat4(), -projectionOffsetVector);


The GetProjectionCenterOffset() is divided by 2.0f, isn't this correct ?


Well, that's (even more) embarrassing. I'll have to review my code, but I'm fairly certain that the projection offset isn't supposed to be divided by 2; my common 'RiftApp' class doesn't do so, and neither do a number of my other examples. Of course, the new SDK makes all this academic by providing a different mechanism for getting a complete projection matrix out of the SDK for a given eye (defined in terms of an FovPort).

I'll go back through the examples when I get home this evening.
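
For reference, a minimal sketch of the newer mechanism being described, assuming the 0.3.x C API (names from that SDK; creation of the hmd handle elided):

#include <OVR_CAPI.h>

// The SDK returns a complete per-eye projection matrix from an
// ovrFovPort, so the manual center-offset bookkeeping goes away
ovrHmdDesc hmdDesc;
ovrHmd_GetDesc(hmd, &hmdDesc);
ovrFovPort fov = hmdDesc.DefaultEyeFov[ovrEye_Left];
ovrMatrix4f proj = ovrMatrix4f_Projection(fov, 0.01f, 10000.0f, /* rightHanded */ true);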
Brad Davis - Developer for High Fidelity, co-author of Oculus Rift in Action

jherico
Adventurer
"rupy" wrote:
Also are you supposed to move the center of the eyes if the IPD is different than 6,4 cm? It's silly, so many fundamental questions unanswered for almost a year?!


If by 'move the center of the eyes' you're referring to the projection matrix translation, then yes. The projection transformation is completely independent of the IPD; it's designed to ensure that the center of the scene lies under the actual lens axis. Because the lenses produce collimated light, the IPD has little to no impact on the perceived image, other than a little bit of distortion at the edges.
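
Concretely, in the 0.2.x SDK the projection center offset is derived from the screen and lens geometry alone, along the lines below (following the SDK documentation's approach, with DK1 values; the IPD appears nowhere in it). This is also where the ~0.15 value mentioned earlier comes from:

// DK1 HMDInfo values: HScreenSize = 0.14976 m,
// LensSeparationDistance = 0.0635 m
float viewCenter         = hmdInfo.HScreenSize * 0.25f;
float eyeProjectionShift = viewCenter - hmdInfo.LensSeparationDistance * 0.5f;
float projCenterOffset   = 4.0f * eyeProjectionShift / hmdInfo.HScreenSize;
// => 4 * (0.03744 - 0.03175) / 0.14976 ≈ 0.152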
Brad Davis - Developer for High Fidelity, co-author of Oculus Rift in Action

inDigiNeous
Honored Guest
"rupy" wrote:
the snowflakes should align in the lenses, you can directly see that wont work looking at your image which is symmetrical

Thanks for the clarification, rupy. I also noticed this today while going through the example code from https://github.com/OculusRiftInAction/OculusRiftInAction.git.

Looking at jherico's examples, I see that Example_4_3_1_Display2DAspectCorrected produces the same output as I currently have, it seems (although my aspect ratios are different):

[screenshot: Example_4_3_1_Display2DAspectCorrected output]

In Example_4_3_3_Display2dLensCorrected, the stereo images are aligned towards the center so that they overlap correctly with the eyes, and the image looks fine through the Rift:

[screenshot: Example_4_3_3_Display2dLensCorrected output]

So what I am still missing is this offsetting of the left and right sides towards the center: although the view matrix IPD offset, and thus the scene rendering for each eye, is correct, the two views are simply placed in the wrong position on the screen.

Does this sound correct?
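
A sketch of the fix the thread converges on, applied to the setupCamera code above: apply the projection center offset even when not rendering the distorted view, and, because the offset is expressed in normalized device coordinates, pre-multiply it onto the perspective matrix rather than post-multiplying (editorial assumptions, not confirmed code from the thread):

// Shift each eye's image toward its lens axis for plain side-by-side
// stereo as well, not only for the distorted path. Pre-multiplying
// applies the translation in NDC, after the perspective projection.
_projection.top() = glm::translate(glm::mat4(1.0f), projectionOffset) *
                    glm::perspective(yfov, _viewportAspectRatio,
                                     PSIOculus::ZNEAR, PSIOculus::ZFAR);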