Forum Discussion
GoodwinGames
9 years ago · Explorer
Single undistorted view mirroring
Hi everyone, I've been silently lurking these forums since the beginning but this is my first topic.
I'm developing an OpenGL game engine for a project at university, and I have a basic graphics engine with Oculus support using the latest (0.8) SDK. Mirroring works fine, but I really want to mirror a single undistorted view to the monitor instead of the standard stereo view. I know I saw either a forum post or an article a few months back demonstrating exactly how to do this, but I can't for the life of me find it now. Any help would be appreciated.
16 Replies
- Mecrof (Explorer): Did you try manually drawing the framebuffer's color texture (where you render your scene) in your window, as a 2D full-screen plane?
- If you're using two buffer textures (one per eye), just render one of them in your window;
- If you're using only one buffer texture for both views (side by side), you can draw the texture in your window with horizontal texture coordinates [0; 0.5] instead of [0; 1].
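The half-texture idea above can be sketched as follows. This is a hypothetical helper (the names `eyeHalfUVs` and `fullScreenQuad` are mine, not from the SDK): the quad still covers the whole window, and only the U range of the texture coordinates selects one eye from the side-by-side buffer.

```cpp
#include <array>

// UV range for sampling one half of a side-by-side stereo texture:
// left eye = [0, 0.5], right eye = [0.5, 1].
struct QuadUVs { float u0, u1; };

QuadUVs eyeHalfUVs(bool leftEye)
{
    return leftEye ? QuadUVs{0.0f, 0.5f} : QuadUVs{0.5f, 1.0f};
}

// Interleaved position (x, y) + texcoord (u, v) for a triangle-strip
// full-screen quad; upload this to a VBO and draw with a textured shader.
std::array<float, 16> fullScreenQuad(const QuadUVs& uv)
{
    return {
        -1.f, -1.f, uv.u0, 0.f,   // bottom-left
         1.f, -1.f, uv.u1, 0.f,   // bottom-right
        -1.f,  1.f, uv.u0, 1.f,   // top-left
         1.f,  1.f, uv.u1, 1.f,   // top-right
    };
}
```

Note that stretching one eye's half over a full-width window changes the aspect ratio, so you may want to size the window (or viewport) to match a single eye.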
Which library are you using? SDL, GLUT, etc.?
- GoodwinGames (Explorer): Thanks for the reply. I am using GLFW for the windowing system and GLLoadGen for OpenGL, targeting version 4.3 with no extensions.
At the moment I am just using gl::BlitFramebuffer to render out to the window (my Oculus code is quite close to the sample project code), but tonight I will try rendering it out to a plane and changing the coordinates. It seems like a relatively simple and cheap way of solving the problem. I do have separate depth and texture buffers for each eye; however, with the Oculus code I'm not sure how I would render just one of these to the mirror FBO (sorry if that's a stupid question, I've only been doing modern OpenGL for the past four months and I'm still learning).
Here is the code I use for setting up the mirror texture when I create the window:
ovrResult result = ovr_CreateMirrorTextureGL(CameraVR::GetHMD(), gl::SRGB8_ALPHA8, windowSize.w, windowSize.h, reinterpret_cast<ovrTexture**>(&m_MirrorTexture));
if (!OVR_SUCCESS(result))
{
    std::cout << "Failed to create mirror texture." << std::endl;
    return false;
}
gl::GenFramebuffers(1, &m_MirrorFBO);
gl::BindFramebuffer(gl::READ_FRAMEBUFFER, m_MirrorFBO);
gl::FramebufferTexture2D(gl::READ_FRAMEBUFFER, gl::COLOR_ATTACHMENT0, gl::TEXTURE_2D, m_MirrorTexture->OGL.TexId, 0);
gl::FramebufferRenderbuffer(gl::READ_FRAMEBUFFER, gl::DEPTH_ATTACHMENT, gl::RENDERBUFFER, 0);
gl::BindFramebuffer(gl::READ_FRAMEBUFFER, 0);
And this is the code I call at the end of my render loop to render to the Oculus and mirror to the screen:
ovrViewScaleDesc viewScaleDesc = Greenhorn::Graphics::CameraVR::GetOculusViewScaleDesc();
ovrLayerEyeFov ld = Greenhorn::Graphics::CameraVR::GetOculusLayerEyeFov();
ovrLayerHeader* layers = &ld.Header;
ovrResult result = ovr_SubmitFrame(CameraVR::GetHMD(), 0, &viewScaleDesc, &layers, 1);
gl::BindFramebuffer(gl::READ_FRAMEBUFFER, m_MirrorFBO);
gl::BindFramebuffer(gl::DRAW_FRAMEBUFFER, 0);
GLint w = m_MirrorTexture->OGL.Header.TextureSize.w;
GLint h = m_MirrorTexture->OGL.Header.TextureSize.h;
gl::BlitFramebuffer(0, h, w, 0, 0, 0, w, h, gl::COLOR_BUFFER_BIT, gl::NEAREST);
gl::BindFramebuffer(gl::READ_FRAMEBUFFER, 0);
glfwSwapBuffers(m_Window);
glfwPollEvents();
Also, is there a way to capture the frame before it gets distorted by the SDK, so I can get an undistorted image without re-rendering the scene? I've seen this done in some games, but maybe they are applying inverse distortion to the final image?
- GoodwinGames (Explorer): I'm such an idiot: using BlitFramebuffer and halving the source width works well to render mono to the monitor. Still less efficient than binding one eye's texture buffer to the mirror FBO, I would guess?
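For reference, the half-width blit can be sketched like this. The helper name `monoMirrorSrcRect` is mine; the mirror texture from `ovr_CreateMirrorTextureGL` is vertically flipped, so the Y coordinates stay swapped exactly as in the `gl::BlitFramebuffer(0, h, w, 0, ...)` call above.

```cpp
// Source rectangle covering only the left half of the side-by-side
// mirror texture, with the Y flip preserved.
struct Rect { int x0, y0, x1, y1; };

Rect monoMirrorSrcRect(int mirrorW, int mirrorH)
{
    // Read X in [0, mirrorW/2); swap Y to undo the vertical flip.
    return Rect{0, mirrorH, mirrorW / 2, 0};
}

// Usage (inside the render loop, after ovr_SubmitFrame):
//   Rect s = monoMirrorSrcRect(w, h);
//   gl::BlitFramebuffer(s.x0, s.y0, s.x1, s.y1,    // left half of mirror
//                       0, 0, windowW, windowH,    // stretched over window
//                       gl::COLOR_BUFFER_BIT, gl::LINEAR);
```

As with the quad approach, stretching the half-width source over a full-width window alters the aspect ratio unless the window matches a single eye's proportions.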
- kondrak (Protege): I was looking for exactly the same thing several months ago, and since nobody had posted a solution anywhere, I wrote my own code:
https://github.com/kondrak/oculusvr_sam ... irrorModes
This demo program cycles through the standard stereo mode, a distorted single-eye view, non-distorted stereo, and a non-distorted single view.
I'm not sure there's a way to capture the non-distorted image, so re-rendering seems to be necessary. I'd be keen to know myself whether it's possible to apply inverse distortion somehow; that would greatly reduce the performance cost compared to the current solution.
- Mecrof (Explorer): @kondrak: I haven't seen your code yet, but isn't it possible to just render the Oculus' swap texture as a normal flat texture in the window? Something like this:
// Bind the framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, fboID);
// Attach the current swap texture as color attachment 0
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex->OGL.TexId, 0);
// Attach our depth buffer as the depth attachment
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthBuffID, 0);
// [
//   for each eye, render that eye's view here
// ]
// Bind the window's default framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// Helper that draws a full-screen quad in the window from an OpenGL texture id
drawFullScreenQuad(tex->OGL.TexId);
// ... some code
// Then submit the frame to the Oculus
result = ovr_SubmitFrame(hmd, 0, &viewScaleDesc, &layers, 1);
I've never tried this and I don't have time to test it now. :?
- kondrak (Protege): Doing that was the first thing I tried back then, but it only produced the already-distorted image for me (that was two SDKs ago, and I haven't seen anything in the changelog indicating this behavior has changed).
The way I see it, once the swap buffers are bound and the scene is rendered, the image in both eye textures is already distorted, hence there is no way to intercept the original image.
- Mecrof (Explorer): OK, good to know, thanks :)
So the only way is to render the scene into an OpenGL texture that you later render into the Oculus' framebuffer and then into the window's buffer? So 3 draw passes instead of "2".
- kondrak (Protege): Unless someone proves me wrong: yes, that seems to be the case. I suspect this is why the concept of a mirror texture was created, so that the user doesn't have to manually perform any extra renders to get output in a desktop window.
The worst case would be performing *4* renders: twice to get the output in OVR and twice to get a non-distorted stereo image (if you want to account for different projection and view matrices per eye; this in fact helped me discover several stereo glitches in a couple of projects that I could not see with the distorted mirror texture). Three renders suffice for a single-eye non-distorted view, which is what we want here.
- Mecrof (Explorer): Well, for debugging purposes this is not really an issue. We can use a small window instead of a 1080p window.
Maybe it's possible to render to the window at 60 fps instead of 75 fps (i.e. not draw every frame in the window). But this is ugly :lol:
- kondrak (Protege): With some trickery you can actually cap it to 60 FPS, but that won't do you any good if you want a smooth experience in the goggles. I did in fact recently record OVR output at 30 FPS (Fraps will do it automatically for you), but it was quite nauseating.
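The frame-skip idea could look something like this minimal sketch (the `MirrorThrottle` type is hypothetical, not part of any SDK): keep submitting to the HMD every frame, and only update the desktop mirror every Nth frame so the window costs less without stalling the headset.

```cpp
// Decides on which frames the desktop mirror blit should run.
struct MirrorThrottle
{
    explicit MirrorThrottle(int n) : interval(n), frameCount(0) {}

    // Returns true once every `interval` calls (i.e. every Nth frame).
    bool shouldBlit()
    {
        return (frameCount++ % interval) == 0;
    }

    int interval;
    int frameCount;
};

// Usage in the render loop (after ovr_SubmitFrame):
//   static MirrorThrottle throttle(2);   // mirror at half the HMD rate
//   if (throttle.shouldBlit()) {
//       /* bind m_MirrorFBO and gl::BlitFramebuffer as before */
//   }
//   glfwSwapBuffers(m_Window);           // still swap every frame
```

As kondrak notes, this only helps the desktop side; the HMD submission must stay at full rate.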