OpenGL: retrieving the framebuffer of an attached window

jfeh
Honored Guest
Hi,

As I mentioned in the midst of another topic, I would like to know if and how it is possible to retrieve the framebuffer's content currently displayed in the Rift at every frame (e.g. using glReadPixels, glCopyTexImage2D etc.)?
SDK is 0.4.4 in OpenGL mode under Windows using the DirectToRift mode ("ovrHmd_AttachToWindow") with client distortion rendering. Basically, I would like to copy the Rift's screen content to the GPU's VRAM (or even better, get the existing OpenGL texture ID if the screen has been filled by texturing a quad in OpenGL).
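The readback being asked about would boil down to something like the following. This is only a sketch under the assumption of a bound default framebuffer with an RGBA8 backbuffer; the GL calls are shown as comments so that only the size arithmetic is live code, and `readback_bytes` is a hypothetical helper, not part of any SDK:

```c
#include <stddef.h>

/* Intended call sequence (requires a current GL context):
 *
 *   glBindFramebuffer(GL_FRAMEBUFFER, 0);
 *   glReadBuffer(GL_BACK);
 *   glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
 *
 * where `pixels` points to a client-side buffer of readback_bytes(w, h)
 * bytes (or a PBO of that size for an asynchronous readback).
 */
size_t readback_bytes(int width, int height)
{
    return (size_t)width * (size_t)height * 4; /* RGBA, 1 byte per channel */
}
```

For the Rift's 1920*1080 panel that is roughly 8 MB per frame, which is why getting the existing texture ID (no copy) would be preferable.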

For details, see below (it is a partial copy/paste from the topic "OpenGL context management VS Oculus").

Could you please elaborate on the latter point?
Is it a global side effect? I.e., in a process that has called "ovr_InitializeRenderingShim" at start-up, will every OpenGL context created afterwards lose the convention that index 0 refers to its default framebuffer, regardless of whether its window is attached to the HMD?

I am having an issue when trying to bind the default framebuffer on the OpenGL context that serves my main window (this window being attached to the Rift via "ovrHmd_AttachToWindow").
Up until the first call to "ovrHmd_GetEyePoses" or "ovrHmd_EndFrame", I can successfully rebind my default framebuffer. However, as soon as either of these two functions has been called, glBindFramebuffer(GL_FRAMEBUFFER, 0) returns with no error, but querying the current framebuffer with glGetIntegerv(GL_READ_FRAMEBUFFER_BINDING, &fbo) consistently yields 2. What made me realize this was a call to glReadBuffer(GL_FRONT) (same for GL_BACK) failing with GL_INVALID_OPERATION: any framebuffer with a non-zero index is an FBO, and thus only GL_COLOR_ATTACHMENT$i values are valid arguments.
It is as if the OpenGL context had "lost" its default framebuffer at runtime. (Note: since Nsight cannot be launched while libOVR is initialized, I cannot truly analyze the situation; any suggestions for proper OpenGL profiling with the Rift are welcome.)
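For reference, the rule behind that GL_INVALID_OPERATION can be modeled as a tiny predicate. This is a hypothetical helper, not SDK code; the binding value it takes is what glGetIntegerv(GL_READ_FRAMEBUFFER_BINDING, &fbo) returns:

```c
#include <stdbool.h>

/* glReadBuffer(GL_FRONT) or glReadBuffer(GL_BACK) is only legal when the
 * default framebuffer (binding 0) is bound for reading; any non-zero
 * binding names an FBO, for which only GL_NONE and GL_COLOR_ATTACHMENTi
 * are accepted.  So a binding of 2, as observed above, makes GL_FRONT and
 * GL_BACK invalid arguments.
 */
bool front_back_read_is_valid(int read_framebuffer_binding)
{
    return read_framebuffer_binding == 0;
}
```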

If you do have an explanation, I would be really grateful as understanding this point is of great importance regarding the integration of the DK2 in our software.

As to why I am trying to do this: I realized that when attaching the window to the HMD via "ovrHmd_AttachToWindow", one has to configure rendering with config.OGL.Header.BackBufferSize set to the window size and not the optimal Rift resolution (1920*1080):
- Setting the Rift to optimal resolution (BackBufferSize = {1920,1080}) and attaching it to a smaller window clips/truncates the 2 distorted views to the window's viewport (i.e. no stereoscopy, just rubbish in the HMD). So obviously, no minification is done inside libOVR.
- If the window is smaller than the Rift's optimal resolution but the Rift is configured accordingly (BackBufferSize = {window.w,window.h}), stereoscopy works but rendering quality degrades (screen-space undersampling).
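The quality loss in the second case can be quantified: if the attached window's pixels end up stretched over the full 1920*1080 panel, each axis is undersampled by the ratio of the panel size to the window size. A small sketch of that arithmetic (hypothetical helper, assuming the DK2's 1920*1080 panel):

```c
#define RIFT_W 1920
#define RIFT_H 1080

/* Per-axis stretch factor when a window of the given dimension is scaled
 * up to the Rift panel.  A value above 1.0 means each window pixel covers
 * more than one panel pixel, i.e. visible undersampling in the HMD. */
double stretch_x(int win_w) { return (double)RIFT_W / win_w; }
double stretch_y(int win_h) { return (double)RIFT_H / win_h; }
```

A 960*540 window, for instance, gives a 2x stretch on both axes, which matches the "80's revival" look described later in the thread.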

So in order to benefit from the full HMD resolution, and thus optimal rendering quality, without being constrained to show a 1920*1080 window on screen, I tried to render to the Rift via a dedicated hidden window that is always at the optimal resolution (hmd->Resolution).
If mirroring of the 2 views is required by the application (because it rocks), one can copy the Rift's framebuffer to a texture and map it onto the full viewport of another -visible- window (e.g. an 800*600 OpenGL widget inside a full UI).
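For that mirroring step, the widget viewport would normally be chosen so the 1920*1080 content keeps its aspect ratio. A sketch of that placement math (the `mirror_viewport` helper is hypothetical, not SDK code; the resulting rectangle would be handed to glViewport before texturing a fullscreen quad with the copied texture):

```c
typedef struct { int x, y, w, h; } Viewport;

/* Aspect-preserving "letterbox" rectangle for mirroring 1920x1080 content
 * into a smaller widget (e.g. an 800x600 GL view inside a UI). */
Viewport mirror_viewport(int widget_w, int widget_h)
{
    const double src_aspect = 1920.0 / 1080.0;
    Viewport vp;
    if ((double)widget_w / widget_h > src_aspect) {
        /* widget is wider than 16:9 -> pillarbox */
        vp.h = widget_h;
        vp.w = (int)(widget_h * src_aspect + 0.5);
    } else {
        /* widget is taller than 16:9 -> letterbox */
        vp.w = widget_w;
        vp.h = (int)(widget_w / src_aspect + 0.5);
    }
    vp.x = (widget_w - vp.w) / 2;
    vp.y = (widget_h - vp.h) / 2;
    return vp;
}
```

An 800*600 widget, for example, would show the mirror as an 800*450 strip centered vertically.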
This should be possible as when tracing "ovrHmd_ConfigureRendering" and "ovrHmd_EndFrame" we can see that an OpenGL context is created by libOVR for distortion rendering, this context sharing the data with our original context passed inside the ovrGLConfig structure.
However, as we cannot trace "ovrHmd_AttachToWindow" (socket serialization towards the service), we cannot know how the HWND of the attached window is used. Does it have to have an OpenGL context? Will the service, for example, set an adequate PFD format and create a context shared with the distortion context (for accessing the distorted view's internal textures)? Is the content of the attached window copied at each frame to the native framebuffer of the Rift (this would explain the previous undersampling/truncation)? Or is the attached window "physically" bound to the hardware (i.e. no copy)?
Any information regarding the internals of "ovrHmd_AttachToWindow" that would help proper SDK use would be greatly appreciated.


Oculus Team, any explanations about the SDK and the runtime service internals would truly be appreciated, or even a suggestion as to how to achieve the same goal with another technical solution.
Jeff.
11 REPLIES

lamour42
Expert Protege
Hi,

I am a DirectX programmer, so I don't know what is different in OpenGL mode. But in my code window size has nothing to do at all with the rift: in client distortion rendering with direct mode you have full control of back buffers - especially their size. I also have full control of the content rendered to the Rift, which is the whole point of doing client rendering. The desktop window is just a regular window: I can give it any size, and even fullscreen works. In DX11 the connection from the drawing pipeline to the window is made during Direct3D device and swap chain creation. From that point on, the content of the window is filled with every flip of the swap chain (the Present() call in DX11). This is no different from regular DX11 rendering without the Rift.

jherico
Adventurer
"lamour42" wrote:
I am a DirectX programmer, so I don't know what is different in OpenGL mode. But in my code window size has nothing to do at all with the rift: in client distortion rendering with direct mode you have full control of back buffers - especially their size.


OpenGL is indeed different. At least on my nVidia system, the resolution of the Rift is directly tied to the size of the on-screen window you're attached to. You can make the window small, but it negatively impacts the Rift image, because it's ending up stretching however many pixels are in your window over the whole of the Rift screen.

jherico
Adventurer
"jfeh" wrote:
As I mentioned in the midst of another topic, I would like to know if and how it is possible to retrieve the framebuffer's content currently displayed in the Rift at every frame (e.g. using glReadPixels, glCopyTexImage2D etc.)?
SDK is 0.4.4 in OpenGL mode under Windows using the DirectToRift mode ("ovrHmd_AttachToWindow") with client distortion rendering. Basically, I would like to copy the Rift's screen content to the GPU's VRAM (or even better, get the existing OpenGL texture ID if the screen has been filled by texturing a quad in OpenGL).


I'm not clear on what the problem would be here. To distort the image you have to start with a texture that's been rendered. If you want the contents pre-distortion then you already have the handle of the texture you want to read. If you want the post-distortion pixels, then just attach an FBO, render the distortion mesh, and then you have the results in the FBO attached texture. You can then take that texture and render it to a full screen quad on the Rift. This will of course break timewarp. Alternatively, you could render the distortion mesh, and then render it again with timewarp support or use SDK side distortion. The incremental cost of rendering distortion twice is going to be trivial compared to the cost of the rest of the rendering pipeline.
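As a sketch of the render-target sizing this advice implies: the per-eye texture sizes would come from ovrHmd_GetFovTextureSize, and a single shared target packs the two eye views side by side, as the SDK samples do. The helper itself is hypothetical:

```c
typedef struct { int w, h; } Size;

/* Combined size for one render target holding both eye views side by
 * side: widths add, height is the larger of the two.  Each Size would be
 * the result of ovrHmd_GetFovTextureSize() for that eye; the distortion
 * mesh is then rendered from this texture into an FBO-attached texture
 * if the post-distortion pixels are wanted. */
Size combined_target_size(Size left, Size right)
{
    Size s;
    s.w = left.w + right.w;
    s.h = left.h > right.h ? left.h : right.h;
    return s;
}
```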

lamour42
Expert Protege
"jherico" wrote:
"lamour42" wrote:
I am a DirectX programmer, so I don't know what is different in OpenGL mode. But in my code window size has nothing to do at all with the rift: in client distortion rendering with direct mode you have full control of back buffers - especially their size.


OpenGL is indeed different. At least on my nVidia system, the resolution of the Rift is directly tied to the size of the on-screen window you're attached to. You can make the window small, but it negatively impacts the Rift image, because it's ending up stretching however many pixels are in your window over the whole of the Rift screen.


Wow, what a serious drawback of OpenGL mode compared to DirectX. I would even consider it a bug, because neither the dev guide nor the header files mention this. So you render into a texture of arbitrary size but then get crippled when it is copied to the Rift? Strange indeed. What exactly happens if you call ovrHmd_ConfigureRendering() with cfg.OGL.Header.BackBufferSize set to your render texture size? Does it give any errors?

jfeh
Honored Guest
Thanks jherico for confirming my point. It is indeed straightforward to reproduce with a simple test program that renders to the optimal texture sizes (returned by ovrHmd_GetFovTextureSize when given the optimal FOV stored in m_hmd->DefaultEyeFov and a pixelsPerDisplayPixel value of 1.0) but with different window sizes (at very low resolutions, it is kind of an 80's revival for CG :-)).

There is a point that is still not clear in my opinion: when attaching a window to the HMD, is it the content of the window's backbuffer that is copied to the Rift's screen, or vice versa?
- The SDK states for "ovrHmd_AttachToWindow":
Platform specific function to specify the application window whose output will be displayed on the HMD
This is consistent with what jherico and I observed (quality degrades as the window size diminishes).
- However, the documentation of the flag "ovrHmdCap_NoMirrorToWindow" states that it
Disables mirroring of HMD output to the window. This may improve rendering performance slightly (only if 'ExtendDesktop' is off).
This implies that rendering is done internally in the Rift and may then be copied to the destination window - unless it only means that a SwapBuffers is not performed on the window's device context.

These two statements contradict each other in my opinion; could anyone from the technical team at Oculus shed some light on this point?

@jherico: given the latest version of the SDK in OpenGL, would you have any recommendation for rendering to the Rift at optimal quality (i.e. a backbuffer of 1920*1080) while mirroring the 2 distorted views in a window of lesser size (mirroring is mandatory in my case)? This latter window would typically be an OpenGL view (e.g. 800*600) encapsulated in a full UI. Without access to the framebuffer of the attached window (which I originally planned to hide), this does not seem feasible with a Direct to Rift approach. Would extended mode be a viable option in your opinion, by the way?

Thanks,
Jeff.

jherico
Adventurer
"jfeh" wrote:
Thanks jherico for confirming my point. It is indeed straightforward to reproduce with a simple test program that renders to the optimal texture sizes (returned by ovrHmd_GetFovTextureSize when given the optimal FOV stored in m_hmd->DefaultEyeFov and a pixelsPerDisplayPixel value of 1.0) but with different window sizes (at very low resolutions, it is kind of an 80's revival for CG :-)).

There is a point that is still not clear in my opinion: when attaching a window to the HMD, is it the content of the window's backbuffer that is copied to the Rift's screen, or vice versa?


With Direct3D, the SDK behaves as documented. The on-screen window isn't critical to the display, doesn't need to be visible, and its size does not impact the Rift quality. With OpenGL, the on-screen window must be the same size as the Rift display.

My assumption is that because of the piss-poor way Windows treats OpenGL (supporting only the 1.1 API natively), much more of the OpenGL framework ends up in the video driver compared to Direct3D. With Direct3D, Oculus probably has more opportunity to intercept images at different points in the pipeline, negating the need for an on-screen window that is the same size as the Rift display, or even visible.

With OpenGL, Oculus is more limited in terms of when they can grab the screen image and transfer it to the display, leading to the limitations we see.

lamour42
Expert Protege
"jherico" wrote:

With OpenGL, Oculus is more limited in terms of when they can grab the screen image and transfer it to the display, leading to the limitations we see.


That is certainly true for SDK rendered mode. But for client rendering there is no 'screen grabbing', right? The original post was about client rendering.

jherico
Adventurer
"lamour42" wrote:
"jherico" wrote:

With OpenGL, Oculus is more limited in terms of when they can grab the screen image and transfer it to the display, leading to the limitations we see.


That is certainly true for SDK rendered mode. But for client rendering there is no 'screen grabbing', right? The original post was about client rendering.


No, the issue of getting the pixels from GPU memory to the Rift display is completely orthogonal to whether you're doing SDK distortion or client distortion (I dislike using the term 'client rendering' since all apps do rendering of some sort on the client).

Essentially, when you call ovrHmd_AttachToWindow you're causing the Oculus runtime (not the SDK) to change the way the swap buffers call works: forcing it to sync to the Rift refresh rate, rather than the refresh rate of the screen on which the proxy window is visible, and causing the pixels to appear there (as well as on the proxy window, though possibly only after additional latency). This is the case whether you're calling swap buffers yourself (client distortion) or letting the SDK do so (SDK distortion).

lamour42
Expert Protege
Hi,

thanks for your explanation. You are right, I read much more into the difference between SDK rendering and client distortion rendering than there actually is.

What I now understand even less is why it should not work to set cfg.OGL.Header.BackBufferSize to the back buffer size instead of the window size. What the Oculus runtime does should be exactly the same regardless of whether the content was drawn by OpenGL or DirectX, right? And this definitely works in DirectX.

Another question: is there some place where you discuss the pros and cons of the hybrid OpenGL/DirectX approach of your (excellent!) example https://github.com/OculusRiftInAction/OculusRiftInAction/blob/master/examples/cpp/experimental/Examp...? It might even get an old DirectX fan like myself interested in trying out OpenGL. 😛