Forum Discussion

🚨 This forum is archived and read-only. To submit a forum post, please visit our new Developer Forum. 🚨
jeyjey
Honored Guest
12 years ago

Minimal Example Crashes on Linux

Hey guys,

I'm new to OpenGL, and new to everything concerning game programming, so please forgive my poor OpenGL code and knowledge.

I just wanted to write a very small example using the OculusSDK that runs on both Windows and Linux.
I'm using:
- OculusSDK 0.3.2
- glfw
- glew

It works on Windows but keeps crashing on Linux, and I have no clue why. I tried to debug it with "gDEBugger", but that debugger itself crashes on startup with a segmentation fault (tested on three different machines running Ubuntu 14.04).

I should also mention that on Windows I get OpenGL errors after each frame call, coming directly from:
OVR::CAPI::GL::DistortionRenderer::GraphicsState::Restore - CAPI_GL_DistortionRenderer.cpp, line 380

Error:
Reason: OpenGL Error
Break on: glBindFramebufferEXT(GL_FRAMEBUFFER, 1)
Error code: GL_INVALID_OPERATION
Description: The specified operation is not allowed in the current state. The offending function is ignored, having no side effect other than to set the error flag.


Please, can anyone help me? I've been stuck for a week now and am very frustrated...



The Code:

#include <iostream>
#include <stdexcept>
#include <cstring>

#include <GL/glew.h>
#include <GLFW/glfw3.h>

#define OVR_OS_WIN32   // note: hard-coded; on Linux this should be OVR_OS_LINUX
#include <OVR_CAPI.h>
#include <OVR_CAPI_GL.h>


int main()
{
    // ------------------------- initialize ovr-lib ------------------------------------

    ovr_Initialize();

    // initialize connected Oculus Rift
    ovrHmd hmd = ovrHmd_Create(0);

    // if no Rift is connected, fall back to the virtual debug HMD
    if (!hmd)
    {
        fprintf(stderr, "failed to open Oculus HMD, falling back to virtual debug HMD\n");
        if (!(hmd = ovrHmd_CreateDebug(ovrHmd_DK1)))
        {
            fprintf(stderr, "fatal error: failed to create virtual debug HMD - program exit\n");
            return -1;
        }
    }

    ovrHmdDesc _hmdDesc;
    ovrHmd_GetDesc(hmd, &_hmdDesc);


    // ------------------------- initialize GLFW ------------------------------------

    if (!glfwInit())
        throw std::runtime_error("glfwInit() failed!");

    // set profile
    glfwWindowHint(GLFW_SAMPLES, 4);                     // 4x antialiasing
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);       // we want OpenGL 3.3
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); // to make macOS happy; should not be needed
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); // we don't want the old OpenGL

    // open a window and create its OpenGL context
    GLFWwindow* _window = glfwCreateWindow(1280, 800, "glRenderWindow!", NULL, NULL);
    if (_window == NULL)
    {
        glfwTerminate();
        throw std::runtime_error("Failed to open GLFW window. If you have an Intel GPU, it may not be 3.3 compatible. I'm sorry!");
    }

    glfwMakeContextCurrent(_window);

    printf("initialized glfw\n");


    // ------------------------- initialize GLEW ------------------------------------

    glewExperimental = true; // needed in core profile
    if (glewInit() != GLEW_OK)
        throw std::runtime_error("Failed to initialize GLEW!");

    // make sure OpenGL 3.2 is available
    if (!GLEW_VERSION_3_2)
        std::cout << "OpenGL 3.2 is not available" << std::endl;

    printf("initialized glew\n");


    // ------------------------- graphics-driver info -------------------------------

    std::cout << "OpenGL version: " << glGetString(GL_VERSION) << std::endl;
    std::cout << "GLSL version: " << glGetString(GL_SHADING_LANGUAGE_VERSION) << std::endl;
    std::cout << "Vendor: " << glGetString(GL_VENDOR) << std::endl;
    std::cout << "Renderer: " << glGetString(GL_RENDERER) << std::endl;


    // ------------------------- initialize framebuffer -------------------------------

    GLuint _framebufferID = 0;
    glGenFramebuffers(1, &_framebufferID);
    glBindFramebuffer(GL_FRAMEBUFFER, _framebufferID);

    GLuint _renderTextureBufferID = 0;
    glGenTextures(1, &_renderTextureBufferID);
    glBindTexture(GL_TEXTURE_2D, _renderTextureBufferID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1280, 800, 0, GL_RGB, GL_UNSIGNED_BYTE, 0);
    // poor filtering, but needed
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, _renderTextureBufferID, 0);
    glBindTexture(GL_TEXTURE_2D, 0);

    GLuint _depthTextureBufferID = 0;
    glGenRenderbuffers(1, &_depthTextureBufferID);
    glBindRenderbuffer(GL_RENDERBUFFER, _depthTextureBufferID);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, 1280, 800);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER,
                              GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER,
                              _depthTextureBufferID);

    // set the list of draw buffers
    GLenum _drawBuffer[1];
    _drawBuffer[0] = GL_COLOR_ATTACHMENT0;
    glDrawBuffers(1, _drawBuffer); // "1" is the size of _drawBuffer

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        throw std::runtime_error("framebuffer initialization went wrong!");

    glBindFramebuffer(GL_FRAMEBUFFER, 0);


    // ------------------------- initialize Oculus OpenGL API config -------------------------------

    static ovrGLConfig _ovrGLConfig;
    memset(&_ovrGLConfig, 0, sizeof _ovrGLConfig);

    _ovrGLConfig.OGL.Header.API = ovrRenderAPI_OpenGL;
    _ovrGLConfig.OGL.Header.Multisample = 1;
    ovrSizei rtsize; rtsize.h = 800; rtsize.w = 1280;
    _ovrGLConfig.OGL.Header.RTSize = rtsize;

    ovrGLTexture _eyeTexture[2];

    for (int i = 0; i < 2; i++)
    {
        _eyeTexture[i].OGL.Header.API = ovrRenderAPI_OpenGL;
        _eyeTexture[i].OGL.Header.TextureSize.w = rtsize.w;
        _eyeTexture[i].OGL.Header.TextureSize.h = rtsize.h;

        _eyeTexture[i].OGL.Header.RenderViewport.Pos.x = 0;         // i == 0 ? 0 : hmdDesc.Resolution.w / 2;
        _eyeTexture[i].OGL.Header.RenderViewport.Pos.y = 0;
        _eyeTexture[i].OGL.Header.RenderViewport.Size.w = rtsize.w; // hmdDesc.Resolution.w / 2;
        _eyeTexture[i].OGL.Header.RenderViewport.Size.h = rtsize.h;
        _eyeTexture[i].OGL.TexId = _framebufferID; /* both eyes use the same texture id */
    }

    // configure SDK rendering and enable chromatic aberration correction and vignetting
    ovrEyeRenderDesc eyeRenderDesc[2];
    int dcaps = ovrDistortionCap_Chromatic | ovrDistortionCap_Vignette;

    if (!ovrHmd_ConfigureRendering(hmd, &_ovrGLConfig.Config, dcaps, _hmdDesc.DefaultEyeFov, eyeRenderDesc))
        throw std::runtime_error("failed to configure distortion renderer");


    // ------------------------- final rendering loop -------------------------------

    while (glfwGetKey(_window, GLFW_KEY_ESCAPE) != GLFW_PRESS &&
           glfwWindowShouldClose(_window) == 0)
    {
        ovrFrameTiming hmdFrameTiming = ovrHmd_BeginFrame(hmd, 0);
        glBindFramebuffer(GL_FRAMEBUFFER, _framebufferID);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        for (int eyeIndex = 0; eyeIndex < ovrEye_Count; ++eyeIndex)
        {
            ovrEyeType eye = _hmdDesc.EyeRenderOrder[eyeIndex];
            ovrPosef eyePose = ovrHmd_BeginEyeRender(hmd, eye);

            /*********** finally draw the scene **********/
            // first we clear the drawing surface
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glClearColor(1.f, 1.f, 0.5f, 1.0f);
            /**********************************************/

            ovrHmd_EndEyeRender(hmd, eye, eyePose, &_eyeTexture[eye].Texture);

            // check for errors
            GLenum err = glGetError();
            if (err != GL_NO_ERROR)
                fprintf(stderr, "Error: %s (%d)\n", glewGetErrorString(err), err);
        }

        // let OVR do distortion rendering, present, and flush/sync
        ovrHmd_EndFrame(hmd);

        // GLFW3 requires that we manually call glfwPollEvents() to process non-graphical events like key presses
        glfwPollEvents();
    }
}

6 Replies

  • DanB
    Honored Guest
    I haven't looked into how Oculus is using FBOs in the OVR library, but here's a wild guess:

    Inside the library, I spotted:
    glBindFramebuffer =                 (PFNGLBINDFRAMEBUFFERPROC)                 GetFunction("glBindFramebufferEXT"); 

    However, glBindFramebuffer & glBindFramebufferEXT are not aliases for each other, so there could perhaps be issues arising from this.

    For example, from the GL_ARB_framebuffer_object extension in the registry:

    Dependencies on EXT_framebuffer_object

    Framebuffer objects created with the commands defined by the
    GL_EXT_framebuffer_object extension are defined to be shared, while
    FBOs created with commands defined by the OpenGL core or
    GL_ARB_framebuffer_object extension are defined *not* to be shared.
    Undefined behavior results when using FBOs created by EXT commands
    through non-EXT interfaces, or vice-versa.

    #36, September 23, 2013: Jon Leech
    - Specify that undefined behavior results when mixing EXT and
    ARB_framebuffer_object / OpenGL 3.0 API framebuffer objects
    (Bug 10738).


    Oculus really shouldn't be switching ARB and EXT framebuffer function names in this way. It would be okay for most extension functions (check first that they really are aliases), but not for these.

    I haven't looked into what OVR::CAPI::GL::DistortionRenderer::GraphicsState::Restore does, but since the error happens there, perhaps you could either:
    1) add the ovrDistortionCap_NoRestore cap if you can restore the state yourself (easy with the compatibility profile, virtually impossible in core unless you know exactly what might be modified by code that is not under your control)
    2) use glGenFramebuffersEXT to generate your FBOs, so that everything uses the EXT extension (losing the extra ARB functionality). This might stop working if OVR switches to the ARB extension instead.
    3) modify the OVR library to use the ARB framebuffer object extension - it might then work on a different (possibly smaller?) range of machines, depending on which extensions are supported.


    Edit: Perhaps you should try SDK 0.4.3 instead of 0.3.2 first of all, to see if the problem is already fixed.
  • jeyjey
    Honored Guest
    Thanks a lot for your help! Unfortunately, nothing worked...

    In detail:
    With OculusSDK 0.4.3: still the same problem.
    @1) I don't know enough about OpenGL to restore a state myself.
    @2) Didn't work :(
    @3) I don't want to modify the OVR lib.

    So I wrote my own distortion shader (with some inspiration from SDK 0.2.5 :) ). It works on Windows, Linux, and Mac OS X - and that's currently enough for my purposes.
  • We (Oculus) revised the handling of glBindFramebuffer in 0.4.4 (currently in beta testing) to be consistent. There were some related direct mode issues that were also fixed.
  • Syylk
    Honored Guest
    "paul.pedriana" wrote:
    There were some related direct mode issues that were also fixed.


    Direct Mode for Linux, too?
  • "Syylk" wrote:
    "paul.pedriana" wrote:
    There were some related direct mode issues that were also fixed.


    Direct Mode for Linux, too?


    IIRC they said that wasn't happening, although in my experience Linux seems to handle OVR applications much better than Windows does (especially with Alt+Drag, which is a godsend). It would be kinda cool, though, to see the SDK implement a way to create a separate X server for the Rift and then force apps over there.
  • gfyffe
    Honored Guest
    "paul.pedriana" wrote:
    We (Oculus) revised the handling of glBindFramebuffer in 0.4.4 (currently in beta testing) to be consistent.


    I've noticed that if I want to draw some ordinary OpenGL in my window, after using it for Oculus drawing, I have to call glBindFramebuffer(GL_FRAMEBUFFER, 0) every single event loop before I draw anything. I mean, even when I have completely stopped calling Oculus related functions, I have to call glBindFramebuffer every loop even just to draw a single spinning triangle. I take this to mean the Oculus SDK still has its hands on some OpenGL hooks and it's going ahead and binding some other framebuffer every loop iteration for some reason.

    This still counts as state leak in my book. Is there anything else going on under the hood, that might need cleanup?

    [edit] Oh, I just noticed: even though I can render normally after unbinding the framebuffer, my original stencil buffer appears to have been blown away somehow. No stencil operations work, even though they did before drawing on the Oculus.

    BTW, I'm on Windows 7 using direct mode display and OpenGL.

    [edit] So I made a workaround where I render into another FBO and blit that, which doesn't require a stencil on the window's buffer. That hints to me that there must be some way to get it working without the extra FBO, if I could allocate the OVR framebuffer with a stencil appropriately (though I tried lots of combinations and nothing worked for me).
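    For readers hitting the same stencil issue, the workaround described above might be sketched roughly like this. This is an untested sketch assuming a live GL 3.x context; sceneFbo, width, and height are placeholders for your own objects:

```cpp
// Per frame, after all Oculus calls are done: render the regular scene
// into a private FBO (which can carry its own depth+stencil attachment).
glBindFramebuffer(GL_FRAMEBUFFER, sceneFbo);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
// ... draw the ordinary (non-Rift) scene here, stencil ops included ...

// Copy the result to the window. No stencil is needed on the default
// framebuffer for this, and binding 0 as the draw target also undoes
// whatever framebuffer the SDK left bound.
glBindFramebuffer(GL_READ_FRAMEBUFFER, sceneFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height,
                  0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
```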