Forum Discussion
Vic3Dexe (Honored Guest)
11 years ago
Win32/GL ovrHmd_ConfigureRendering crash
#define SDK_RENDER 1
HDC dc;
HWND WndHandle;
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
ovr_Initialize();
hmd = ovrHmd_Create(0);
//... window/context creation; that part works by itself
ovrGLConfig cfg;
cfg.OGL.Header.API = ovrRenderAPI_OpenGL;
cfg.OGL.Header.RTSize = OVR::Sizei(hmd->Resolution.w,hmd->Resolution.h);
cfg.OGL.Header.Multisample = 0;
cfg.OGL.Window = WndHandle;
cfg.OGL.DC = dc;
ovrFovPort eyeFov[2] = {hmd->DefaultEyeFov[0],hmd->DefaultEyeFov[1]};
ovrEyeRenderDesc EyeRenderDesc[2];
ovrHmd_ConfigureRendering(hmd,&cfg.Config,
ovrDistortionCap_Chromatic|ovrDistortionCap_TimeWarp|ovrDistortionCap_Overdrive,
eyeFov,EyeRenderDesc); //<--------- crash, stack fault, C000005 (access violation)
...
}
What am I doing wrong?
I can't find a solution either in 'oculus_developer_guide.pdf' or in the samples (they are mostly for D3D11).
ovrHmd_AttachToWindow(hmd,WndHandle,0,0) doesn't help either.
14 Replies
- datenwolf (Honored Guest)
A stack backtrace would really help, so that we can pin down at which level down the rabbit hole that is ovrHmd_ConfigureRendering it tries to dereference an invalid pointer.
BTW, what does the code for creating the window look like? If you want to save yourself the trouble of implementing a proper two-step OpenGL context creation system (which you have to do to obtain advanced pixel formats and context attributes), have a look at my simplistic wglarb library: https://github.com/datenwolf/wglarb
- Vic3Dexe (Honored Guest)
"datenwolf" wrote:
A stack backtrace would really help
How can I get one with MinGW? I've never needed it before :)
"datenwolf" wrote:
what does the code for creating the window look like?
I'm creating a simple OpenGL 1.1 context.
dc = GetDC(WndHandle);
PIXELFORMATDESCRIPTOR pfd;
memset(&pfd,0,sizeof(pfd));
pfd.nSize = sizeof(pfd);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = GetDeviceCaps(dc,BITSPIXEL)*GetDeviceCaps(dc,PLANES);
pfd.cDepthBits = 24;
pfd.iLayerType = PFD_MAIN_PLANE;
int pf = ChoosePixelFormat(dc,&pfd);
if (!SetPixelFormat(dc,pf,&pfd)) RAISE("Can't set pixelformat %d",pf);
glrc = wglCreateContext(dc);
if (!glrc) RAISE("wglCreateContext fail");
wglMakeCurrent(dc,glrc);
"datenwolf" wrote:
which you have to do to obtain advanced pixel formats and context attributes
Where was it said that only an advanced (core 3.0/4.0) context will work?
I didn't find anything about that in the developer guide. BTW, nothing about 'D3D11 only' either.
If the problem is the context version, then this is stupid: there is no word about such limitations in the guide, and no error was returned, just a crash.
BTW, from the guide:
There are no specific computer hardware requirements for the Oculus SDK
so I could use, for example, a GF2 MX400, with no GL3/4 at all.
- datenwolf (Honored Guest)
I don't like this line in your code:
pfd.cColorBits = GetDeviceCaps(dc,BITSPIXEL)*GetDeviceCaps(dc,PLANES);
The PIXELFORMATDESCRIPTOR is used as a specification of the minimum requirements you have when querying for a pixel format index to set.
By querying with the values the device is currently set to, you may limit the choices ChoosePixelFormat has available. I'm thinking of the case where your system gives you a value of 32 there.
Rendering to a 16-bit-per-pixel framebuffer opens a can of worms; thanks to the variety of texture formats modern GPUs can handle (also as FBO color attachments), the performance hit for anything that's not 8 bits per color channel is not nearly as bad as it used to be.
My recommendation is: just hardcode 24 as the minimum color depth you request.
Another thing you should look at is the WNDCLASS or WNDCLASSEX of your OpenGL window. Make sure it has the CS_OWNDC bit set.
EDIT: You can use the Rift with legacy OpenGL just fine as well. Just be aware that the SDK gives you a combination of projection and view matrices, which you have to load into the fixed-function matrix stacks instead of preparing them yourself.
EDIT2: Stack backtrace with MinGW: you have the GNU Debugger, gdb. Compile your program's compilation units with debug symbols added (the -g command line switch), then run it in the debugger:
> gcc -o program -g source.c ...
> gdb ./program
(gdb) run
.
.
.
~crash~
(gdb) backtrace
- Vic3Dexe (Honored Guest)
Changing to 24 bits...
pfd.cColorBits = 24;//GetDeviceCaps(dc,BITSPIXEL)*GetDeviceCaps(dc,PLANES);
Setting CS_OWNDC...
#define W_STYLE WS_OVERLAPPEDWINDOW+WS_CLIPCHILDREN+WS_CLIPSIBLINGS
WNDCLASS WC;
memset(&WC,0,sizeof(WC));
WC.hCursor = LoadCursor(0,IDC_ARROW);
WC.style = CS_OWNDC;
WC.lpfnWndProc = &WndProc;
WC.hInstance = hInstance;
WC.hbrBackground = GetSysColorBrush(0);
WC.lpszClassName = CLASS_NAME;
RegisterClass(&WC);
WndHandle = CreateWindowEx(0,CLASS_NAME,"test",W_STYLE,10,10,100,100,0,0,WC.hInstance,0);
And it still crashes...
(gdb) run
[New Thread 6888.0x2d38]
[New Thread 6888.0x3798]
[New Thread 6888.0x2bf4]
[New Thread 6888.0x2870]
Program received signal SIGSEGV, Segmentation fault.
0x00000000 in ?? ()
(gdb) backtrace
#0 0x00000000 in ?? ()
#1 0x00413b07 in OVR::CAPI::HMDState::ConfigureRendering(ovrEyeRenderDesc_*, ovrFovPort_ const*, ovrRenderAPIConfig_ const*, unsigned int) ()
#2 0x004032d3 in ovrHmd_ConfigureRendering ()
#3 0x00401a9e in WinMain@16 (hInstance=0x400000, hPrevInstance=0x0,
lpCmdLine=0x553498 "", nCmdShow=10) at src/main.cpp:132
#4 0x0045ebed in main ()
(gdb)
- nuclear (Explorer)
Hello!
I noticed that the code you posted initially doesn't have any error checking for the return values of ovr_Initialize or ovrHmd_Create. Did you remove that just for brevity in the post, or is your code actually like that? Make sure you check for possible failures there.
Especially given this:
"Vic3Dexe" wrote:
Program received signal SIGSEGV, Segmentation fault.
0x00000000 in ?? ()
(gdb) backtrace
#0 0x00000000 in ?? ()
#1 0x00413b07 in OVR::CAPI::HMDState::ConfigureRendering(ovrEyeRenderDesc_*, ovrFovPort_ const*, ovrRenderAPIConfig_ const*, unsigned int) ()
... is definitely a call through a null function pointer, which stinks of some uninitialized global/static OpenGL extension entry point being called. So I guess glBindBuffer or something like that is null inside the SDK code (I haven't looked at the code of ConfigureRendering, but it's pretty much a dead giveaway).
"Vic3Dexe" wrote:
#define W_STYLE WS_OVERLAPPEDWINDOW+WS_CLIPCHILDREN+WS_CLIPSIBLINGS
BTW, this is a really strange way to combine bitmasks... It will work here, but ORing them together is the obvious and much simpler operation to use.
- datenwolf (Honored Guest)
On second thought: yes, the OpenGL extensions are definitely not loaded, and the OVR SDK wants to use them. Honestly, that's a huge mess-up on OVR's side, because libraries must be self-sufficient. If they depend on something external being initialized, something is off.
Don't use addition to combine bitfield flags. Some tokens may define bits that other tokens define as well, and arithmetic addition will mess things up completely. Imagine these two bitfields:
A = 0b00000011
B = 0b00010010
If you add them, the result is
A + B =
0b00000011
0b00010010
-------------
0b00010101
which is obviously not what you want: the overlapping bit produced a carry. Bitfields are combined using the bitwise OR operator '|':
A | B =
0b00000011
0b00010010
-------------
0b00010011
which gives a very different result.
The window style define you have there totally violates this.
Another issue is that you set a solid background brush for your window. Don't do this for OpenGL windows. It just causes massive flicker, because every time the system invalidates the window it first visibly clears it to the solid color, without double buffering, before the actual WM_PAINT message is sent, upon which you redraw. Just use a NULL pointer.
Here's a suggestion: in my wglarb library you can find a test program that creates an OpenGL window which, if Aero is enabled, is "see through". Feel free to harvest that code or to plug the SDK in there.
https://github.com/datenwolf/wglarb/blob/master/test/layered.c
Unfortunately my development computer's HDDs went faulty last weekend. So I decided to switch to an SSD, but that meant a major change in disk and partition layout, so a simple backup restore doesn't suffice. Windows is already reinstalled, but right now my Linux distribution is busy installing. I'll add an ovrtest program to the wglarb tests as soon as I find time for it.
- nuclear (Explorer)
"datenwolf" wrote:
On second thought, yes, definitely the OpenGL extensions are not loaded, and the OVR SDK wants to use them. Honestly, that's a huge messup on OVRs side, because libraries must be self sufficient. If they depend on something externally being initialized something is off.
I don't think they depend on something external being initialized. We don't have access to their OpenGL extension entry point function pointers anyway. ovr_Initialize initializes the internal function pointers used by the Oculus SDK. That's why I've drawn the original poster's attention to the fact that he's not checking the result of ovr_Initialize. Maybe it failed for whatever reason, and the function pointers were left null.
- Vic3Dexe (Honored Guest)
"nuclear" wrote:
I noticed that the code you posted initially doesn't have any error checking for the return values of ovr_Initialize or ovrHmd_Create. Did you remove that just for brevity in the post, or is your code actually like that? Make sure you check for possible failures there.
I do have a check for ovrHmd_Create, and it works fine (cout<<...resolution gives me 1920x1080).
"nuclear" wrote:
... is definitely a call through a null function pointer, which stinks of some uninitialized global/static OpenGL extension entry point being called.
It is definitely possible. GL init is done by a sequence of calls to wglGetProcAddress, but how do I know which functions ovrHmd_ConfigureRendering needs? For now I'm initializing these:
//variables declaring skipped
#define setproc(n,t) n = (t)wglGetProcAddress(#n)
void InitGLExt(void)
{
setproc(glIsShader,PFNGLISSHADERPROC);
setproc(glGetShaderiv,PFNGLGETSHADERIVPROC);
setproc(glGetProgramiv,PFNGLGETPROGRAMIVPROC);
setproc(glGetShaderInfoLog,PFNGLGETSHADERINFOLOGPROC);
setproc(glGetProgramInfoLog,PFNGLGETPROGRAMINFOLOGPROC);
setproc(glCreateShader,PFNGLCREATESHADERPROC);
setproc(glCreateProgram,PFNGLCREATEPROGRAMPROC);
setproc(glDeleteShader,PFNGLDELETESHADERPROC);
setproc(glDeleteProgram,PFNGLDELETEPROGRAMPROC);
setproc(glAttachShader,PFNGLATTACHSHADERPROC);
setproc(glDetachShader,PFNGLDETACHSHADERPROC);
setproc(glShaderSource,PFNGLSHADERSOURCEPROC);
setproc(glCompileShader,PFNGLCOMPILESHADERPROC);
setproc(glLinkProgram,PFNGLLINKPROGRAMPROC);
setproc(glValidateProgram,PFNGLVALIDATEPROGRAMPROC);
setproc(glUseProgram,PFNGLUSEPROGRAMPROC);
setproc(glBindAttribLocation,PFNGLBINDATTRIBLOCATIONPROC);
setproc(glGetUniformLocation,PFNGLGETUNIFORMLOCATIONPROC);
setproc(glProgramUniform1i,PFNGLPROGRAMUNIFORM1IPROC);
setproc(glUniform1iv,PFNGLUNIFORM1IVPROC);
setproc(glUniform2fv,PFNGLUNIFORM2FVPROC);
setproc(glUniform3fv,PFNGLUNIFORM3FVPROC);
setproc(glUniformMatrix4fv,PFNGLUNIFORMMATRIX4FVPROC);
setproc(glCompressedTexImage2D,PFNGLCOMPRESSEDTEXIMAGE2DPROC);
setproc(glActiveTexture,PFNGLACTIVETEXTUREPROC);
setproc(glGenBuffers,PFNGLGENBUFFERSPROC);
setproc(glDeleteBuffers,PFNGLDELETEBUFFERSPROC);
setproc(glBindBuffer,PFNGLBINDBUFFERPROC);
setproc(glBufferData,PFNGLBUFFERDATAPROC);
setproc(glEnableVertexAttribArray,PFNGLENABLEVERTEXATTRIBARRAYPROC);
setproc(glDisableVertexAttribArray,PFNGLDISABLEVERTEXATTRIBARRAYPROC);
setproc(glVertexAttribPointer, PFNGLVERTEXATTRIBPOINTERPROC);
setproc(glVertexAttrib1f,PFNGLVERTEXATTRIB1FPROC);
setproc(glVertexAttrib2f,PFNGLVERTEXATTRIB2FPROC);
setproc(glVertexAttrib4f,PFNGLVERTEXATTRIB4FPROC);
setproc(glVertexAttrib4fv,PFNGLVERTEXATTRIB4FVPROC);
setproc(glGenFramebuffers,PFNGLGENFRAMEBUFFERSPROC);
setproc(glBindFramebuffer,PFNGLBINDFRAMEBUFFERPROC);
setproc(glDeleteFramebuffers,PFNGLDELETEFRAMEBUFFERSPROC);
}"nuclear" wrote:
Btw this is a really strange way to combine bitmasks... It will work, but ORing them together is the obvious and much simpler operation to use."datenwolf" wrote:
Don't use addition to combine bitfield flags. Some tokens may define bits that other tokens define as well, and arithmetic addition will mess things up completely. Imagine these two bitfields:
Guys, I'm familiar with binary math, really. I know that in the general case A+B != A|B. It does work with '+' here, but thanks, I'll change these to ORs.
"datenwolf" wrote:
Another issue is that you set a solid background brush for your window. Don't do this for OpenGL windows. It just causes massive flicker, because every time the system invalidates the window it first visibly clears it to the solid color, without double buffering, before the actual WM_PAINT message is sent, upon which you redraw. Just use a NULL pointer.
I never noticed that, but thanks again.
About all this stuff (ORs, background brush): it's just a test app to get things working, so I'm not bothering with quality or performance issues.
"datenwolf" wrote:
Here's a suggestion: in my wglarb library you can find a test program that creates an OpenGL window which, if Aero is enabled, is "see through". Feel free to harvest that code or to plug the SDK in there.
Thanks, but I prefer to get it working with raw code first (as low-level as possible). I tried fasm :) but linking static libraries was harder than I expected...
- nuclear (Explorer)
"Vic3Dexe" wrote:
"nuclear" wrote:
I noticed that the code you posted initially doesn't have any error checking for the return values of ovr_Initialize or ovrHmd_Create. Did you remove that just for brevity in the post, or is your code actually like that? Make sure you check for possible failures there.
I do have a check for ovrHmd_Create, and it works fine (cout<<...resolution gives me 1920x1080).
And what about ovr_Initialize?
"Vic3Dexe" wrote:
"nuclear" wrote:
... is definitely a call through a null function pointer, which stinks of some uninitialized global/static OpenGL extension entry point being called.
It is definitely possible. GL init is done by a sequence of calls to wglGetProcAddress, but how do I know which functions ovrHmd_ConfigureRendering needs? For now I'm initializing these:
You misunderstood me. It's not up to you to give the function pointers to libovr; it does that itself. I was just pointing out that if this libovr initialization failed (which you're not checking, so you have no idea whether it did) and left their internal OpenGL function pointers null, that might be what's causing the null pointer indirection and the crash.
- datenwolf (Honored Guest)
"nuclear" wrote:
That's why I've drawn the original poster's attention to the fact that he's not checking the result of ovr_Initialize. Maybe that has failed for whatever reason, and the function pointers are left null.
Quite possible, but then this is an implementation flaw, IMHO. If you take a look at my OpenGL helper libraries, most of them are initialization-less, and all of them check everywhere whether they were properly initialized, so that the program doesn't get crashed in case of misinitialization. In the case of the OVR libraries, I suggest that all function pointers be placed in static TLS and initialized to a no-op return stub (that also sets an error flag); it's what I do in my programs.