
ovr_CreateTextureSwapChainGL() failed

SpaceEngineer
Explorer
So every time I attempt to create the swap chains, I get the error code ovrError_InvalidParameter == -1005, with the following message returned by ovr_GetLastErrorInfo():
BindFlags not supported for OpenGL applications.

I am using an Oculus DK2 with SDK 1.20 and the latest NVIDIA drivers. This issue started many months ago, probably around 1.12 or so.

The code is a direct copy-paste from the OculusRoomTiny (GL) sample (the Texture Swap Chain Initialization documentation shows the same code):



    ovrTextureSwapChainDesc desc = {};
    desc.Type = ovrTexture_2D;
    desc.ArraySize = 1;
    desc.Format = OVR_FORMAT_R8G8B8A8_UNORM_SRGB;
    desc.Width = w;
    desc.Height = h;
    desc.MipLevels = 1;
    desc.SampleCount = 1;
    desc.StaticImage = ovrFalse;
    desc.MiscFlags = ovrTextureMisc_None; // tried a direct assignment as well, but it still fails
    desc.BindFlags = ovrTextureBind_None; // tried a direct assignment as well, but it still fails

    ovrResult res = ovr_CreateTextureSwapChainGL(session, &desc, &swapTextures);
    // res is always ovrError_InvalidParameter

The initialization sequence is as follows (I am skipping error checks for clarity):

    ovr_Detect(0);

    ovrInitParams params = { ovrInit_Debug | ovrInit_RequestVersion, OVR_MINOR_VERSION, OculusLogCallback, 0, 0 };
    ovr_Initialize(&params);

    ovrGraphicsLuid luid;
    ovr_Create(&hmdSession, &luid);

    //------------------------------------
    // in the function called at the WM_CREATE message:
    <Create core GL context, make current>
    <init GLEW library>
    //------------------------------------

    hmdDesc = ovr_GetHmdDesc(hmdSession);

    ovrSizei ResLeft  = ovr_GetFovTextureSize(hmdSession, ovrEye_Left,  hmdDesc.DefaultEyeFov[0], 1.0f);
    ovrSizei ResRight = ovr_GetFovTextureSize(hmdSession, ovrEye_Right, hmdDesc.DefaultEyeFov[1], 1.0f);
    int w = max(ResLeft.w, ResRight.w); // 1184
    int h = max(ResLeft.h, ResRight.h); // 1472

    for (int eyeIdx = 0; eyeIdx < ovrEye_Count; eyeIdx++)
        if (!eyeBuffers[eyeIdx].Create(hmdSession, w, h)) // this function's code is provided above
        {
            ovr_GetLastErrorInfo(&err);
            log_error(err.ErrorString); // "BindFlags not supported for OpenGL applications."
        }
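
For reference, the error checks skipped here follow the usual ovr_ pattern (a sketch using the OVR_FAILURE macro from OVR_ErrorCode.h):

    ovrResult r = ovr_Initialize(&params);
    if (OVR_FAILURE(r)) // the same check applies to ovr_Create, etc.
    {
        ovrErrorInfo err;
        ovr_GetLastErrorInfo(&err);
        log_error(err.ErrorString);
    }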


So according to the error message, the SDK thinks that I assigned something to desc.BindFlags, while I did not. I tried directly assigning the ovrTextureBind_None value to it (which is just zero), but still no success. I traced all variable values in the debugger; they are the same as in the OculusRoomTiny (GL) sample. The only difference in my code that I can see is that I am using the GLEW library to handle OpenGL extensions, while the sample uses OVR::GLE, which it initializes immediately after wglMakeCurrent():

    OVR::GLEContext::SetCurrentContext(&GLEContext);
    GLEContext.Init();

Can this be the cause? I don't want to switch to Oculus' extension library: my project is not Oculus-exclusive, it supports Vive and non-VR modes as well. If this is a bug inside libovr, I ask the Oculus team to fix it!
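
For reference, a more defensive variant of the descriptor setup (a sketch; functionally equivalent to the desc = {} aggregate initialization above, but it makes the zeroing explicit, including any struct padding):

    ovrTextureSwapChainDesc desc;
    memset(&desc, 0, sizeof(desc)); // every field and all padding now guaranteed zero, including BindFlags
    desc.Type = ovrTexture_2D;
    // ... fill in the remaining fields exactly as in the snippet above ...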
14 REPLIES

BobasaurusRex
Honored Guest
I'm also having a problem with ovr_CreateTextureSwapChainGL. I'm not sure it's the same issue. I'm getting a crash deep in the call:

  0000000000000000() Unknown
  LibOVRRT64_1.dll!00007ffee270ae67() Unknown
  LibOVRRT64_1.dll!00007ffee2706715() Unknown
  LibOVRRT64_1.dll!00007ffee26f8d38() Unknown
  LibOVRRT64_1.dll!00007ffee2668162() Unknown
  LibOVRRT64_1.dll!00007ffee2676482() Unknown
> vrvVirtualReality.dll!makVrv::DtOculusProvider::initializeSwapChains() Line 398 C++  <--I call ovr_CreateTextureSwapChainGL here

This is with a new NVIDIA driver (398.82, though it happened before I updated my driver as well). I know my GL context has been created. I've tried padding out the descriptor memory to 64 bytes; the descriptor shows as being 40 bytes in size in my application.
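
One thing worth ruling out (a sketch, not a confirmed fix): a mismatch between the SDK headers the app was built against and the LibOVRRT runtime it actually loaded, which could make the DLL read the descriptor with a different layout:

    // version built against (from OVR_Version.h) vs. version of the loaded runtime
    printf("Built against SDK %d.%d\n", OVR_MAJOR_VERSION, OVR_MINOR_VERSION);
    printf("Runtime version: %s\n", ovr_GetVersionString());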

Here's how I'm setting up my descriptor: 

    ovrTextureSwapChainDesc desc; // note: not zero-initialized, but every field is assigned below
    desc.Type = ovrTexture_2D;
    desc.ArraySize = 1;
    desc.Format = OVR_FORMAT_R8G8B8A8_UNORM;
    desc.Width = myWidth;
    desc.Height = myHeight;
    desc.MipLevels = 1;
    desc.SampleCount = 1;
    desc.StaticImage = ovrFalse;
    desc.BindFlags = 0;
    desc.MiscFlags = 0;

    ovrResult result = ovr_CreateTextureSwapChainGL(mySession, &desc, &mySwapChains);

I'm at a loss as to what to try next.  Any suggestions?

Cheers, 

Bob

BobasaurusRex
Honored Guest
Just a bit more info: it's a null pointer dereference.

SpaceEngineer
Explorer
Update: I used two GPUs; the primary display was connected to a GTX 1060, while the Rift was connected to an RX 580. Oculus Home and some games worked well. Then I re-connected the Rift to the GTX 1060, and it stopped working (black screen). Physically removing the RX 580 did not help. Several reboots and a repair of the Oculus installation later, it works. SpaceEngine runs without any errors now.

So the problem is probably poor support for multi-GPU setups, and probably for OpenGL apps only, because other (DX?) games work. Could the Oculus team work on this? At least make the Oculus software more tolerant of hardware changes (reinstalling the software with a 4 GB download every time I want to swap GPUs is a "little" inconvenience).
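
For context on why OpenGL apps may be hit hardest: a D3D application can pin itself to the headset's GPU by matching the LUID returned by ovr_Create against the DXGI adapters, while OpenGL has no equivalent adapter-selection API. A sketch of the D3D-side matching (standard DXGI calls; needs <dxgi.h> and dxgi.lib):

    ovrGraphicsLuid luid;
    ovr_Create(&session, &luid);

    IDXGIFactory1* factory = nullptr;
    CreateDXGIFactory1(__uuidof(IDXGIFactory1), (void**)&factory);
    IDXGIAdapter1* adapter = nullptr;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; i++)
    {
        DXGI_ADAPTER_DESC1 ad;
        adapter->GetDesc1(&ad);
        if (memcmp(&ad.AdapterLuid, &luid, sizeof(LUID)) == 0)
            break; // this adapter is the GPU the Rift is attached to
        adapter->Release();
        adapter = nullptr;
    }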

Also, please remove the SSE 4.2 check from the installer! I have some old machines which can barely run VR but are still useful for tests (different hardware, OS). I installed Oculus there using an old installer, and it works, so SSE 4.2 is not really needed. Please at least add a command-line argument to bypass the check.

kcoul
Explorer
I was able to solve this problem, where my Alienware 15 needed to negotiate between a Graphics Amplifier / GTX 980Ti and integrated graphics, by doing the following:
1. NVIDIA Control Panel
2. Manage 3D Settings
3. Global Settings -> Preferred graphics processor: change from Auto-Select to High-performance NVIDIA processor.

Can't imagine how the system would negotiate between an AMD and an NVIDIA card!

SpaceEngineer
Explorer
kcoul, if you are coding a C++ project, add this to your stdafx.h:


extern "C" {
__declspec(dllexport) DWORD NvOptimusEnablement = 1; // disable NVidia Optimus
__declspec(dllexport) DWORD AmdPowerXpressRequestHighPerformance = 1; // disable ATI Hybrid Graphics
}


This tells the driver that your OpenGL app needs the discrete GPU, not the Intel HD one.
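
One way to confirm the exports took effect (standard GL queries, after the context is created; note these exports must be in the executable itself, not in a DLL):

    printf("GL_VENDOR:   %s\n", glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", glGetString(GL_RENDERER)); // should name the discrete GPU, not Intel HD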