help with oculus rift pc sdk
Hello, I have been trying for a week now to write a simple app that displays an image on the Oculus Rift (the same image for both eyes). OculusRoomTiny is the only sample using OpenGL, and it is far too complicated to adapt. I am reaching the point where I am starting to doubt that this library is still maintained. Are there better alternatives? The Oculus SDK guide is outdated, and almost every sample I can find on the net is at least three years old and will not compile against the newest SDK, while using an older SDK often results in blank screens or all kinds of errors. I am using C++ / OpenGL / Visual Studio 2017. Can anyone please help me with a link to an updated sample? Thank you!
ovr_CreateTextureSwapChainGL() failed

So every time I attempt to create the swap chains, I get the error code ovrError_InvalidParameter (-1005), with the following message returned by ovr_GetLastErrorInfo(): "BindFlags not supported for OpenGL applications." I am using an Oculus DK2 with SDK 1.20 and the latest NVIDIA drivers. This issue started many months ago, probably around SDK 1.12. The code is a direct copy-paste from the OculusRoomTiny (GL) sample and from the Texture Swap Chain Initialization documentation:

```cpp
ovrTextureSwapChainDesc desc = {};
desc.Type        = ovrTexture_2D;
desc.ArraySize   = 1;
desc.Format      = OVR_FORMAT_R8G8B8A8_UNORM_SRGB;
desc.Width       = w;
desc.Height      = h;
desc.MipLevels   = 1;
desc.SampleCount = 1;
desc.StaticImage = ovrFalse;
desc.MiscFlags   = ovrTextureMisc_None;   // tried a direct assignment, but it still fails
desc.BindFlags   = ovrTextureBind_None;   // tried a direct assignment, but it still fails

ovrResult res = ovr_CreateTextureSwapChainGL(session, &desc, &swapTextures);
// res is always ovrError_InvalidParameter
```

The initialization sequence is as follows (error checks skipped for clarity):

```cpp
ovr_Detect(0);
ovrInitParams params = { ovrInit_Debug | ovrInit_RequestVersion, OVR_MINOR_VERSION,
                         OculusLogCallback, 0, 0 };
ovr_Initialize(&params);

ovrGraphicsLuid luid;
ovr_Create(&hmdSession, &luid);

//------------------------------------
// in the function called at the WM_CREATE message:
//   <create core GL context, make current>
//   <init GLEW library>
//------------------------------------

hmdDesc = ovr_GetHmdDesc(hmdSession);
ovrSizei ResLeft  = ovr_GetFovTextureSize(hmdSession, ovrEye_Left,  hmdDesc.DefaultEyeFov[0], 1.0f);
ovrSizei ResRight = ovr_GetFovTextureSize(hmdSession, ovrEye_Right, hmdDesc.DefaultEyeFov[1], 1.0f);
int w = max(ResLeft.w, ResRight.w);   // 1184
int h = max(ResLeft.h, ResRight.h);   // 1472

for (int eyeIdx = 0; eyeIdx < ovrEye_Count; eyeIdx++)
    if (!eyeBuffers[eyeIdx].Create(hmdSession, w, h))   // this function's code is given above
    {
        ovr_GetLastErrorInfo(&err);
        log_error(err.ErrorString);   // "BindFlags not supported for OpenGL applications."
    }
```

So according to the error message, the SDK thinks I assigned something to desc.BindFlags, while I did not. I tried directly assigning the ovrTextureBind_None value to it (which is just zero), but still no success. I traced all variable values in the debugger; they are the same as in the OculusRoomTiny (GL) sample. The only difference I can see in my code is that I use the GLEW library to handle OpenGL extensions, while the sample uses OVR::GLE and initializes it immediately after wglMakeCurrent():

```cpp
OVR::GLEContext::SetCurrentContext(&GLEContext);
GLEContext.Init();
```

Can this be the cause? But I don't want to switch to Oculus' extension library: my project is not Oculus-exclusive and supports the Vive and non-VR modes as well. If this is a bug inside libovr, I ask the Oculus team to fix it!
Including Oculus OpenGL support in Cuda code

Developing on: Windows 10, 64-bit PC, Visual Studio Community 2015 with CUDA 9.1 and Oculus SDK v1.26.0. I have a body of code I wrote a few years back that successfully compiled and ran using an older Oculus SDK (I think v0.0.8) and the DK2 Rift. I've recently upgraded to Oculus SDK v1.26.0 to work with the newest Oculus Rift HMD. In the code, I have my own CUDA code that makes use of Oculus's OpenGL extension support header (CAPI_GLE.h). This header is built into, for example, the OculusRoomTiny sample. The problem I am encountering is that when I include this header in my CUDA code (which gets compiled by the nvcc compiler), I get the error:

```
C:\Oculus\OculusSDK_v1.26.0\Install\LibOVRKernel\Src\Kernel/OVR_Win32_IncludeWindows.h(136): error : expected a ")"
```

I've created a sample program that demonstrates the failure. It can be recreated as follows:

1. Take the OpenGL version of the OculusRoomTiny project in the 'Samples' that come with the Oculus SDK.
2. Add two new files, my_cuda_parts.h and my_cuda_parts.cu, composed as follows:

my_cuda_parts.h:

```cpp
#ifndef __MY_CUDA_PARTS_H__
#define __MY_CUDA_PARTS_H__

#define MY_DEBUG 1

#if MY_DEBUG
#include "GL/CAPI_GLE.h"
#else
#endif

#include "cuda.h"
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include <iostream>

int my_fcn(void);
__global__ void my_cuda_fcn(int a, int b, int *c);

#endif
```

my_cuda_parts.cu:

```cpp
#include "my_cuda_parts.h"

int my_fcn(void)
{
    int c;
    int *dev_c;
    cudaMalloc((void **)&dev_c, sizeof(int));
    my_cuda_fcn<<<1, 1>>>(2, 7, dev_c);
    cudaMemcpy(&c, dev_c, sizeof(int), cudaMemcpyDeviceToHost);
    std::printf("2 + 7 = %d\n", c);
    cudaFree(dev_c);
    return c;
}

__global__ void my_cuda_fcn(int a, int b, int *c)
{
    *c = a + b;
}
```

3. In the main.cpp file of the OculusRoomTiny project, add:

```cpp
#include "my_cuda_parts.h"
```

and put at the top of the WinMain function:

```cpp
int c = my_fcn();
std::printf("What is %d\n", c);
```

When I include the GL/CAPI_GLE.h header in my_cuda_parts.h, I get the error. If I take that line out, the code compiles and runs correctly. Since this particular header is already included in the Win32_GLAppUtil.h that is part of the OculusRoomTiny project, it appears that the error is triggered when the header is processed by the CUDA compiler (nvcc). Is there a way to successfully use this Oculus OpenGL header in my custom CUDA code?
SDK OpenGL samples

Hi, is the OculusRoomTiny OpenGL sample up to date for SDK 1.24? I'm updating my LWJGL3 example and want to make sure it reflects best practice, especially since I noticed a lot of changes with regard to handling the eye pose. Also, is that the only OpenGL sample available? And finally, are the samples tracked on GitHub or the like, so that I can see what has changed since 1.3? Thanks for your time.
ovr_CreateTextureSwapChainGL crash with Qt on Windows

Hello folks! I'm currently trying to write a Qt wrapper class for the Oculus Rift. So far everything has gone extremely smoothly; however, now I'm hitting a problem where the call to ovr_CreateTextureSwapChainGL() crashes. I did initialize OpenGL and I set the current context, but the call to that function keeps crashing the application. Is there anything I need to do other than initializing OpenGL and setting a valid context? Here is my very small wrapper class. I'm using Oculus SDK v1.19.
My OpenGL OpenVR application is very laggy

Hi all, I have an OpenGL application that is using OpenVR. (I know the Rift works best with the Oculus API, but as yet I do not have that option.) With a Vive everything is fine, but with a Rift the image in the HMD is very laggy and swimming. I have tried adding glFlush() and glFinish() calls all over the place, and I have confirmed the behaviour on both NVIDIA and AMD hardware. Has anybody else seen this? Any suggestions on how to solve it? Thanks.
Using a single float texture to pass to a GLSL shader (GL_R32F not available)

Hi, a hardware component delivers single-float textures to me, which I want to pass to the shader to modify my geometry. Unfortunately, I am not sure how I can do that with the Oculus SDK. I'm using Oculus SDK version 1.8.0 (with OpenGL 4.5), and I'm unsure how I need to change the following line, which works for a 4-uchar texture:

```cpp
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1280, 720, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
```

I would change the first GL_RGBA to GL_R32F (see the glTexImage2D documentation), the second to GL_RED since it's one channel, and GL_UNSIGNED_BYTE to GL_FLOAT:

```cpp
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 1280, 720, 0, GL_RED, GL_FLOAT, NULL);
```

https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml

But GL_R32F is not available for me in the Oculus SDK. I've looked for its definition in the SDK but couldn't find it, and using GL_RGBA instead just gives me "interpret float as 4 uchar" noise. Can you give me any pointers? This is basically a duplicate of my Stack Overflow question, but I assume there are more knowledgeable people here:

http://stackoverflow.com/questions/43087640/using-a-single-float-texture-in-opengl-with-the-oculus-sdk