I need help because foveated rendering on the PC has ruined the VR experience. I have a good PC; I don't need foveated rendering. With the new update everything looks horrible. I've been playing VR for years, and foveated rendering is giving me a headache.
Quest Avatars appear only skin colored

In Oculus Integration 1.39, we noticed an issue with Avatars on Quest. The entire Avatar renders as skin color with some strange artifacts. It appears as if the UV maps on the Avatar textures have been scrambled. I can confirm the Oculus Avatar ID is being assigned properly and that Oculus is returning the proper Avatar. However, the Avatars are pretty nightmarish. If anyone has any tips for fixing this issue, that would be great. Thank you!
Lens distortion model for DirectX raytracing based rendering?

I'm interested in experimenting with VR rendering using DirectX Raytracing, where the Rift lens distortion would be corrected directly in the ray-generation shader, rather than rendering an undistorted image and relying on the Oculus runtime to warp the texture in a post-process. Looking at the current SDK, I don't see any way to have a layer with no distortion correction applied by the runtime, or any way to get the lens distortion model. I remember that in the distant past both of these existed in some form (there were undistorted layers and a way to get a distortion mesh), but I can't find them in the current SDK docs (and I may be mis-remembering what was provided before). Are there any plans to provide SDK access to allow this kind of pre-distorted, ray-traced rendering, now that real-time raytracing API and hardware support is on the horizon?
Updating an app from previous (0.8) Rift libraries to current (1.17) libraries causes screen flicker

I've updated from 0.8 to 1.17 and rewritten for the texture swap chain, etc., but my application now flickers constantly while running. Any ideas what might cause this? Hardware is a GeForce GTX 1060 with 6 GB, i7-6700 @ 2.6 GHz, 16 GB RAM; no reason there should be any drop in framerates or anything.
Bubbles in the GPU queue at the start because of ovr_submitFrame

https://developer3.oculus.com/documentation/pcsdk/latest/concepts/dg-hud/ says:

    Compositor Frame-rate
    The rate of the final composition; this is independent of the client application rendering rate. Because the compositor is always locked to V-Sync, this value will never exceed the native HMD refresh rate. But, if the compositor fails to finish new frames on time, it can drop below the native refresh rate.

The native engine I'm working on calls `ovr_submitFrame` from the render thread and is currently designed such that the next frame's work (sim + render prep + submit) starts only after `present` is called on the render thread. GPUView tells me that `libOVR*.dll` waits on a sync object, resulting in the CPU doing literally nothing during a 3-4 ms period. This happens every frame (app CPU frame time is generally 5-7 ms, app GPU frame time is 10-13 ms) and results in bubbles in the GPU queue at the start of the next frame. When the render thread does resume execution, the stack in GPUView shows a bunch of kernel/OS activity for ~1.5 ms before `libOVR*.dll` shows up in the stack, and only then does my engine get control flow back. I can't quite explain what that's about.

In the pic below, notice:
(a) the CPU render thread being idle because of a sync object in libOVR*.dll
(b) small bubbles in the GPU queue at the start of the next frame as a result

My questions are:
(1) Does `ovr_submitFrame` return only after the HMD's vblank? If so, why doesn't the SDK let the app query the time to vblank and throttle itself accordingly?
(2) When the sync object wait returns, it still takes ~1.5 ms (from GPUView) before I see a call stack with the engine's symbols. Why this extra time? (not shown in pic)
(3) More importantly, my engine should not be throttled by the render thread's inability to present; it should kick off sim work for the next frame (at least) a few ms before the render thread gets control flow back.
What is the recommended way (which API to use) to determine when to start work for the next frame?
OpenGL simple program

Hello guys! I am fairly new to OpenGL and Oculus VR. I am trying to build a cubemap for the Rift. I already have a program running in OpenGL in C++, but I don't understand how to render the cubemap into the Oculus Rift. I took a look at the tiny-room GL program provided with the SDK, but I don't entirely understand how the rendering happens. Has anyone done a tutorial or a project that focuses on practicing just the rendering? Can anyone help me understand how to display something in the Oculus?
How to implement correct aspect ratio camera passthrough in Oculus

Hi, I am trying to implement camera passthrough, and I need the passthrough to be as close to natural eyesight as possible. I know I need webcams with about the same FOV as the Oculus (~100 degrees), mounted as close to the positions of the human eyes as possible. What confuses me is how far away in the VR world the captured webcam image should be rendered. If I render it too far away, the passthrough is just a small rectangle; if too close, the entire FOV cannot be seen. So how do I determine the right distance to render at, to match the FOV of the webcam and the Oculus, so that the passthrough looks as natural as possible to the eye? Is calibration the only solution? Can anyone point to how such a calibration might work? Thank you.
Can I just blit a texture into the Oculus backbuffer?

I'm integrating Oculus into my own OpenGL engine and trying to figure out if this technique will work. Essentially, I render the left-eye and right-eye views into my own backbuffer, and then directly blit the whole thing into the Oculus backbuffer like this (where "backbuffer" is my input texture):

```cpp
// Get the render target for this frame
int currentIndex = 0;
ovr_GetTextureSwapChainCurrentIndex(mOVRSession, mOVRTextureSwapChain, &currentIndex);
GLuint currentRenderTargetTextureId;
ovr_GetTextureSwapChainBufferGL(mOVRSession, mOVRTextureSwapChain, currentIndex, &currentRenderTargetTextureId);

// Attach the current swap-chain texture to a framebuffer
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, mFramebufferId);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, currentRenderTargetTextureId, 0);

// Clear the target
glClearColor(0.5f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);

// Copy the input texture to the framebuffer
GLuint inputFramebufferId = backbuffer->GetFrameBufferHandle();
glBindFramebuffer(GL_READ_FRAMEBUFFER, inputFramebufferId);
glBlitFramebuffer(
    0, 0, mBackbufferSize.w, mBackbufferSize.h,
    0, 0, mBackbufferSize.w, mBackbufferSize.h,
    GL_COLOR_BUFFER_BIT, GL_NEAREST);

// Submit the texture to the HMD
ovr_CommitTextureSwapChain(mOVRSession, mOVRTextureSwapChain);
ovrLayerHeader* layers = &mMainLayer.Header;
ovr_SubmitFrame(mOVRSession, mCurrentFrameIndex, nullptr, &layers, 1);
```

When I look in the HMD, half of my backbuffer is present as a screen floating in space, ninety degrees to the left, in a black void. Which is kind of weird! So, two questions:

1. Is this possible to do at all, or is my only option getting the backbuffer from OVR and rendering my scene directly into it?
2. If this is possible, what could lead to this situation? Layer setup, perhaps? My layers are an exact copy of the eyeFov example from the SDK docs.
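For comparison when debugging layer setup: a hedged sketch of how an `ovrLayerEyeFov` might be filled when both eyes share one side-by-side texture. Field names follow the PC SDK's `ovrLayerEyeFov`, but this is a starting point to check against, not a known fix; a viewport or render pose that doesn't match what was actually rendered can show the image displaced, much like the symptom described.

```cpp
// Sketch only: side-by-side layout, both eyes in one swap-chain texture.
mMainLayer.Header.Type  = ovrLayerType_EyeFov;
mMainLayer.Header.Flags = ovrLayerFlag_TextureOriginAtBottomLeft; // GL origin
for (int eye = 0; eye < 2; ++eye) {
    mMainLayer.ColorTexture[eye]   = mOVRTextureSwapChain;        // shared chain
    mMainLayer.Viewport[eye].Pos.x = eye * (mBackbufferSize.w / 2);
    mMainLayer.Viewport[eye].Pos.y = 0;
    mMainLayer.Viewport[eye].Size.w = mBackbufferSize.w / 2;
    mMainLayer.Viewport[eye].Size.h = mBackbufferSize.h;
    mMainLayer.Fov[eye] = hmdDesc.DefaultEyeFov[eye];
    // RenderPose[eye] must be the pose actually used to render that eye;
    // a stale or identity pose can make the image appear rotated away.
}
```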
Missing documentation of high-quality / mipmapped rendering features

The current documentation is very vague when it comes to the high-quality layers feature. In particular, I cannot find any good documentation of:

ovrTextureSwapChainDesc.MipLevels: What exactly does this change, and why is it set to 1 in all the basic examples, which apparently do not use mipmapped textures? What value is recommended here for a high-quality layer? All I know is that I get a black screen when increasing this to 2.

ovrTextureSwapChainDesc.SampleCount: The only information I could find is "Currently only supported on depth textures", which confuses me, since I wasn't even aware that I could pass depth textures. As far as I can see, we always pass color textures anyway?

ovrTextureMisc_AllowGenerateMips: The misc flags of ovrTextureSwapChainDesc allow passing this flag. I haven't found anything on it yet. What does it do, and is it a requirement for a high-quality layer? In OpenGL, does it take care of calling glGenerateMipmap, for instance?

What else is required to get high-quality layers to work? For instance, in OpenGL, am I supposed to set glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR) myself, or does the SDK take care of this? Similarly, am I supposed to call glGenerateMipmap at the end of my rendering, or is that already part of submitting a frame? It would be great to get more information on this, because it currently requires a lot of guessing to get a high-quality layer to work.
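For concreteness, here is a hedged sketch of a swap-chain description for a mipmapped layer. The field names follow `ovrTextureSwapChainDesc` from the PC SDK, but the exact combination that avoids the black screen described above is precisely what the documentation leaves unclear, so treat this as a configuration to experiment from, not a verified recipe.

```cpp
// Sketch only: a mipmapped, sRGB color swap chain for a high-quality layer.
ovrTextureSwapChainDesc desc = {};
desc.Type        = ovrTexture_2D;
desc.Format      = OVR_FORMAT_R8G8B8A8_UNORM_SRGB;
desc.ArraySize   = 1;
desc.Width       = width;
desc.Height      = height;
desc.MipLevels   = mipLevelCount;   // > 1 to allocate a mip chain
desc.SampleCount = 1;               // MSAA sample count; 1 = no MSAA
desc.StaticImage = ovrFalse;
desc.MiscFlags   = ovrTextureMisc_AllowGenerateMips;
desc.BindFlags   = 0;

// If mips must be regenerated each frame, one plausible approach in GL is:
//   glBindTexture(GL_TEXTURE_2D, chainTexId);
//   glGenerateMipmap(GL_TEXTURE_2D);
// after rendering and before ovr_CommitTextureSwapChain / ovr_SubmitFrame,
// though whether the SDK does any of this itself is the open question here.
```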