Conflict combining Projection and Cubemap layers using Vulkan
In my current project, which is based on the native Mobile SDK and Vulkan, I'm attempting to combine a regular Projection layer with a Cubemap layer. As far as I can tell, everything is set up correctly - from the creation of the swap chains and filling them with images, up to the creation and submission of the layers themselves. Something is wrong, however - I can render either the Projection layer or the Cubemap layer, but not both. (Just to be clear: I *did* remember to set .LayerCount = 2 in the ovrSubmitFrameDescription2 instance for this test, and I *did* remember to specify the cubemap layer first and the projection layer second.) Since each renders fine separately but not together, I can only assume I misconfigured the swap chains somehow, so that one blocks the other in some way. Where should I begin checking in my code to find out what's going on? So far I've been unable to find example code that performs this exact task using Vulkan - where is a good place to find such examples?
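For reference, a stripped-down sketch of the kind of submission path I'm describing, with hypothetical placeholder names (cubeSwapChain, eyeSwapChain, tracking, ovr, etc.); swap-chain creation via vrapi_CreateTextureSwapChain3 with a Vulkan format is omitted:

```c
ovrLayerCube2 cubeLayer = vrapi_DefaultLayerCube2();
cubeLayer.HeadPose = tracking.HeadPose;
for (int eye = 0; eye < VRAPI_FRAME_LAYER_EYE_MAX; eye++) {
    /* One mono cubemap swap chain shared by both eyes. */
    cubeLayer.Textures[eye].ColorSwapChain = cubeSwapChain;
}

ovrLayerProjection2 projLayer = vrapi_DefaultLayerProjection2();
projLayer.HeadPose = tracking.HeadPose;
for (int eye = 0; eye < VRAPI_FRAME_LAYER_EYE_MAX; eye++) {
    projLayer.Textures[eye].ColorSwapChain = eyeSwapChain[eye];
    projLayer.Textures[eye].SwapChainIndex = swapChainIndex;
}
/* If the projection layer should reveal the cube layer through its
   unrendered (alpha == 0) pixels, it needs alpha blending here; an
   opaque projection layer covers the cube layer entirely. */
projLayer.Header.SrcBlend = VRAPI_FRAME_LAYER_BLEND_SRC_ALPHA;
projLayer.Header.DstBlend = VRAPI_FRAME_LAYER_BLEND_ONE_MINUS_SRC_ALPHA;

const ovrLayerHeader2 *layers[] = { &cubeLayer.Header, &projLayer.Header };

ovrSubmitFrameDescription2 frameDesc = { 0 };
frameDesc.SwapInterval = 1;
frameDesc.FrameIndex = frameIndex;
frameDesc.DisplayTime = predictedDisplayTime;
frameDesc.LayerCount = 2; /* cube first, projection second */
frameDesc.Layers = layers;

vrapi_SubmitFrame2(ovr, &frameDesc);
```

One thing this sketch highlights: with two layers, the projection layer's blend modes and alpha channel decide whether the cube layer underneath is ever visible at all.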
StereoLayer Cubemap stereo working on Quest, not on Rift

Hello there, I have successfully used cubemap stereo layers on the Oculus Quest and they work perfectly. But when I try to use cubemap stereo layers on the Rift, it shows a small rectangle with a portion of the cubemap, and everything around it is black. Oculus says that VR compositor layers do work on the Rift, but I haven't been able to make it work. Does anyone know what might be wrong with my project? I am using UE 4.22, but I have also tried 4.24 with no success. Thank you!
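For anyone looking into this, the setup boils down to something like the following minimal C++ sketch (assuming the 4.22-era UStereoLayerComponent API, before the shape moved into UStereoLayerShape objects; CubemapTexture is a hypothetical UTextureCube* assigned elsewhere, and this would live inside an actor's code):

```cpp
#include "Components/StereoLayerComponent.h"

UStereoLayerComponent* Layer = NewObject<UStereoLayerComponent>(this);
Layer->SetupAttachment(RootComponent);
Layer->RegisterComponent();
Layer->StereoLayerShape = SLSH_CubemapLayer; // default is SLSH_QuadLayer
Layer->StereoLayerType = SLT_WorldLocked;    // a cubemap ignores translation anyway
Layer->SetTexture(CubemapTexture);           // hypothetical UTextureCube*
```

A small rectangle showing a portion of the texture is exactly what the default SLSH_QuadLayer shape would look like, so one thing worth verifying is whether the cubemap shape actually survives into the PC (OVR) plugin path.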
Latest Oculus App version: Seams in OVROverlay cubemaps

For our scene transitions, we enable the OVROverlay cubemap before asynchronously loading. We received feedback that a user was seeing missing front and back faces and visible seams in the cubemap loading screen. We could not reproduce this until one dev machine updated the Oculus App (1.22.0.520720); after that, the issue was reproducible in the Unity editor and in both current and previous builds. Updating the Oculus App on another dev machine produced the same results. The cubemap fails to completely overlay the render buffer, showing seams at all of the cube edges 100% of the time, and occasionally missing front and back faces. Is there any documentation of this problem that I have not found? We are trying to narrow down which SDK and engine versions cause this issue. We are using the latest Unity 5.6. Updating to the latest Oculus Unity Utilities (1.22) results in a completely broken overlay (not visible at all).
VR Compositor Cubemap display issue

Hello, I'm having a strange display issue with a VR compositor cubemap in my game's loader scene. The same code worked before, so I'm wondering if something changed in the driver/runtime (using Unity 5.4.5p4 - same problem in 5.4.4p3 - and Utilities 1.15). The cubemap OVR overlay now appears only in the periphery of vision (almost as if there were a near clipping plane problem), so only clipped sections of the cubemap walls are visible, depending on headset orientation. Other compositor layers are not affected, removing them doesn't change anything, and changing the camera clipping plane distances doesn't affect it either. I have tested all settings for the cubemap texture, to no avail. Does this symptom ring a bell for anyone?
OpenGL simple program

Hello guys! I am fairly new to OpenGL and Oculus VR. I am trying to build a cubemap for the Rift. I already have a program running in OpenGL in C++, but I don't understand how to render the cubemap to the Oculus Rift. I took a look at the OculusRoomTiny GL sample provided with the SDK, but I don't entirely understand how the rendering happens. Has anyone done a tutorial or a project that focuses on practicing just the rendering? Can anyone help me understand how to display something on the Rift?
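From what I have pieced together so far, once the SDK's per-eye framebuffer is bound (the part OculusRoomTiny demonstrates), drawing the cubemap itself is an ordinary OpenGL skybox pass, run once per eye. A minimal sketch, where skyboxShader, skyboxVAO, cubemapTex, and the uniform locations are hypothetical names standing in for your own setup - is this the right idea?

```cpp
// Skybox pass: a unit cube drawn with the depth test relaxed so it sits
// behind everything else. The shader samples a samplerCube using the
// interpolated vertex position as the lookup direction.
glDepthFunc(GL_LEQUAL);                  // let the skybox pass at depth == 1.0
glUseProgram(skyboxShader);
// Strip translation from the eye's view matrix so the cubemap stays
// centered on the camera regardless of head position.
glUniformMatrix4fv(viewLoc, 1, GL_FALSE, viewRotationOnly);
glUniformMatrix4fv(projLoc, 1, GL_FALSE, projection);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_CUBE_MAP, cubemapTex);
glBindVertexArray(skyboxVAO);
glDrawArrays(GL_TRIANGLES, 0, 36);       // 12 triangles = the whole cube
glBindVertexArray(0);
glDepthFunc(GL_LESS);
```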
Highest quality 360 video in Rift and Gear VR created with Unity, what is possible, what do we know?

Honestly, I am confused when it comes to the state of 360 video (let's talk monoscopic for the sake of this discussion for now). We use the Rift CV1 at live events and the Gear VR for field use with our clients. So far we have been navigating the novelty VR wave very well (pharma and education), but as of late we are seeing pushback and concerns, especially in terms of video quality.

We develop all our solutions in Unity at the moment, utilizing AVPro to project a 4K equirectangular video file onto a sphere. In the past we have also successfully used Easy Movie Texture. Video plays back fine with both solutions, but the fact remains that I am zooming into a 4K video with less than 720p of quality remaining in front of my eyes. I am talking especially about text within videos, which always seems blurry and pixelated.

Here is the thing: I have seen better quality by now in players like Little Star's and in some of what I saw at OC3. So what is the next step? Cubemaps and adaptive dynamic streaming were both mentioned at OC3, but it is very hard to find anything about them here or on the web in general. I am currently planning a big 360 shoot for a project kicking off next month, and I am unsure whether we need more than 4K source footage to support these adaptive ideas.

There are two particular scenarios I am concerned about, and I am interested in how others are solving them:

A: There is a very interesting discussion on this forum where the author talks about a penguin scene and mentions that we could technically use video only for the moving segments of a scene. So let's say we treat the scene as a cubemap and use only one face as a 4K video segment, with the rest as still frames; this could of course also be done on a sphere with some transparency shaders. Has anybody done this successfully, and what camera would you use for such a setup, given that the Gear 360 cannot deliver more than 4K in total?

B: More importantly, we have moving scenes, e.g. driving in a convertible car. These scenes don't easily qualify for solution A, since we have movement everywhere. This is where the adaptive ideas mentioned above come to mind: show selective parts of the video at higher quality based on the user's head position, which would require many video versions and a good prediction algorithm (a simple sketch of the face-selection part follows below).

I want to point out that I want to make this discussion about quality, not about size, whether in terms of streaming or application size. What have you done, and what is available to "normal" people, or even via licensing models, at this moment? Just to throw it out there: are we stuck with Unity here? I hope not, but I want to keep this discussion open. I want to thank anyone who found the time to read through this long post and hope that we can get a discussion going here, even if you just shoot me a bunch of links. I pledge that I will keep this post updated with findings and approaches we implement to solve the quality issue, to push the quality of VR forward and strengthen our sales pitch.
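To make scenario B concrete: the selection part of the view-dependent idea is as simple as picking which cube face the user is currently looking at, so that face can get the high-bitrate tile. A purely illustrative C++ sketch (not from any SDK; in Unity you would feed it the head camera's forward vector):

```cpp
#include <cmath>

// Pick the cube face the (normalized) view direction points at. The chosen
// face would receive the high-bitrate video tile; the others could fall
// back to low-bitrate or still-frame versions.
enum Face { PosX, NegX, PosY, NegY, PosZ, NegZ };

Face DominantFace(float x, float y, float z) {
    float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    if (ax >= ay && ax >= az) return x > 0.0f ? PosX : NegX;
    if (ay >= az)             return y > 0.0f ? PosY : NegY;
    return z > 0.0f ? PosZ : NegZ;
}
```

A real system would presumably add hysteresis and prefetch neighboring faces so that quick head turns don't land on a low-quality tile - which is where the prediction algorithm I mentioned comes in.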
Cube Vs Sphere for Pano projection

I have always used a sphere with inverted normals to project my equirectangular panos and show them in a VR scene, with the camera at the center of the sphere. Another option is to use a cube with a skybox/cubemap material to project the same pano (converted from equirectangular to cube faces). Visually I see the same quality, but in terms of performance, with the cube I only have 12 tris instead of the large number of triangles that I get with a sphere. My question is: should I always use cubes and forget about spheres for showing panos? Thanks for your comments
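For context on why the two look the same to me: both are just direction-based lookups into the same pano data. The sphere relies on per-vertex UVs that encode the equirectangular mapping, as in this minimal C++ sketch (one common convention; axis and texture-origin conventions vary, so treat it as illustrative):

```cpp
#include <cmath>

// Map a normalized view direction (x, y, z) to equirectangular UVs.
// This is the mapping the sphere's UV layout bakes in per vertex.
struct UV { float u, v; };

UV EquirectUV(float x, float y, float z) {
    const float kPi = 3.14159265358979f;
    UV uv;
    uv.u = 0.5f + std::atan2(x, -z) / (2.0f * kPi); // longitude -> U
    uv.v = 0.5f - std::asin(y) / kPi;               // latitude  -> V
    return uv;
}
```

A cubemap, by contrast, is sampled by direction in hardware, so the 12-triangle cube gives up nothing visually; the sphere's extra triangles exist only to approximate that same mapping piecewise.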