Designated Photogrammetry App?
When you open a mixed reality app on the Quest 3, it has you scan your space so the game knows where to place things in the 'real world'. When I saw this happening for the first time I was mesmerized. The room I was in was quickly translated to untextured polygons, and the mesh only got more accurate the more I looked around things: for example, my fan went from a large cylinder to an extremely accurate render of my fan as I moved above and around it. I would love to see whether the Quest 3 could house a photogrammetry app, seeing as the depth sensors could produce seriously accurate models in real time. If it did exist, I personally wouldn't change a thing about the model rendering portion, but adding a texture-mapping second part would also be so cool. This is just an idea, as I have no idea how to code, but I'm not above starting if someone could point me in the right direction, because I have no idea where I'd start with making software for the Quest 3.

Render Texture used in cinematics persisting after scene change
When I run my game in a simulated environment through Oculus Link, everything works fine. However, when I build the app and run it on the Quest 2, the view from the previous cinematic persists for the first few frames. I have already tried this:

```csharp
RenderTexture rt = UnityEngine.RenderTexture.active;
UnityEngine.RenderTexture.active = myRenderTextureToClear;
GL.Clear(true, true, Color.clear);
myRenderTextureToClear.Release();
UnityEngine.RenderTexture.active = rt;
```

Yet the problem hasn't been solved. Are there any Android-specific things that I should know about?

UE4 to Oculus Go - normal maps not working
I've created a sprite sheet with diffuse and normal map textures. In UE4, in order to get Paper2D sprites with a normal map, you have to use a material called something like Default Lit Sprite Material. I copied it and set up my flipbook animation of my sprites. It looks very neat in UE4: in both the main viewport and the flipbook animation viewport I can see the normal map working on the sprite. In the flipbook viewport you can move a temporary light around by holding "L" to see the specular reflections of your normal map to verify.

When I launch this project to the Oculus Go, the model appears flat and only the diffuse texture appears to be rendering on the sprite. I've disabled all lights except one that is positioned off to one side of the sprite, so it should cast a strong specular reflection from the right. When I am in the Go looking at the sprite, all I can see is the diffuse map, with no specular accents describing the normal map. At this point I don't know what else to do, because there is no more documentation on this.

Color "bleeding" between the eyes on the image rendered by the compositor?
Hi, I use my own Ogre/OpenGL based engine to render for the Rift. Like the example in the SDK, I decided for simplicity to use a single shared texture to render both of my eye-camera views. I never put a lot of thought into this (because it's basically invisible from inside the Rift), but I want you to look at these images and tell me what you think.

This is just a (badly ;)) grass-textured plane with a "blue sky" clear color on the viewports. The first image is a copy of the mirror texture returned by the SDK; the second is the content of the texture sent to the SDK, with a 16:9 ratio (which doesn't fit well in the window, hence the bars on the left and right sides; it's just to visualize what's happening here).

On the VR distorted image (the first one), if you look at the center division between the eyes, you can clearly see a small blue line on the right eye coming from the sky of the left eye, and a slight green line on the left coming from the right. This started to bother me a little. I think I read about a year ago, on this forum or in the SDK documentation (I can't find it again), that it was recommended to leave some kind of gap between the two eyes on the texture, so I added a few (okay, 100) pixels to the width of the texture I render to, adjusted the content of the struct that describes the Layer_EyeFov used, and the result is what you would expect (same image configuration here).

The FoV used is the default one (not the maximum one), so when the headset is moved quickly you can clearly see the border of the mirrored image moving (I assume this is the timewarp reprojection kicking in). The render target has 8x MSAA on it (probably not relevant, but that's part of how these images were generated).

So, my fellow developers, did you observe this color bleeding effect, and does it bother you?
Also, I'm interested to know how you configure your render texture(s):

- Shared buffer for both eyes, no "gap" between them
- Shared buffer for both eyes, with a gap to act as a "bumper" between them (how large is it?)
- Different textures (is there a practical reason to use 2 textures instead of one? I understand that the developer guide uses a shared one "for the sake of simplicity" for the code snippet inside, but I don't see why you would actually use separate textures)

I'm a noob in computer graphics, so please, enlighten me :)

Monoscopic background with render texture?
I can't find information on how to achieve a monoscopic background with a RenderTexture applied to a quad. I was able to do it without trouble some time ago, then something went wrong. No matter what I try, I get weird results. Is this a known issue with the current Unity (5.3.4p5)/OVR Utils versions, or does someone have some advice?
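For reference, a minimal sketch of how this setup is commonly wired in Unity: render a single non-stereo camera into a RenderTexture and show that texture on a quad, so both eyes see the same image. The component and field names (`MonoscopicBackground`, `backgroundCamera`, `backgroundQuad`) are hypothetical, and I can't say whether this avoids whatever broke on that particular Unity/OVR Utils combination:

```csharp
using UnityEngine;

// Sketch: render a dedicated background camera into a RenderTexture,
// then display that texture on a quad in front of the VR rig. Because
// the quad shows the same texture to both eyes, the background reads
// as monoscopic.
public class MonoscopicBackground : MonoBehaviour
{
    public Camera backgroundCamera; // hypothetical: a plain camera, NOT part of the VR rig
    public Renderer backgroundQuad; // hypothetical: quad placed in front of the stereo cameras

    private RenderTexture rt;

    void Start()
    {
        // Arbitrary size with a 24-bit depth buffer.
        rt = new RenderTexture(2048, 1024, 24);
        backgroundCamera.targetTexture = rt;
        // An unlit material on the quad avoids per-eye lighting differences.
        backgroundQuad.material.mainTexture = rt;
    }

    void OnDestroy()
    {
        if (rt != null)
            rt.Release();
    }
}
```

If the quad is parented to the head anchor it will track the view consistently; if the weird results only appear in stereo, it may be worth checking that the background camera isn't being picked up and driven as a stereo camera by the VR integration.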