Forum Discussion
thewhiteambit (Adventurer) · 7 years ago
ovrLayerEyeFovDepth strange results when compositing with ovrLayerQuad
I am using a DirectX swapchain texture created with "ovrTextureBindFlags::ovrTextureBind_DX_DepthStencil" and "ovrTextureFormat::OVR_FORMAT_D32_FLOAT".

- When I copy from a regular w/z depth buffer created by the DX pipeline (using the projection matrix provided by the Oculus SDK, with none of the optional parameters), the depth is completely wrong, with usable depth only in a very small range.
- When I linearize the depth buffer to real view-space Z, it somehow works, but the depth is still off by an offset.
- When I linearize the depth buffer to a [0-1] range, it is as wrong as with no linearization.

Since the documentation says nothing about the expected format, I would appreciate any help.
11 Replies
- thewhiteambit (Adventurer)
Is the documentation still correct on this one?
"Every frame, all active layers are composited from back to front using pre-multiplied alpha blending. Layer 0 is the furthest layer, layer 1 is on top of it, and so on; there is no depth-buffer intersection testing of layers, even if a depth-buffer is supplied."
And if it is, why do I see depth buffer compositing then?
- thewhiteambit (Adventurer)
There is clearly some depth compositing visible here! If the documentation is correct, this is a miracle. I am not far from dropping the compositor and rendering the layers myself, since the API and documentation are that bad. If even the compositing is wrong, how can I expect Asynchronous Spacewarp to do the right thing with the depth buffer later? And what type of depth buffer is Asynchronous Spacewarp expecting: one that drops straight out of the GPU pipeline, or a transformed linear one usable for compositing? There is nothing in the documentation about this. Do you really expect everybody to be happy with Oculus VR spending money on yet another new home-screen environment instead?
- thewhiteambit (Adventurer)
Forget it. It's been three weeks, and I have now implemented my own layer system as a workaround for your broken API. Can't wait forever...
- DeanOfTheDriver (Protege)
thewhiteambit said:
"DX-pipeline with projection matrix provided by the Oculus"
You should provide the projection matrix that represents the non-linear z used in your depth buffer. Have you tried this?
- thewhiteambit (Adventurer)
Of course, that would be the obvious thing to do. This is also what my quotation tells you when you don't crop it:
thewhiteambit said:
when I copy from a regular w/z depth buffer [i.e. non-linear z] created by the DX pipeline with the projection matrix provided by the Oculus SDK (with none of the optional parameters), the depth is completely wrong, with usable depth only in a very small area [depth range].
But this gives only nonsense, as I said. What you see above is what happens when you transform it to a linear space - one that Oculus VR might have used for some reason in their quad calculation in the compositor. This nearly fits, but has some minor oddity.
Have you used this (according to the documentation, non-existent) feature successfully, or were you just guessing?

- thewhiteambit (Adventurer)
It's been two months now, and not even Oculus seems to know about their own depth buffer handling? Please find someone who knows, and spend some money on fixing your documentation instead of presenting us yet another new home-screen environment!
- thewhiteambit (Adventurer)
...three months and counting. Great support, Oculus VR! Is there really no one in your whole multi-billion-dollar company capable of answering a simple question about the depth buffer format used? No one who has even tried this, written consistent documentation about it, or provided a sample?
This is ridiculous. Please give us yet another new Oculus home-screen environment instead of providing documentation - we all desperately need that!

- thewhiteambit (Adventurer)
@imperativity @DeanOfTheDrivers
It really comes down to a few simple questions:
1) Is the depth buffer in the OVR-API already implemented?
2) If yes, does it work with the OVR compositor layers?
3) Are there samples using depth buffers with compositor layers in the OVR API?
As nice as it is that you support a wide audience of Unity and Unreal developers, you should really not neglect the documentation of the OVR API. This should not be a forum question or require a bug report. I will try to use the bug tool to get in contact. But I don't even know whether this is a bug, because the behavior is more or less undocumented and contradictory.

- eah (Protege)
@thewhiteambit Have you looked at the SDK OculusRoomTiny (DX11) sample? Around line 325 it sets up a depth layer. If you walk up to the table and show Dash, you'll see it's compositing correctly.
ovrTimewarpProjectionDesc posTimewarpProjectionDesc =
    ovrTimewarpProjectionDesc_FromProjection(proj, ovrProjection_None);

// Initialize our single full screen Fov layer.
ovrLayerEyeFovDepth ld = {};
ld.Header.Type = ovrLayerType_EyeFovDepth;
ld.Header.Flags = 0;
ld.ProjectionDesc = posTimewarpProjectionDesc;

for (int eye = 0; eye < 2; ++eye)
{
    ld.ColorTexture[eye] = pEyeRenderTexture[eye]->TextureChain;
    ld.DepthTexture[eye] = pEyeRenderTexture[eye]->DepthTextureChain;
    ld.Viewport[eye] = eyeRenderViewport[eye];
    ld.Fov[eye] = hmdDesc.DefaultEyeFov[eye];
    ld.RenderPose[eye] = EyeRenderPose[eye];
    ld.SensorSampleTime = sensorSampleTime;
}