Forum Discussion
AbdulVR
11 years ago · Honored Guest
Distortion mesh gaps
Hello,
I was wondering what kind of voodoo I need to apply to the distortion mesh coordinates to get it to fill the entire visible area.
This is what I have so far:
http://i61.tinypic.com/350a104.png
The only special thing I am doing per eye is remapping the distortion mesh NDC coordinates to the full [-1, 1] range.
Abdul
19 Replies
- lamour42 (Expert Protege)
- AbdulVR (Honored Guest): Hey Lam,
Thank you for replying, I appreciate it :)
Can you walk me through the process you're using to render those meshes? How did you convert the NDC coordinates to screen space?
Thanks again,
Abdul
- AbdulVR (Honored Guest): Here's what I'm doing at the moment in the vertex shader, which doesn't seem to work:
//--------------------------------------------------------------------------------------------------
PSI_Norm VS_DistortionMeshGBuffer( in float2 ndc_pos : POSITION )
{
    PSI_Norm output = (PSI_Norm)0;

    // Fetch the per-eye constants
    float4 misc = g_GenericConsts[0];

    // Position in clip space: remap the mesh x coordinate for this eye
    output.m_Position.x = ndc_pos.x * misc.x + misc.y;
    output.m_Position.y = ndc_pos.y;
    output.m_Position.z = misc.z;
    output.m_Position.w = 1.0f;

    // Normal: the view's negated forward axis in world space
    output.m_Normal = g_VP_ViewToWorldMat[2].xyz * -1.0f;

    // Done
    return output;
}
Where misc = { 2.0f, +/-1.0f, depth, N/A}
Abdul
- cybereality (Grand Champion): Honestly, I would not mess with the distortion mesh. The distortion has been carefully tuned over many months to closely match the physical parameters of the headset and the user. If you change it, you will likely make it inaccurate and possibly cause discomfort for users. I would not recommend trying it.
- AbdulVR (Honored Guest): Hey cyber,
I am not trying to alter the mesh; I am actually using it to mask off the dead region and save some performance.
I am clearing my depth buffer to 1.0 - kOriginalDepthClearValue and then rendering the mesh with its depth set to kOriginalDepthClearValue.
This trick allows fast hardware rejection of pretty much anything in the dead zone, including fill-rate hogs such as blended visual effects.
Abdul
- lamour42 (Expert Protege): Hi,
Using ovrHmd_CreateDistortionMesh() already gives screen coordinates in the range (-1, -1) to (1, 1), so it is pretty much maxed out as-is. Then I just use the proposed vertex and pixel shaders from the SDK Dev Guide (only reformatted to use structs instead of plain argument lists).
I didn't really get what you were trying to achieve with your depth-buffer trick. The depth buffer should be disabled completely for distortion rendering; enabling it would explain the distortion mesh not being the size you expect, depending on the z value you used in the shaders.
IMO nothing is ever drawn to the dead zone anyway, so there is no performance to gain from trying to optimize dead-zone rendering: first you render without distortion to a rectangular area; then, using the distortion mesh and shaders, you map that rectangular area (all of it) to the visible area. The invisible black areas are simply never drawn.
Lam
- AbdulVR (Honored Guest): Hey Lam,
Thank you for pointing me in the right direction, much appreciated.
There was a recent Valve paper discussing the dead zone when rendering to a VR headset: basically, not all of your pre-distorted buffer is visible once distortion is applied and the final content is presented.
My trick with the depth buffer prevents anything in the invisible region from rendering and consuming GPU cycles.
It might be that I am over-culling at the moment (hence the gaps), but on DK2 I am getting about 5% performance back, and somewhere in that region with CB.
Abdul
- AbdulVR (Honored Guest)
- cybereality (Grand Champion): Internally, we are still running tests on this technique, but initial experiments did not yield much performance gain.
- I've been planning to try the same technique, but with the stencil buffer instead of the depth buffer.
At 1080p for the pre-distortion buffer it might not be much of a saving, but at the higher recommended resolutions the saving would increase.
The shape used as the blocking volume isn't based on the positions of the distortion mesh. Instead you'd use the UV coordinates of the distortion mesh (probably the blue channel, since IIRC it stretches furthest due to chromatic aberration) as the positions of the blocking mesh. That should give you the region of the pre-distortion eye buffer that is actually visible post-distortion.