Suggestion: Apply lens offset only in warp shader

Pyry
Honored Guest
The idea here is to apply the lens offset in the warp shader rather than modifying the projection matrix to have an off-center projection. Since the lens offset just shifts the image around, the warp shader can easily do the shifting and warping simultaneously by sampling differently from the input (unwarped) texture.

The main benefit is that then you can leave the camera projection matrices as standard projections, which can be really convenient if you're using an engine that doesn't easily allow direct manipulation of the camera projection matrices. As a secondary benefit, the documentation also gets simpler. (Edit: Also, carelessly modifying the projection matrix can break a lot of things, including view frustum culling and potentially some screen-space effects).
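
For comparison, the off-center projection that this avoids boils down to translating clip-space x by an amount proportional to w. A rough vertex-shader sketch of that equivalence (ProjectionCenterOffset and ModelViewProjection are hypothetical names here; the offset would be derived from the lens separation and screen width):

uniform mat4 ModelViewProjection;       // ordinary symmetric projection
uniform float ProjectionCenterOffset;   // per-eye horizontal lens offset in NDC units
attribute vec3 position;

void main()
{
    vec4 clipPos = ModelViewProjection * vec4(position, 1.0);
    // Multiplying the projection by a translation of (offset, 0, 0) is the same
    // as shifting clip-space x by offset * w:
    clipPos.x += ProjectionCenterOffset * clipPos.w;
    gl_Position = clipPos;
}

With the approach suggested here you skip all of that: vertices go through the plain symmetric projection, and the shift happens only when the warp shader samples the texture.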

The only real downside is that debugging gets a bit harder, because you can no longer view the unwarped images in the rift (they lack the critical lens offset). And theoretically you're wasting some screen area, because you're rendering too much towards your nose and not enough in your periphery, but I can't see that far out anyway, so that screen region is wasted for me no matter what.

Using the GLSL shader from the other thread, the modified shader looks like this (we only need to change LensCenter to ScreenCenter in one line):

uniform sampler2D albedoMap;   // unwarped eye texture
uniform vec2 LensCenter;       // still needed: the distortion is centered on the lens axis
uniform vec2 ScreenCenter;
uniform vec2 Scale;
uniform vec2 ScaleIn;
uniform vec4 HmdWarpParam;

varying vec2 texCoords;

void main()
{
    vec2 uv = texCoords.xy;
    vec2 theta = (uv - LensCenter.xy) * ScaleIn.xy;
    float rSq = theta.x * theta.x + theta.y * theta.y;
    vec2 rvector = theta * (HmdWarpParam.x + HmdWarpParam.y * rSq +
                            HmdWarpParam.z * rSq * rSq + HmdWarpParam.w * rSq * rSq * rSq);

    // Note: in the unmodified shader this is LensCenter.xy + Scale.xy * rvector.
    // Since we do not shift the projection center in the projection matrix, we use the
    // screen center instead.
    vec2 tc = ScreenCenter.xy + Scale.xy * rvector;

    //*** Adjust 0.5,0.5 according to how you render
    if (any(bvec2(clamp(tc, ScreenCenter.xy - vec2(0.5, 0.5), ScreenCenter.xy + vec2(0.5, 0.5)) - tc)))
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
    else
        gl_FragColor = texture2D(albedoMap, tc);
}
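
For completeness, the warp pass itself is just a full-screen quad per eye. One possible companion vertex shader, assuming the quad corners come in as positions in [-1,1] and that each eye has its own render texture (if both eyes share one texture, remap texCoords to that eye's half, which is also what the 0.5,0.5 adjustment above is about):

attribute vec2 position;   // full-screen quad corners in [-1, 1]
varying vec2 texCoords;

void main()
{
    texCoords = position * 0.5 + 0.5;          // map [-1, 1] to [0, 1] texture space
    gl_Position = vec4(position, 0.0, 1.0);    // the quad needs no projection at all
}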
6 REPLIES

geekmaster
Protege
Thanks a LOT for the GLSL shader code!

I have been rendering (since I created the "PTZ Tweening" thread at MTBS3D) to a larger framebuffer (for each eye) with spare pixels all the way around (including on the inner edges near the nose). I did this with the eventual plan that my GLSL shader would do the pan/tilt/zoom and warp at a high framerate, even if it only had a stale framebuffer to work with. And if the head tracker strays beyond the framebuffer, I would just substitute black pixels (like in my GMsphere program). Even if that breaks immersion in the GAME, you would still feel like you were in a domed theater projecting the game all around you (still a very immersive experience).
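
Combined with the warp shader above, the pan part of that could look roughly like this. This is only a sketch of the idea: PanOffset is a hypothetical uniform holding the head-tracker pan (in texture coordinates) accumulated since the stale frame was rendered, the oversized framebuffer is assumed to cover [0,1] in both axes, and tilt/zoom would need similar adjustments.

uniform sampler2D albedoMap;    // oversized framebuffer with spare border pixels
uniform vec2 LensCenter;
uniform vec2 ScreenCenter;
uniform vec2 Scale;
uniform vec2 ScaleIn;
uniform vec4 HmdWarpParam;
uniform vec2 PanOffset;         // hypothetical: head-tracker pan since the frame was rendered

varying vec2 texCoords;

void main()
{
    vec2 theta = (texCoords - LensCenter) * ScaleIn;
    float rSq = dot(theta, theta);
    vec2 rvector = theta * (HmdWarpParam.x + HmdWarpParam.y * rSq +
                            HmdWarpParam.z * rSq * rSq + HmdWarpParam.w * rSq * rSq * rSq);

    // Re-project onto the stale framebuffer, shifted by the latest head-tracker pan.
    vec2 tc = ScreenCenter + Scale * rvector + PanOffset;

    // Substitute black wherever the head has strayed beyond the spare pixels.
    if (any(lessThan(tc, vec2(0.0))) || any(greaterThan(tc, vec2(1.0))))
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
    else
        gl_FragColor = texture2D(albedoMap, tc);
}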

If a frame is not ready to send to the GPU at the next VSYNC, I can at least still send the shader fresh head-tracker coordinates (or it can fall back on motion prediction) so it keeps following the head trajectory until it gets valid data to render.

More details in the "PTZ Tweening" thread (and other diverse posts) here and there.

So, yes, as suggested in my past posts, I plan to do exactly as you suggest, except that I will precede it with "PTZ Tweening" to create otherwise-missing (but sufficient-for-immersion) frames.

xix
Honored Guest
Yup, I'm trying some similar sorts of things.

In fact I'm taking the script kiddie approach of just tweaking numbers and shit and then seeing what works 🙂 I'm not sure that coming at this from the "just copy reality" direction is the right way.

So I may end up with something simple, or I may not; I'm just toying around with things and taking a subjective view on whether it's better or worse.

Personally I'm not sure the warping is actually all that important: the stuff ahead is mostly flat, and the bits to the side that the warping fixes are less about detail and more about movement. So the distortion probably doesn't have to be that perfect, and in fact blurring stuff at the edges rather than distorting it may even be a better idea.

One thing I did notice is that if you do not black out the edges of your view, you can "see" much wider than you think you can; bright objects coming into view even on the furthest pixels are really noticeable. This implies that covering the entire view with valid-ish pixels makes for more immersion than focusing on a couple of holes ahead of you.

Think in terms of, say, gunfire coming from the far side of your vision: noticing the flash even though you can't see any detail and then turning to look, versus being hit and not having a clue where it came from because of your goggle vision.

Perhaps important enough that if I were to make an FPS, I would consider faking that sort of edge flashing all the time.

So in summary, I have no idea where I will end up, but I'm pretty certain that things can be improved upon, possibly in strange ways (or days).

geekmaster
Protege
"xix" wrote:
... In fact I'm taking the script kiddie approach of just tweaking numbers and shit and then seeing what works 🙂 I'm not sure that coming at this from the "just copy reality" direction is the right way. ... Personally I'm not sure the warping is actually all that important: the stuff ahead is mostly flat, and the bits to the side that the warping fixes are less about detail and more about movement. So the distortion probably doesn't have to be that perfect, and in fact blurring stuff at the edges rather than distorting it may even be a better idea...

I also like to practice your "hands on" approach to tweaking things in various ways and sometimes discovering strange and interesting side-effects (to be saved and used in later demos that need that sort of thing). Sometimes algorithms break in ways that create all new "eye-candy"...
😄 (Except in my case, I consider myself an "experimental investigator" rather than a "script kiddie".)

And I agree that pre-warp is not always needed. With no warp, just a straight blit from the desktop to the Rift DK screen, it looks very much like a curved screen wrapped around you, which is very immersive by itself, and even better when fisheye content is projected onto it.

And regarding "Real Life", my eyeglasses warp the edges of my FoV in different ways depending on which pair I am wearing. New glasses always have an adjustment period due to DIFFERENT radial distortion at the edges. So using the Rift DK with no pre-warp is a lot like putting on a different pair of eyeglasses. You get used to it, and to me the pixels look better sharp and clear on a curved virtual surface than they do warped onto a flat virtual surface (ESPECIALLY for text, such as from desktop windows, as viewed in GMsphere or Deskope).

You may need pre-warp when trying to accurately model reality, but there are plenty of apps that can use a curved projection surface and do not need pre-warp distortion. In fact, the less added distortion the better...

Pyry
Honored Guest
By the way, if you do it this way (lens offset and warp at the same time), you might be confused because the bulging rectangle is no longer asymmetric, but it will still look right in the rift.

offset_warp.jpg

mrotondo
Honored Guest
Pyry, thank you so much for this! I'm in exactly the situation you described, using a newish rendering framework that doesn't support fully customizable projection matrices. This is exactly what I needed, my world converges now!

Archy
Honored Guest
I also use this method, as I am not able to modify some core engine code. The main downside is the part of the rendered image that is thrown away. When doing the cull-space shift, the full image ends up on screen, and you can potentially use a smaller resolution -> improved performance.