Forum Discussion

🚨 This forum is archived and read-only. To submit a forum post, please visit our new Developer Forum. 🚨
Slin
Expert Protege
12 years ago

Calculating the distortion shader parameters

Today I wanted to add Rift support to our game engine http://rayne3d.com/ and I am having some trouble calculating the parameters needed for the distortion shader.

The needed parameters and what I found out:

vec2 LensCenter; //x+(w+projshift)*0.5, y+h*0.5
vec2 ScreenCenter; //left view: x+w*0.5, y+h*0.5
vec2 Scale; // w*0.5/factor, h*0.5*aspect/factor
vec2 ScaleIn; // 2/w, 2/h/aspect
vec4 HmdWarpParam; //HMDInfo.DistortionK
vec4 ChromAbParam; //HMDInfo.ChromaAbCorrection

x = 0 for left and 0.5 for right view (view's x coordinate in 0-1)
y = 0 (view's y coordinate in 0-1)
w = 0.5 (view's width in 0-1)
h = 1.0 (view's height in 0-1)
projshift = 0.15197 for left and -0.15197 for right view (1.0f-2.0f*HMDInfo.LensSeparationDistance/HMDInfo.HScreenSize)
aspect = 0.8 (w/h)
factor = ?

(A list like this would have made this A LOT easier...)
This is what I got from the docs and samples, and it seems to work for me, but if anyone could confirm whether this is correct, that would be awesome.

My problem now is how to calculate the factor. Oculus recommends scaling to fit the width, but if I try it as shown below, the image gets too big.

float scaleradius = (-1.0 - LensCenter.x);
float rSq = scaleradius * scaleradius;
float scalefactor = HmdWarpParam.x + HmdWarpParam.y * rSq + HmdWarpParam.z * rSq * rSq + HmdWarpParam.w * rSq * rSq * rSq;

as you can see here:
Bildschirmfoto 2013-08-07 um 20.10.13 (2).png

The Util library does it differently: it multiplies the scalefactor by the radius, just to divide by it a bit later. How does that make any sense?
What am I doing wrong and how do I fix it?

For better quality I should just scale my render target by that same 1/factor, right?
Can the FOV also just be scaled by that value?

Is there, btw, anything special about gamma correction (I just apply it after the distortion and color shifting, since it only affects each pixel's color channels individually)? So far it seems to look fine, just wondering...

Thank you!

4 Replies

  • The scaling stuff is pretty hard to understand at first, partly because it works in a non-intuitive direction. Typically, when creating a rendered window, you set up your projection parameters to get a desired field of view based on the size and aspect ratio of the window.

    The SDK code assumes you want your rendered image to consume as much of the potential view as possible, so it starts from that basis and works backwards to determine the scaling factor and the FOV values.


    The Util library does it differently: it multiplies the scalefactor by the radius, just to divide by it a bit later. How does that make any sense?


    I'm not sure if you're referring to this line of code or something in the shader, or something else entirely. The code I've linked to here is actually finding the ratio between the distorted radius and the undistorted radius. If you scale something that goes from -1 to 1 by that value, and then apply the distortion to it, then you can be sure that the distorted value will also go from -1 to 1.

    I'm not certain what's wrong with your shader code. For one thing, you're only including the application of the scaling factor, not the rest where you un-scale and re-apply the lens center transformation, or where you use the scalefactor to look up the color in the source texture. I can tell you that you need to account for the aspect ratio when computing the source texture Y coordinate as opposed to the X (or your scalefactor has to be a vec2 with the aspect ratio built in).

    I can also tell you that I fully expect the shader based distortion code to be dead soon. The scaling and distortion are all based on physical properties of the HMD and the lenses, not on any user setting. As others on the forums have discovered, that means it's far more efficient to precompute the distortion and store it in a texture. That way your fragment shader for applying the Rift distortion becomes a direct texture lookup which is much faster than doing all this math on a per-pixel basis, even if it is being done on the GPU.

    I'll probably be covering this in the near future on my blog, linked in my sig file.
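As a rough illustration of the precompute-into-a-texture idea, here is a CPU-side sketch that bakes the distorted source UV for every output pixel of one eye; the function name, resolution handling, and the notion of uploading the result as an RG float texture are all assumptions for illustration, not SDK API:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// For every output pixel, precompute the distorted source UV once.
// Uploaded as an RG float texture, the distortion pass then becomes:
//   uv = texture(lookup, vertTexcoord).rg; color = texture(scene, uv);
std::vector<Vec2> BuildDistortionLUT(int width, int height,
                                     Vec2 lensCenter, Vec2 scale, Vec2 scaleIn,
                                     const float k[4])
{
    std::vector<Vec2> lut(width * height);
    for(int y = 0; y < height; y++)
    {
        for(int x = 0; x < width; x++)
        {
            // Pixel center in normalized texture coordinates
            Vec2 tex = {(x + 0.5f) / width, (y + 0.5f) / height};

            // Same math as the fragment shader, done once per pixel
            float tx = (tex.x - lensCenter.x) * scaleIn.x;
            float ty = (tex.y - lensCenter.y) * scaleIn.y;
            float rSq = tx * tx + ty * ty;
            float warp = k[0] + k[1] * rSq + k[2] * rSq * rSq
                       + k[3] * rSq * rSq * rSq;

            lut[y * width + x] = {lensCenter.x + scale.x * tx * warp,
                                  lensCenter.y + scale.y * ty * warp};
        }
    }
    return lut;
}
```

The per-frame fragment shader then reduces to two texture fetches: one into the lookup table and one into the scene render target (chromatic aberration correction would need one lookup per color channel).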
    Slin
    Expert Protege
    Wow, thank you very much for this answer :)
    That line you linked is the one I was referring to and I messed up the distortion function in my mind, but actually did that part correctly. Still my result is wrong.
    Also you are right about the aspect, because somehow my image touches the top sooner than the left...

    I think that I do understand the shader and most calculations, but what I do not understand is the Scale and ScaleIn. So a good explanation for those might help me.

    And while I do see some performance improvement in storing the distortion in a texture, it does not seem like it would have a serious impact in a modern rendering pipeline with >20 lights, shadows, HDRR and more. It also has the disadvantage of less flexibility for the user, and it needs another texture for every hardware revision. So maybe as something the game generates whenever something changes, but that really does not seem worth the effort?

    Here is a little bit more of my code. I of course do not expect anyone to go through it in detail, but if you can give it a quick look and already see my mistakes, please tell me :)
    Calculating the variables (I don't like the code, but for now I just want it to work -.-):

    float eyedistance = 0.064;
    float screenwidth = 0.14976;
    float screenheight = 0.0936;
    float screendist = 0.041;
    float lensdist = 0.0635;
    RN::Vector4 hmdwarpparam(1.0f, 0.22f, 0.24f, 0.0f);
    RN::Vector4 chromabparam(0.996f, -0.004f, 1.014f, 0.0f);

    if(app->riftconnected)
    {
        eyedistance = app->riftinfo.InterpupillaryDistance;
        screenwidth = app->riftinfo.HScreenSize;
        screenheight = app->riftinfo.VScreenSize;
        screendist = app->riftinfo.EyeToScreenDistance;
        lensdist = app->riftinfo.LensSeparationDistance;

        hmdwarpparam.x = app->riftinfo.DistortionK[0];
        hmdwarpparam.y = app->riftinfo.DistortionK[1];
        hmdwarpparam.z = app->riftinfo.DistortionK[2];
        hmdwarpparam.w = app->riftinfo.DistortionK[3];

        chromabparam.x = app->riftinfo.ChromaAbCorrection[0];
        chromabparam.y = app->riftinfo.ChromaAbCorrection[1];
        chromabparam.z = app->riftinfo.ChromaAbCorrection[2];
        chromabparam.w = app->riftinfo.ChromaAbCorrection[3];
    }


    _projshift = 1.0f-2.0f*lensdist/screenwidth;
    _eyeshift = eyedistance*0.5f;

    RN::Vector2 left_lenscenter = RN::Vector2(0.25f+_projshift*0.5f, 0.5f);
    RN::Vector2 left_screencenter = RN::Vector2(0.25f, 0.5f);

    RN::Vector2 right_lenscenter = RN::Vector2(0.75f-_projshift*0.5f, 0.5f);
    RN::Vector2 right_screencenter = RN::Vector2(0.75f, 0.5f);

    float lensradius = fabsf(-1.0f-left_lenscenter.x);
    float lensradsq = lensradius*lensradius;
    _scalefac = hmdwarpparam.x+hmdwarpparam.y*lensradsq+hmdwarpparam.z*lensradsq*lensradsq+hmdwarpparam.w*lensradsq*lensradsq*lensradsq;

    _riftfov = 2.0f*atan(screenheight*_scalefac/(2.0f*screendist))*180.0f/RN::k::Pi;
    float aspect = screenwidth*0.5f/screenheight;

    RN::Vector2 scale = RN::Vector2(0.25f, 0.5f*aspect)/_scalefac;
    RN::Vector2 scalein = RN::Vector2(4.0f, 2.0f/aspect);


    The shader (I just copied it from the samples, fixed the version issues and variable names):

    #version 150
    precision highp float;

    uniform sampler2D targetmap0;

    in vec2 vertTexcoord;
    out vec4 fragColor0;

    //Rift
    uniform vec2 LensCenter;
    uniform vec2 ScreenCenter;
    uniform vec2 Scale;
    uniform vec2 ScaleIn;
    uniform vec4 HmdWarpParam;
    uniform vec4 ChromAbParam;

    void main()
    {
        vec2 theta = (vertTexcoord - LensCenter) * ScaleIn; // Scales to [-1, 1]
        float rSq = theta.x * theta.x + theta.y * theta.y;
        vec2 theta1 = theta * (HmdWarpParam.x + HmdWarpParam.y * rSq + HmdWarpParam.z * rSq * rSq + HmdWarpParam.w * rSq * rSq * rSq);

        // Detect whether the blue texture coordinates are out of range, since these will be scaled out the furthest.
        vec2 thetaBlue = theta1 * (ChromAbParam.z + ChromAbParam.w * rSq);
        vec2 tcBlue = LensCenter + Scale * thetaBlue;
        if(!all(equal(clamp(tcBlue, ScreenCenter - vec2(0.25, 0.5), ScreenCenter + vec2(0.25, 0.5)), tcBlue)))
        {
            fragColor0 = vec4(1.0);
            return;
        }

        // Now do the blue texture lookup.
        float blue = texture(targetmap0, tcBlue).b;

        // Do the green lookup (no scaling).
        vec2 tcGreen = LensCenter + Scale * theta1;
        vec4 center = texture(targetmap0, tcGreen);

        // Do the red scale and lookup.
        vec2 thetaRed = theta1 * (ChromAbParam.x + ChromAbParam.y * rSq);
        vec2 tcRed = LensCenter + Scale * thetaRed;
        float red = texture(targetmap0, tcRed).r;

        vec3 color0 = vec3(red, center.g, blue);

        fragColor0.rgb = color0;
        fragColor0.a = center.a;
    }


    This is my result with exactly the code above:
    Bildschirmfoto 2013-08-08 um 03.12.33.png
    Slin
    Expert Protege
    I kinda fixed it now, so here is my updated list:

    vec2 LensCenter; //x+(w+projshift*0.5)*0.5, y+h*0.5
    vec2 ScreenCenter; //left view: x+w*0.5, y+h*0.5
    vec2 Scale; // w*0.5/factor, h*0.5*aspect/factor
    vec2 ScaleIn; // 2/w, 2/h/aspect
    vec4 HmdWarpParam; //HMDInfo.DistortionK
    vec4 ChromAbParam; //HMDInfo.ChromaAbCorrection

    x = 0 for left and 0.5 for right view (view's x coordinate in 0-1)
    y = 0 (view's y coordinate in 0-1)
    w = 0.5 (view's width in 0-1)
    h = 1.0 (view's height in 0-1)
    projshift = 0.15197 for left and -0.15197 for right view (1.0f-2.0f*HMDInfo.LensSeparationDistance/HMDInfo.HScreenSize)
    aspect = 0.8 (w/h)


    float lensradius = -1.0f - _projshift;
    float lensradsq = lensradius*lensradius;
    float factor = hmdwarpparam.x + hmdwarpparam.y*lensradsq + hmdwarpparam.z*lensradsq*lensradsq + hmdwarpparam.w*lensradsq*lensradsq*lensradsq;


    My result:
    Bildschirmfoto 2013-08-08 um 15.32.40 (2).png

    The FOV seems to be a bit off, but other than that it looks and feels fine :)