
XNA Oculus Rift integration: wrapper & step-by-step

pmarques
Honored Guest
Hello,

After some days of messing around with, and failing at, every XNA/C# wrapper for the Oculus Rift I could find, I finally got my game working on the Rift by using a code sample available here: http://bettercoderwannabe.blogspot.ch/2013/08/oculus-ar-drone-and-kinect-completed.html.

As there is no documentation for the code sample and I had to change some glue code, I'm providing all the needed code and a step-by-step tutorial.

Disclaimer: I'm in no way an expert in XNA or 3D graphics, and the wrapper is not complete. I'll try to describe what I did as best I can; this will get you minimal Rift support, but it's in no way perfect.

If you want to understand what is happening, follow along with the official Oculus SDK documentation. I will indicate the section number.


  • Get the Oculus Rift project from here: https://mega.co.nz/#!s4hh2BaS!J29JtBYxEfTJj_0LUl7_ICyw1xFvDoc49KYisE3Tsxk

  • Import the project to your solution

  • On your game project, create a reference to the Oculus Rift project


  • Headtracking (Documentation: 5.3)

  • First, import the OculusRift.Oculus namespace:
    using OculusRift.Oculus;

    Since the methods we'll be using for headtracking are static, we can use the OculusClient class directly (defined in OculusClient.cs). In my game, a cameraRotation matrix is used as an argument to create the viewMatrix (using Matrix.CreateLookAt), so I just needed to do this:
    cameraRotation = Matrix.CreateFromQuaternion(OculusClient.GetPredictedOrientation());

    Congratulations, you have headtracking working!
    If the camera is fixed to another object (a body, for example), you just need to multiply the matrices:
    cameraRotation = Matrix.CreateFromQuaternion(OculusClient.GetPredictedOrientation()) * bodyWorldMatrix;

    With the body rotation controlled by a controller or keyboard and the camera rotation controlled by the rift, we now have independent look and movement.
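
    For reference, here's a rough sketch of how that fits into an Update method. The cameraPosition, bodyRotation and viewMatrix fields are placeholders from my own game, not part of the wrapper:

    // Sketch only: cameraPosition, bodyRotation and viewMatrix are my own
    // fields; adapt the names to your game.
    protected override void Update(GameTime gameTime)
    {
        // Rift orientation combined with the body's rotation (both pure rotations)
        Matrix cameraRotation =
            Matrix.CreateFromQuaternion(OculusClient.GetPredictedOrientation()) * bodyRotation;

        // Build a look-at view matrix from the rotated forward and up vectors
        Vector3 forward = Vector3.TransformNormal(Vector3.Forward, cameraRotation);
        Vector3 up = Vector3.TransformNormal(Vector3.Up, cameraRotation);
        viewMatrix = Matrix.CreateLookAt(cameraPosition, cameraPosition + forward, up);

        base.Update(gameTime);
    }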

  • Stereo Rendering and Distortion Correction (Documentation: 5.5.1 and 5.5.2)

  • This is where things get a little hairy. The documentation is scary and makes it look like you need a PhD to implement this, but it's not as hard as it seems.

    First, create the properties we'll need:
    OculusClient oculusClient;
    Effect oculusRiftDistortionShader;
    RenderTarget2D renderTargetLeft;
    RenderTarget2D renderTargetRight;
    Texture2D renderTextureLeft;
    Texture2D renderTextureRight;
    float scaleImageFactor;
    float fov_x;
    float fov_d;
    int IPD = 0;
    public static float aspectRatio;
    float yfov;
    float viewCenter;
    float eyeProjectionShift;
    float projectionCenterOffset;
    Matrix projCenter;
    Matrix projLeft;
    Matrix projRight;
    Matrix viewLeft;
    Matrix viewRight;
    float halfIPD;

    We create everything up front because we never want to allocate memory (and therefore create garbage) inside our game loop.

    In your initialization method, instantiate a new oculusClient object and define scaleImageFactor:

    oculusClient = new OculusClient();
    scaleImageFactor = 0.71f;

    We will be scaling the image so that we take full advantage of the Rift's FoV (see documentation, section 5.5.3).
    I couldn't get the formula from the documentation to work, so I used the 0.71 value found here on these forums; it seems to work well.
    Basically, the distortion shader pushes pixels towards the center of the screen, so we draw to a back buffer larger than the screen; that way, when pixels are pushed inwards, there are extra off-screen pixels to replace them. Don't forget to change your back buffer size accordingly:

    graphics.PreferredBackBufferWidth = (int)Math.Ceiling(Settings.resolutionX / scaleImageFactor);
    graphics.PreferredBackBufferHeight = (int)Math.Ceiling(Settings.resolutionY / scaleImageFactor);
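
    For example, assuming your game renders at 1280x800 (the DK1 panel resolution), the scaled back buffer works out to ceil(1280 / 0.71) = 1803 by ceil(800 / 0.71) = 1127 pixels.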


    Next we're going to load the shader, define the aspect ratio and field of view, create a camera using these values, prepare the renderTargets and set the prediction time to 30ms. You can get the shader here: https://mega.co.nz/#!E4YkjJ6K!MuIDuB78NwgHsGgeONikDAT_OLJQ0ZeLXbfGF1OAhzw

    oculusRiftDistortionShader = Content.Load<Effect>("Shaders/OculusRift");
    aspectRatio = (float)(OculusClient.GetScreenResolution().X * 0.5f / (float)(OculusClient.GetScreenResolution().Y));
    fov_d = OculusClient.GetEyeToScreenDistance();
    fov_x = OculusClient.GetScreenSize().Y * scaleImageFactor;
    yfov = 2.0f * (float)Math.Atan(fov_x / fov_d);
    camera = new Camera(yfov, aspectRatio);
    renderTargetLeft = new RenderTarget2D(GraphicsDevice, graphics.PreferredBackBufferWidth / 2, graphics.PreferredBackBufferHeight);
    renderTargetRight = new RenderTarget2D(GraphicsDevice, graphics.PreferredBackBufferWidth / 2, graphics.PreferredBackBufferHeight);
    OculusClient.SetSensorPredictionTime(0, 0.03f);
    UpdateResolutionAndRenderTargets();
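
    The Camera class above is not part of the wrapper, so here's a hypothetical minimal version showing only what this tutorial actually uses (the field names and near/far planes are my own choices):

    public class Camera
    {
        public Matrix projectionMatrix;
        public Matrix viewMatrix;

        public Camera(float fieldOfView, float aspectRatio)
        {
            // Near/far planes are arbitrary placeholders; pick values that fit your scene
            projectionMatrix = Matrix.CreatePerspectiveFieldOfView(fieldOfView, aspectRatio, 0.1f, 1000.0f);
            viewMatrix = Matrix.CreateLookAt(Vector3.Zero, Vector3.Forward, Vector3.Up);
        }
    }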

    The UpdateResolutionAndRenderTargets method just sets the size of the sprites:

    private int viewportWidth;
    private int viewportHeight;
    private Microsoft.Xna.Framework.Rectangle sideBySideLeftSpriteSize;
    private Microsoft.Xna.Framework.Rectangle sideBySideRightSpriteSize;
    private void UpdateResolutionAndRenderTargets()
    {
        if (viewportWidth != GraphicsDevice.Viewport.Width || viewportHeight != GraphicsDevice.Viewport.Height)
        {
            viewportWidth = GraphicsDevice.Viewport.Width;
            viewportHeight = GraphicsDevice.Viewport.Height;
            sideBySideLeftSpriteSize = new Microsoft.Xna.Framework.Rectangle(0, 0, viewportWidth / 2, viewportHeight);
            sideBySideRightSpriteSize = new Microsoft.Xna.Framework.Rectangle(viewportWidth / 2, 0, viewportWidth / 2, viewportHeight);
        }
    }

    Now let's make the magic happen! All of this goes in your Draw method.
    First, let's calculate the projection center offset:

    viewCenter = OculusClient.GetScreenSize().X * 0.25f;
    eyeProjectionShift = viewCenter - OculusClient.GetLensSeparationDistance() * 0.5f;
    projectionCenterOffset = 4.0f * eyeProjectionShift / OculusClient.GetScreenSize().X;
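
    To put some numbers on it: assuming the DK1 values of roughly a 0.14976 m wide screen and a 0.0635 m lens separation distance, viewCenter = 0.14976 * 0.25 = 0.03744, eyeProjectionShift = 0.03744 - 0.0635 * 0.5 = 0.00569, and projectionCenterOffset = 4 * 0.00569 / 0.14976 ≈ 0.152.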

    Then, we prepare the projection matrices:

    projCenter = camera.projectionMatrix;
    projLeft = Matrix.CreateTranslation(projectionCenterOffset, 0, 0) * projCenter;
    projRight = Matrix.CreateTranslation(-projectionCenterOffset, 0, 0) * projCenter;

    And finally, the view matrices:

    halfIPD = OculusClient.GetInterpupillaryDistance() * 0.5f;
    viewLeft = Matrix.CreateTranslation(halfIPD, 0, 0) * camera.viewMatrix;
    viewRight = Matrix.CreateTranslation(-halfIPD, 0, 0) * camera.viewMatrix;

    Attention: the IPD must be converted to world units. The Rift reports it in meters, so if one unit equals one meter in your game, you're fine. Otherwise you must convert the IPD to the equivalent value in world units, as in the sketch below.
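
    A minimal sketch of that conversion, assuming a hypothetical worldUnitsPerMeter scale factor for your game:

    // Hypothetical: worldUnitsPerMeter depends entirely on your game's scale.
    // If one world unit is 10 cm, for example, worldUnitsPerMeter would be 10.
    float worldUnitsPerMeter = 1.0f;
    halfIPD = OculusClient.GetInterpupillaryDistance() * 0.5f * worldUnitsPerMeter;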

    With this set, we are ready to draw. First, we draw to the left render target:

    GraphicsDevice.SetRenderTarget(renderTargetLeft);
    GraphicsDevice.Clear(Microsoft.Xna.Framework.Color.Black);
    View = viewLeft;
    Projection = projLeft;
    //Here we draw everything

    Then, the right render target:

    GraphicsDevice.SetRenderTarget(renderTargetRight);
    GraphicsDevice.Clear(Microsoft.Xna.Framework.Color.Black);
    View = viewRight;
    Projection = projRight;
    //Here we draw everything AGAIN

    And finally, we draw both render targets to the backbuffer, using the distortion shader:

    GraphicsDevice.SetRenderTarget(null);
    renderTextureLeft = (Texture2D)renderTargetLeft;
    renderTextureRight = (Texture2D)renderTargetRight;
    GraphicsDevice.Clear(Microsoft.Xna.Framework.Color.Black);

    //Set the four Distortion params of the oculus
    oculusRiftDistortionShader.Parameters["distK0"].SetValue(oculusClient.DistK0);
    oculusRiftDistortionShader.Parameters["distK1"].SetValue(oculusClient.DistK1);
    oculusRiftDistortionShader.Parameters["distK2"].SetValue(oculusClient.DistK2);
    oculusRiftDistortionShader.Parameters["distK3"].SetValue(oculusClient.DistK3);
    oculusRiftDistortionShader.Parameters["imageScaleFactor"].SetValue(scaleImageFactor);

    oculusRiftDistortionShader.Parameters["drawLeftLens"].SetValue(true);
    spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, null, null, null, oculusRiftDistortionShader);
    spriteBatch.Draw(renderTextureLeft, sideBySideLeftSpriteSize, Microsoft.Xna.Framework.Color.White);
    spriteBatch.End();

    oculusRiftDistortionShader.Parameters["drawLeftLens"].SetValue(false);
    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque, null, null, null, oculusRiftDistortionShader);
    spriteBatch.Draw(renderTextureRight, sideBySideRightSpriteSize, Microsoft.Xna.Framework.Color.White);
    spriteBatch.End();
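
    Putting it all together, the overall shape of the Draw method looks roughly like this. DrawScene is a placeholder for whatever draws your world with a given view and projection matrix, and the distortion parameters are set exactly as shown above:

    protected override void Draw(GameTime gameTime)
    {
        // Left eye pass
        GraphicsDevice.SetRenderTarget(renderTargetLeft);
        GraphicsDevice.Clear(Microsoft.Xna.Framework.Color.Black);
        DrawScene(viewLeft, projLeft);

        // Right eye pass
        GraphicsDevice.SetRenderTarget(renderTargetRight);
        GraphicsDevice.Clear(Microsoft.Xna.Framework.Color.Black);
        DrawScene(viewRight, projRight);

        // Combine both eyes on the back buffer through the distortion shader
        GraphicsDevice.SetRenderTarget(null);
        renderTextureLeft = (Texture2D)renderTargetLeft;
        renderTextureRight = (Texture2D)renderTargetRight;
        GraphicsDevice.Clear(Microsoft.Xna.Framework.Color.Black);

        // distK0..distK3 and imageScaleFactor are set here, as shown above

        oculusRiftDistortionShader.Parameters["drawLeftLens"].SetValue(true);
        spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, null, null, null, oculusRiftDistortionShader);
        spriteBatch.Draw(renderTextureLeft, sideBySideLeftSpriteSize, Microsoft.Xna.Framework.Color.White);
        spriteBatch.End();

        oculusRiftDistortionShader.Parameters["drawLeftLens"].SetValue(false);
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque, null, null, null, oculusRiftDistortionShader);
        spriteBatch.Draw(renderTextureRight, sideBySideRightSpriteSize, Microsoft.Xna.Framework.Color.White);
        spriteBatch.End();

        base.Draw(gameTime);
    }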


  • Drift Correction

  • If you implemented independent look and movement, your game will suffer from drift. The simplest way to get around this is re-orienting the sensor:

    bool resetOk;
    if (currentState.Buttons.X == ButtonState.Pressed && previousState.Buttons.X != ButtonState.Pressed)
    {
        resetOk = OculusClient.ResetSensorOrientation(0);
    }
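
    The currentState and previousState variables above are just standard XNA gamepad snapshots; a sketch of how they might be maintained in your Update method (requires using Microsoft.Xna.Framework.Input):

    GamePadState currentState;
    GamePadState previousState;

    protected override void Update(GameTime gameTime)
    {
        previousState = currentState;
        currentState = GamePad.GetState(PlayerIndex.One);

        // Re-center the Rift the moment X is pressed
        if (currentState.Buttons.X == ButtonState.Pressed && previousState.Buttons.X != ButtonState.Pressed)
        {
            OculusClient.ResetSensorOrientation(0);
        }

        base.Update(gameTime);
    }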

  • What else?

  • This is all I have implemented. A lot is still missing: I'm not checking whether the device is connected, some values are hardcoded, and there is no neck/head model and no chromatic aberration correction. There's plenty of functionality mentioned in the SDK documentation that I don't take advantage of, but hey, at least I can play my game with headtracking and distortion correction.


Good luck and please share your findings!

Anonymous
Not applicable
Since the dawn of time (okay, well the dawn of OR), I've been waiting for this. This is so awesome, thank you! ^_^
What I'll be making, I have no clue, but I will be making something 😄

Thanks
Keta

Ianthraxx
Honored Guest
For other users like Borstenhorst who are following this tutorial step-by-step and can't get any head-tracking information:

At the point where pmarques says "Congratulations, you have headtracking working!", you have to have instantiated an OculusClient somewhere in your code (i.e. "oculusClient = new OculusClient();").

Once you do this, you'll start getting head tracking information.

Huge props to pmarques for putting all this kick-assery together! Works like a dream!

Zarkow
Honored Guest
Would be very interested in seeing a 0.4 update for this. 🙂

jsmars
Honored Guest
I agree, I'm still waiting for my DK2 which should arrive any day now. Is anyone using XNA for VR development? I'd really like to see a working 0.4 XNA example.