Forum Discussion

obzen
Expert Protege
11 years ago

Optimising for raycasting with the Oculus

Heyup,

I want to open a discussion on modeling the physical characteristics of the Oculus device and the eyes for ray-casting / ray-tracing / ray-marching.

1) Would a one-to-one representation of the device configuration be desirable, or even feasible? e.g. modeling the lens characteristics, its position relative to the eye (taking into account the eye-relief mechanism and lens type), and the position and size of the screen...

2) Ignoring the technical limitations of eye tracking, would it be possible to model the actual eye itself? Pupil, cornea, iris... modeling ambient luminosity, focal distance, eye position and direction, even eye conditions: myopia, hyperopia, presbyopia.

3) What shortcuts and optimisations could be taken? I'm thinking a reverse barrel-distortion-type shader.

4) How best to represent the distortion to generate rays? I'm thinking a displacement-map / normal-map type of offset computation, so that the rays for each fragment can be computed quickly.

5) Problem: how would you take into account chromatic aberration?
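On point 4, one way to picture the displacement-map idea is a per-pixel lookup table of ray directions, baked once on the CPU and uploaded as a float texture for the fragment shader to sample. A minimal sketch, where the barrel-style polynomial and its coefficients are made-up placeholders rather than the real Rift lens model:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 normalized(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Hypothetical inverse distortion: maps a centred screen coordinate in
// [-1,1]^2 to a view-space ray direction. The polynomial coefficients
// are illustration placeholders, not real lens data.
static Vec3 rayForPixel(float sx, float sy)
{
    float r2 = sx * sx + sy * sy;
    float scale = 1.0f + 0.22f * r2 + 0.24f * r2 * r2;
    return normalized({ sx * scale, sy * scale, 1.0f });
}

// Bake one ray direction per pixel; the result would be uploaded as an
// RGB float texture and sampled per fragment.
std::vector<Vec3> buildRayTable(int width, int height)
{
    std::vector<Vec3> table(width * height);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            float sx = 2.0f * (x + 0.5f) / width - 1.0f;
            float sy = 2.0f * (y + 0.5f) / height - 1.0f;
            table[y * width + x] = rayForPixel(sx, sy);
        }
    return table;
}
```

Chromatic aberration (point 5) would then mean three such tables, or a three-channel offset texture, one per colour component.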



Why would this be useful?

1) Ray tracing is all about realism. There may be some benefit to modeling the real physical properties of an HMD and the eye accurately.

2) It removes the need for high-resolution frame buffers, and could even reduce the computation to only the pixels visible on the screen. Ray tracing scales with pixel count: the fewer rays required, the better the render time.

3) Relative simplification. All that is required is the physical properties of the device (screen, optics configuration, focusing mechanism) and the physical properties of the eye (position, biological characteristics).
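To put rough numbers on point 2: the ray budget is simply shaded-pixel count times rays per pixel, so culling pixels the lens never shows the eye cuts render time proportionally. A trivial helper to make that concrete (the DK2's 1920x1080 panel is real; the 80% visible fraction used below is a guessed illustration value, not a measurement):

```cpp
#include <cassert>

// Rays needed per frame: pixel count, scaled by the fraction of the panel
// actually visible through the optics, times rays per pixel.
long raysPerFrame(int width, int height, double visibleFraction, int raysPerPixel)
{
    return static_cast<long>(width * height * visibleFraction) * raysPerPixel;
}
```

At one primary ray per pixel, shading only ~80% of a 1920x1080 panel already saves over 400,000 rays per frame.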

9 Replies

  • owenwp
    Expert Protege
I would very much like to see a function added to the Oculus SDK that takes in a screen UV coordinate and returns a ray, taking into account both the position and orientation of the HMD, in the space of either the DK2 camera or the initialised DK1 orientation. Or, optionally, three rays for chromatic aberration.

I have been thinking about making a Haskell path tracer as a learning exercise, and I want to make a physically correct Rift implementation.
  • obzen
    Expert Protege
    That would be pretty cool. Ray direction, and ray position as well (where the pixel is, basically).

Not being a graphics expert, I wonder how that information could be passed to fragment shaders efficiently (which is what I'm interested in ATM with my Shadertoy and ray-marching experiments). I would assume a texture sampler of some kind.
  • This information is actually available in the ovrDistortionMesh structure. The fields aren't well-labelled (I will fix this), but "TexG" holds the real-world vector data you want. If you take its x and y components, then the vector (x,y,1).Normalize() is the vector you should shoot a ray out along to shade that pixel.

    Annoyingly, the vectors are different for the red, green and blue components because of chromatic aberration. Currently the best way to do this for raytracing is to trace the rays using the TexG values, write the RGB results into a texture, and then use a post-processing shader to apply the small chromatic offsets to the R and B channels. There are other threads about this here: viewtopic.php?f=33&t=965&start=140 and here: viewtopic.php?f=20&t=8958
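That recipe is short enough to sketch: treat TexG as a tangent-space (x, y) pair and normalise (x, y, 1) to get the eye-space ray for that pixel. The helper below is a stand-in, not SDK code, and the exact field names differ between SDK versions:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Build the eye-space ray for a distortion-mesh vertex from its TexG
// (tangent-angle) components: normalize(x, y, 1).
Vec3 rayFromTexG(float texGx, float texGy)
{
    float len = std::sqrt(texGx * texGx + texGy * texGy + 1.0f);
    return { texGx / len, texGy / len, 1.0f / len };
}
```

For chromatic aberration, trace using the G values only and nudge the R and B channels in a post-process, as described above.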

would it be possible to model the actual eye itself.

    You don't really need to do this, because you already have a perfect simulation of the user's eye - it's the user's eye! Unless you want to model somebody else's eye for some reason? All you have to do is deliver the same input light as the real world. Which is good - it's a much simpler problem than simulating an eye.

    1) Ray tracing is all about realism.

    All rendering is a hack. Raytracing is just a hack along a different axis to rasterising.
  • obzen
    Expert Protege
    I'll take a look at all this :)


    You don't really need to do this, because you already have a perfect simulation of the user's eye - it's the user's eye! Unless you want to model somebody else's eye for some reason? All you have to do is deliver the same input light as the real world. Which is good - it's a much simpler problem than simulating an eye.


But what about correcting for the eye (basically simulating glasses)? My eyes are pretty bad!

    All rendering is a hack. Raytracing is just a hack along a different axis to rasterising.


On the scale of realistically modelling the world we see, it's a hell of a lot closer than standard rasterising techniques. That's if you push real-time radiosity, shadows, light scattering, etc... But I digress.
  • I also think ray-tracing the path of light in the HMD and simulating the properties of the eye could help define a better distortion calculation. I'm working on this at the moment but it's difficult to get anything meaningful without knowing the exact characteristics of the lenses.

I've tried to contact UltraOptix, which I think is the manufacturer of the lenses used in the DK1, but I didn't get an answer. For now I'm best-guessing sensible values for a 7X aspheric lens (and other lenses), and I've implemented a ray-tracer showing the path of the light.

It's only 2D and used for FOV calculations for now, but my original goal was to extend it to 3D to simulate what the eye can see through the lenses (probably using the Liou-Brennan eye model, which seems to be the most accurate). For now my calculations use the Wiley 2008 eye model and only take the nodal point into account as the centre of projection (all the other sizes are physically correct but not used in the calculations).
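At its core, a 2D lens tracer like that is repeated application of Snell's law at each surface the ray crosses. A generic refraction helper in vector form (a sketch, not the poster's actual code):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

// Snell's law in vector form. d: unit incident direction; n: unit surface
// normal facing the incident side; eta: n1/n2. Returns false on total
// internal reflection.
bool refract2d(Vec2 d, Vec2 n, float eta, Vec2& out)
{
    float cosi = -dot(n, d);
    float k = 1.0f - eta * eta * (1.0f - cosi * cosi);
    if (k < 0.0f)
        return false; // total internal reflection
    float a = eta * cosi - std::sqrt(k);
    out = { eta * d.x + a * n.x, eta * d.y + a * n.y };
    return true;
}
```

Tracing a collimated bundle through both lens surfaces is then just a matter of calling this at each intersection with the correct eta.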

Ideally, the light emission from the display and the loss of light through refraction and reflection should be modeled as well, to get something really close to reality and to correct other aspects of the distortion. Varying illumination, for example, which seems to happen according to John Carmack (if the optics are really the cause of that problem).

    Don't hold your breath though, I've been on this for months and it's not really simple. In the end I'm not even sure it's worth it but I had to give it a go.

Examples :

[Attached images: Aspheric-7Xa.png, Aspheric-7X-vertical.png. Collimated rays and GUI screenshots not shown inline (can't attach more than 2 images).]
  • rjoyce
    Honored Guest
Did you write that ray tracing program yourself? It looks pretty nifty. Any chance you would be able to share it? :D
Yes, I wrote it myself. I've no plans to share it for now, or at least not with anyone.

EDIT: but I'll share the results I come up with for sure, if any...
  • obzen
    Expert Protege
Returning to my experiments...

Going back to what Tom says, it seems that I can use the distortion mesh itself to compute the ray information going through each fragment.

Looking like so...

    C++


    // have Oculus create the distortion mesh for us.
    ovrHmd_CreateDistortionMesh(hmd, eyeType, hmdDesc.DefaultEyeFov[eyeType], distortionCaps, &m_meshData);

    // render the mesh somewhat.
    renderDistortionMesh(m_meshData);



    Vertex Shader


#version 330 compatibility

varying vec3 rayOrigin;
varying vec3 rayDirection;

void main(void)
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    // not sure about this, other than as a principle...
    rayDirection = gl_NormalMatrix * normalize(gl_Vertex.xyz);
    rayOrigin = gl_Vertex.xyz; // where the pixel is, in model space
}



    Fragment Shader


// fragment shader
#version 330 compatibility

varying vec3 rayOrigin;
varying vec3 rayDirection;

vec4 rayMarch(in vec3 from, in vec3 dir)
{
    // ray-march something...
    return vec4(0.0); // placeholder colour
}

void main(void)
{
    // re-normalise: interpolation across the triangle doesn't preserve unit length
    gl_FragColor = rayMarch(rayOrigin, normalize(rayDirection));
}


    Simple.... Maybe too simple!
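One way to sanity-check the rayMarch() stub is a CPU reference of the standard sphere-tracing loop it would contain. This sketch returns a hit distance against a unit-sphere distance field rather than a colour, and is generic rather than Rift-specific:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 along(Vec3 origin, Vec3 dir, float t)
{
    return { origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t };
}

// Signed distance to a unit sphere at the origin.
static float sdfSphere(Vec3 p)
{
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

// Sphere tracing: step by the distance-field value, which is always a safe
// step because nothing is closer than that. Returns the hit distance along
// the ray, or -1 on a miss.
float rayMarch(Vec3 from, Vec3 dir)
{
    float t = 0.0f;
    for (int i = 0; i < 128; ++i) {
        float d = sdfSphere(along(from, dir, t));
        if (d < 1e-4f)
            return t; // close enough to the surface: hit
        t += d;
        if (t > 100.0f)
            break; // marched past the far limit
    }
    return -1.0f; // miss
}
```

In the fragment shader the same loop would run against the scene's distance function, shading at the hit point instead of returning t.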