Forum Discussion

Veesus
Honored Guest
9 years ago

Orthographic Projection

Hi Everyone,

Apologies if this has been asked before, but I'm struggling to find an answer and Google isn't turning much up!

We already have a stereo rendering path in our code which we are reusing for the Oculus. This means we end up with a texture for each eye with the correctly separated image drawn on it.

From the Oculus SDK we have a single texture; we set glViewport/glScissor as appropriate for each eye and draw the per-eye textures from stage 1 into the Oculus texture.

On the headset we get the correct images in each eye piece, however as expected they don't align properly as they've not had a correct projection matrix applied to them to ensure alignment.

Our current texture-quad renderer sets the ModelView matrix to identity, sets the Projection matrix to identity, and then calls glOrtho(0, 1, 1, 0, -1, 1).

My guess is we need to replace the glOrtho call with an ortho matrix supplied by the Oculus SDK. Indeed, there is even a function for this: ovrMatrix4f_OrthoSubProjection.

However, no matter what I do with the matrix supplied by this method, I can't get anything rendered into the Oculus, so I'm a bit stumped about how to use it. I've tried loading it directly, and I've tried multiplying it with the current glOrtho matrix, all to no avail.

Does anybody have an example of how to use this matrix to correctly render 2D full screen textures to each eye piece of the Oculus?

Or have I overthought this and missed something obvious?

Many Thanks

Mark

4 Replies

  • Veesus
    Honored Guest
    OK, some progress has been made.

    I was on the right path, but the texture quad was being drawn too small to be seen. Once I changed the orthoScale values to (1,1) I could see something. My quad drawing calls had to be altered, though, to something like glVertex2f(-1, 0.8) to compensate for the height of the texture. This gives a result which is almost right, but with some distortion around the fringes.

    The documentation says I should be supplying (1/pixelsPerTanAngleAtCenter.x, 1/pixelsPerTanAngleAtCenter.y) as the scale value, not (1,1). pixelsPerTanAngleAtCenter ends up being something like 730 on the CV1. I get the impression that if I do this I will need to supply the texture pixel width/height to glVertex2f for my quad rather than the usual -1..1, but it's not clear...

    I guess I'm just looking for some advice on the best way to approach these values to get a nicely rendered full screen quad per eye!

    Many Thanks

    Mark

  • Anonymous
    The DirectX ortho matrices use pixel coordinates, not world-space coordinates, with the centre at (0,0). So with a resolution of, say, 800 x 600, the top-left corner is (-400, 300). I can only speculate that this system uses similar metrics.
  • Veesus
    Honored Guest


    Quoting Anonymous: "the DirectX Ortho matrices use pixel coordinates not world space coordinates with the center being 0,0 so with a resolution of say 800 X 600 the top left corner is -400,300 I can only speculate that this system uses similar metrics."


    Gold star this man.

    This has indeed got things working sensibly now, without any distortion. I did, however, have to set the scale to 1.1/pixelsPerTanAngleAtCenter; otherwise the image only just filled the goggles, and if you moved them slightly you could see a discernible straight black edge to the left and right of the view.

    Thank you!

    Mark
  • galopin
    Heroic Explorer
    @Veesus

    You should render your 2D stuff to a separate surface and use layers to composite it, if possible. That lets you retain better texel density for fonts and the like even if you reduce the 3D scene resolution for performance. And that way you only have to render the 2D stuff once, not once per eye.