Analyzing the VRTemplate, where and what?

ox
Honored Guest
( still Giles here )

Ok .. I am analyzing/trying to understand the VR template .. ( wondering if I am missing some docs somewhere ??? )

It seems to me there's this class "app" that "initializes a world of stuff".

And it seems to me that most of it boils down to "Frame", which calls this "DrawEyeView( eye, fov )" ..

I am tracing it down to Modelview.cpp, where it seems to call something else that, I suppose, draws a list (?) of "surfaces".

Hum .. let's say I want to override/replace this with my own DrawEyeView() .. where do I render what for the left/right eye? I suppose there have to be a couple of textures and/or render targets I must use?

If I understand correctly, "all the magic of Timewarp" happens/gets applied in "Frame" when

app->DrawEyeViewsPostDistorted() is called ?

Fundamentally "I don't want any scene", "I don't want anything"; DrawEyeView( eye, fov ) is fine as it is, just
tell me what I have to render where 🙂

Basically at some point I will be calling some glDrawArrays() or similar ( and a lot else ).. I also saw that you activate
culling, can I disable it? We don't normally use culling.

I am trying to wrap my head around all this, and once again I'd be glad to see, maybe in the future, "a much more simplified" sample/template.

Thanks in advance for any help.
17 REPLIES

johnc
Honored Guest
Yes, you can completely replace the contents of DrawEyeView() in VrTemplate with your own code. It is already set up to render to the appropriate FBO (all triple buffered properly for async time warp) with all the picky tiler resolve optimization details handled, and the viewport / scissor is set. Draw anything you want here.

If you need to render other textures as part of your scene rendering, do it from the Frame() function; otherwise the transition out of one FBO to another and back can cause performance problems for the tiler GPU, and you can usually share the work for both eyes.

You will probably want to copy / paste / modify some of the view setup code from OvrSceneView::Frame() for the head / neck model and possibly joystick motion.
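
For example, something roughly like this ( a sketch only: myAuxFbo and RenderSharedEnvMap() are placeholder names, and the Scene.CenterViewMatrix() argument is just illustrative, not necessarily the exact SDK call ):

void OvrApp::Frame( const VrFrame vrFrame )
{
    // do any render-to-texture work once per frame, shared by both eyes,
    // so the tiler doesn't have to leave and re-enter an eye buffer FBO
    glBindFramebuffer( GL_FRAMEBUFFER, myAuxFbo );
    RenderSharedEnvMap();
    glBindFramebuffer( GL_FRAMEBUFFER, 0 );

    // then kick off the per-eye rendering / time warp as usual
    app->DrawEyeViewsPostDistorted( Scene.CenterViewMatrix() );
}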

ox
Honored Guest
Ok, but it's still not 100% clear to me. I look at the code and I see this:


Matrix4f OvrSceneView::DrawEyeView( const int eye, const float fovDegrees ) const
{
    const Matrix4f mvpMatrix = MvpForEye( eye, fovDegrees );

    glEnable( GL_DEPTH_TEST );
    glEnable( GL_CULL_FACE );
    glFrontFace( GL_CCW );  // swapped due to axis interchange in timewarp for efficient CPU mapping

    const DrawSurfaceList & surfs = BuildDrawSurfaceList( RenderModels,
            mvpMatrix.Transposed() );
    (void)RenderSurfaceList( surfs );

    DebugLines.Render( mvpMatrix.Transposed() );

    return mvpMatrix;
}


So .. tell me, "are you using the same trick as on the PC", where you had ONE RT/FBO "split in 2 halves" LEFT-RIGHT,
and all one needs to do/know is get that MvpForEye() and render?

I mean, at that point I don't have to know/care about "where is the viewport" and so on.

Literally "could it be" as silly as :


Matrix4f OvrSceneView::DrawEyeView( const int eye, const float fovDegrees ) const
{
    const Matrix4f mvpMatrix = MvpForEye( eye, fovDegrees );

    glEnable( GL_DEPTH_TEST );

    glBindVertexArray( myvertices );
    glDrawArrays( GL_TRIANGLES, 0, 3 );

    return mvpMatrix;
}

And that would do ?

Also, if I understand correctly, "Frame" is called on a per-FRAME basis, so all the logic that needs a per-frame update should
be in there, while DrawEyeView() obviously is called ( at least ) twice per frame and should "just render buffers" and nothing else?

🙂
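In sketch form, something like this ( MyAngle and my_buffer would be hypothetical members of my own app, and I am assuming VrFrame carries a DeltaSeconds ):

void OvrApp::Frame( const VrFrame vrFrame )
{
    // per-frame: simulation / animation state only, no eye-buffer draw calls
    MyAngle += 90.0f * vrFrame.DeltaSeconds;

    // ( plus the app->DrawEyeViewsPostDistorted() call mentioned earlier )
}

Matrix4f OvrApp::DrawEyeView( const int eye, const float fovDegrees )
{
    // per-eye ( called at least twice per frame ): draw calls only
    const Matrix4f mvpMatrix = view->MvpForEye( eye, fovDegrees );

    // ... glDrawArrays() etc., using the MyAngle computed in Frame() ...

    return mvpMatrix;
}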

ox
Honored Guest
Erm, ok, I meant "drawing the scene" using that matrix, which at this point I suppose is a camera * projection matrix or similar. I still have the freedom to use "world" as I wish, right?

Can I change to some different GlProgram() in there ( i.e. use my own vertex/fragment shaders )?

johnc
Honored Guest
Yes, that is basically it.

You can load your own verts, programs, and textures and basically do whatever you want. If you want the screen edge vignette, pass through camera, and dialog panels to render correctly on the view after your drawing, you may need to leave the state reasonably clean when you are done, but if you don't care about those things it should still work no matter what you do. Keep us posted on any issues, you are probably the first major test case. We should try to force as much of the state as possible on our post-rendering to relieve you of the responsibility.

On mobile, each eye gets a separate render texture with the viewport covering the entire surface, which is important for getting good throughput on the tiled GPUs (well, less so now with async time warp, but still good general guidance).
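
For reference, "reasonably clean" at the end of a custom DrawEyeView might look something like this ( a sketch matching the state the template's own DrawEyeView sets; the exact required set may differ ):

// restore the state the post-rendering ( vignette, panels ) expects
glBindBuffer( GL_ARRAY_BUFFER, 0 );
glBindVertexArray( 0 );
glUseProgram( 0 );
glEnable( GL_DEPTH_TEST );
glEnable( GL_CULL_FACE );
glFrontFace( GL_CCW );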

ox
Honored Guest
Still Giles here, putting things together ..

So I am trying to draw from within DrawEyeView, which I modified somewhat like this:


Matrix4f OvrApp::DrawEyeView( const int eye, const float fovDegrees )
{
    const Matrix4f mvpMatrix = view->MvpForEye( eye, fovDegrees );

    LOG( "DrawEyeView called" );

    dev->UseVertexFormat( -1, -1 );  // invalidate current
    dev->UseVertexFormat( 0, 0 );

    dev->UpdateWvpMatrix( (Matrix4f *) &mvpMatrix );

    dev->DrawBufferContents( &my_buffer, GL_TRIANGLES, 0, 0 );

    return mvpMatrix;
}


Apart from various other things, my code initializes that my_buffer with 3 vertices as:


AddColorVertex(&my_buffer,
-0.1f, -0.1f, 5.5f,
0xffffffff,
0.0f,0.0f,
0.0f,0.0f );

AddColorVertex(&my_buffer,
0.1f, -0.1f, 5.5f,
0xffffffff,
0.0f,0.0f,
0.0f,0.0f );

AddColorVertex(&my_buffer,
0.0f, 0.1f, 5.5f,
0xffffffff,
0.0f,0.0f,
0.0f,0.0f );


With a shader as silly as this:


static const char *vshader_basecol =
    "attribute vec3 Position;\n"
    "attribute vec4 Color;\n"
    "attribute vec2 UV;\n"
    "\n"
    "uniform mat4 m_WorldViewProj;\n"
    "uniform vec4 vtint;\n"
    "\n"
    "varying vec2 oTexCoord;\n"
    "varying vec4 thecolor;\n"
    "\n"
    "void main()\n"
    "{\n"
    "    gl_Position = vec4( Position, 1.0 );\n"          // m_WorldViewProj * left out for the test; note GLSL ES has no 'f' suffix on float literals
    "    thecolor = vec4( 1.0, 1.0, 1.0, 1.0 );\n"        // Color * vtint left out for the test
    "    oTexCoord = UV;\n"
    "}\n";

static const char *pshader_basecol =
    "precision mediump float;\n"                          // required in GLSL ES fragment shaders
    "\n"
    "varying vec4 thecolor;\n"
    "\n"
    "void main()\n"
    "{\n"
    "    gl_FragColor = thecolor;\n"
    "}\n";


However I try to draw, I see nothing anywhere ..

My TEST draw function so far - extremely simplified - does this:


int NeonDeviceVR::DrawBufferContents( GenericVertexBuffer *vbuf, GLenum prim, int numvertices, int dataindex )
{
    int stride;
    int count;

    count = numvertices;
    if ( !count ) count = vbuf->vsize;

    // bind the buffer we are using
    glBindBuffer( GL_ARRAY_BUFFER, vbuf->buffer_id );

    // check if we may need to update the vertex format
    if ( vertex_format_to_use != current_vertex_format )
    {
        current_vertex_format = vertex_format_to_use;
        switch ( current_vertex_format )
        {
        case COLORVERTEX_BASIC:

            I_LOG( "Bind to colorvertex basic" );

            stride = sizeof( ColorVertex );

            glEnableVertexAttribArray( ColorVertexElements::kVertexPosition );
            glEnableVertexAttribArray( ColorVertexElements::kVertexColor );
            glEnableVertexAttribArray( ColorVertexElements::kVertexUv );

            glVertexAttribPointer( ColorVertexElements::kVertexPosition, 3, GL_FLOAT, GL_FALSE, stride, BUFFER_OFFSET( 0 ) );
            // note: packed byte colors usually want normalized = GL_TRUE so 0..255 maps to 0.0..1.0
            glVertexAttribPointer( ColorVertexElements::kVertexColor, 4, GL_UNSIGNED_BYTE, GL_FALSE, stride, BUFFER_OFFSET( 12 ) );
            glVertexAttribPointer( ColorVertexElements::kVertexUv, 2, GL_UNSIGNED_SHORT, GL_FALSE, stride, BUFFER_OFFSET( 16 ) );

            break;
        }
    }

    // now let's see the program
    if ( current_shader_program != shader_program_to_use )
    {
        current_shader_program = shader_program_to_use;
        glUseProgram( MyShaderPrograms[shader_program_to_use].Program );

        I_LOG( "Using program %d", MyShaderPrograms[shader_program_to_use].Program );

        // we changed the shader, better re-update the vars
        UpdateShaderVars( shader_program_to_use );
    }

    glBufferSubData( GL_ARRAY_BUFFER, dataindex, vbuf->size, vbuf->vptr.generic_ptr_banana );

    I_LOG( "draw count %d", count );

    // now we can draw
    glDrawArrays( prim, dataindex, count );

    // finally unbind the buffer
    glBindBuffer( GL_ARRAY_BUFFER, 0 );

    return 0;
}
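
For reference, the vertex layout those attribute pointers imply ( reconstructed from the stride and offsets above; the actual ColorVertex declaration may differ ):

struct ColorVertex
{
    float          x, y, z;      // offset 0:  kVertexPosition, 3 x GL_FLOAT
    unsigned char  rgba[4];      // offset 12: kVertexColor,    4 x GL_UNSIGNED_BYTE
    unsigned short u, v;         // offset 16: kVertexUv,       2 x GL_UNSIGNED_SHORT
};                               // stride = sizeof( ColorVertex ) = 20 bytes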


Everything gets called, and apparently correctly, yet I can't see anything on the display except a blank screen.

QUESTIONS time ..

1. Is there something I'm missing about what to render where?

2. Is there ANY documentation at all "about all this", saying what one should do in that "VR template" thing to have things working?

3. What am I missing here? I am spending ( and I'll spend even more ) time reverse engineering that sample to understand where you draw, what you draw, and how you are supposed to draw.

4. What is the "coordinate system" once I am in "DrawEyeView" and I get that matrix? I suppose you use a right-handed system with positive Z meaning away from the observer, and it looks like your znear is 0.01 and zfar is 2000?

I tried to change x, y, z, tried to force different matrices .. nothing .. not even a fragment of a polygon on screen.

Am I missing something? Do I need to render into some specific FBO or such?

Thanks in advance for any help ..

ox
Honored Guest
Right ... I just found out that all I missed was this :


glClearColor( 0.0f, 0.0f, 0.0f, 1.0f );  // rgba, clear to solid black
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );


I thought your application was already doing that somewhere ...
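
So, for the record, the clear belongs at the top of the overridden DrawEyeView, before any draw calls, since the app does not clear the eye buffers for you ( sketch ):

Matrix4f OvrApp::DrawEyeView( const int eye, const float fovDegrees )
{
    const Matrix4f mvpMatrix = view->MvpForEye( eye, fovDegrees );

    // clear this eye's buffer first; its contents are otherwise undefined
    glClearColor( 0.0f, 0.0f, 0.0f, 1.0f );
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

    // ... draw calls ...

    return mvpMatrix;
}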

ox
Honored Guest
Also this matrix :


const Matrix4f mvpMatrix = view->MvpForEye( eye, fovDegrees );


Looks like it needs to be transposed before passing it to the shader?

And/or something like that right ?


glUniformMatrix4fv( MyShaderPrograms[program].UniformsLoc[0], 1, GL_TRUE, the_WvpMatrix.M[0] );  // GL_TRUE to transpose


I don't quite understand ( speed issues? ) why you transpose it "outside" and then pass it using GL_FALSE.
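
( One plausible reason, for what it's worth: the OpenGL ES 2.0 spec requires the transpose argument of glUniformMatrix*fv to be GL_FALSE, so transposing on the CPU and passing GL_FALSE is the portable form; GL_TRUE may happen to work on some drivers but is out of spec on ES 2.0: )

// portable on GLES2: transpose on the CPU, pass GL_FALSE
glUniformMatrix4fv( MyShaderPrograms[program].UniformsLoc[0], 1, GL_FALSE, the_WvpMatrix.Transposed().M[0] );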

When I apply it in my shader, which does:


" gl_Position = m_WorldViewProj * vec4(Position,1.0f);\n"


I can't see my triangle any more; however, I am not sure what kind of sensor position it's getting, as I don't have any sensor attached right now ( just the phone on the desk ).

If I pass an identity or a scale matrix, I correctly see 2 triangles being rendered ( left and right eye ).

ox
Honored Guest
Ah yes .. the Z axis of course is reversed compared to mine ... yes ..
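
( i.e. with the usual right-handed GL convention, -Z is in front of the viewer, so the test triangle wants negative Z, something like: )

AddColorVertex( &my_buffer,
        -0.1f, -0.1f, -5.5f,  // Z flipped from +5.5f to -5.5f
        0xffffffff,
        0.0f, 0.0f,
        0.0f, 0.0f );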

Now, if I understand correctly, "the last step" is that I have to implement my own version of:


OvrSceneView::Frame


And call it from :


OvrApp::Frame(const VrFrame vrFrame)


Otherwise no sensor data/matrices get generated, right?

Fundamentally, if I understand correctly, I have to completely redo ( according to my needs ) the whole OvrSceneView class, removing all I don't want/need?

ox
Honored Guest
Time 21:39 ... I finally have that triangle, with correct colours, inside "my world" and being tracked by the HMD unit.

I created my own simplified version of ViewFrame() by chopping out all the bits and pieces I did not want.

It seems to work ( I still have to figure out whether to use the head model or not ). One thing I am noticing is a certain chromatic aberration that gets a bit pronounced when moving the head.

Anyway, tomorrow I will continue by adding my "test room" and see how it looks with a slightly more complex object than a triangle in the middle of nowhere.

But yes .. I think "I am starting to get it".