Oculus Rift + Point Sprites + Point size attenuation.

Gordath
Honored Guest
Hello there,

I am coding a small project with Oculus Rift support, and I use point sprites to render my particles. I calculate the size of the point sprites in pixels, based on their distance from the "camera", in the vertex shader. When drawing on the default screen (not on the Rift) the size works perfectly, but when I switch to the Rift I notice these phenomena:

- The particles on the left eye are small and shrink in size very rapidly.
- The particles on the right eye are huge and do not change in size.

Screenshots:
Rift disabled: http://i.imgur.com/EoguiF0.jpg
Rift enabled: http://i.imgur.com/4IcBCf0.jpg

Here is the vertex shader:
#version 120

attribute vec3 attr_pos;
attribute vec4 attr_col;
attribute float attr_size;

uniform mat4 st_view_matrix;
uniform mat4 st_proj_matrix;
uniform vec2 st_screen_size;

varying vec4 color;

void main()
{
    vec4 local_pos = vec4(attr_pos, 1.0);
    vec4 eye_pos = st_view_matrix * local_pos;

    /* Project a vector of length attr_size, aligned with the X axis and placed
     * at the particle's view-space depth, then scale by the viewport width to
     * get the point size in pixels. */
    vec4 proj_voxel = st_proj_matrix * vec4(attr_size, 0.0, eye_pos.z, eye_pos.w);
    float proj_size = st_screen_size.x * proj_voxel.x / proj_voxel.w;

    gl_PointSize = proj_size;
    gl_Position = st_proj_matrix * eye_pos;

    color = attr_col;
}

The st_screen_size uniform is the size of the viewport. Since I am using a single framebuffer when rendering on the Rift (one half for each eye), the value of st_screen_size should be (framebuffer_width / 2.0, framebuffer_height).

Here is my draw call:
/* Drawing starts with a call to ovrHmd_BeginFrame. */
ovrHmd_BeginFrame(game::engine::ovr_data.hmd, 0);

/* Start drawing onto our texture render target. */
game::engine::ovr_rtarg.bind();
glClearColor(0, 0, 0, 1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// Update the particles.
game::engine::nuc_manager->update(dt, get_msec());

/* For each eye... */
for(unsigned int i = 0; i < 2; i++) {
    ovrEyeType eye = game::engine::ovr_data.hmd->EyeRenderOrder[i];

    /* -- Viewport Transformation --
     * Set up the viewport to draw in the left half of the framebuffer when we're
     * rendering the left eye's view (0, 0, width / 2.0, height), and in the right
     * half of the framebuffer for the right eye's view (width / 2.0, 0, width / 2.0, height).
     */
    int fb_width = game::engine::ovr_rtarg.get_fb_width();
    int fb_height = game::engine::ovr_rtarg.get_fb_height();

    glViewport(eye == ovrEye_Left ? 0 : fb_width / 2, 0, fb_width / 2, fb_height);

    // Send the viewport size to the shader.
    set_unistate("st_screen_size", Vector2(fb_width / 2.0, fb_height));

    /* -- Projection Transformation --
     * We'll just have to use the projection matrix supplied by the Oculus SDK for
     * this eye. Note that libovr matrices are the transpose of what OpenGL expects,
     * so we have to send the transposed ovr projection matrix to the shader.
     */
    proj = ovrMatrix4f_Projection(game::engine::ovr_data.hmd->DefaultEyeFov[eye], 0.01, 40000.0, true);

    Matrix4x4 proj_mat;
    memcpy(proj_mat[0], proj.M, 16 * sizeof(float));

    // Send the projection matrix to the shader.
    set_projection_matrix(proj_mat);

    /* -- View/Camera Transformation --
     * We need to construct a view matrix by combining all the information provided
     * by the Oculus SDK about the position and orientation of the user's head in the world.
     */
    pose[eye] = ovrHmd_GetHmdPosePerEye(game::engine::ovr_data.hmd, eye);

    camera->reset_identity();

    camera->translate(Vector3(game::engine::ovr_data.eye_rdesc[eye].HmdToEyeViewOffset.x,
            game::engine::ovr_data.eye_rdesc[eye].HmdToEyeViewOffset.y,
            game::engine::ovr_data.eye_rdesc[eye].HmdToEyeViewOffset.z));

    /* Construct a quaternion from the data of the Oculus SDK and rotate the view matrix. */
    Quaternion q = Quaternion(pose[eye].Orientation.w, pose[eye].Orientation.x,
            pose[eye].Orientation.y, pose[eye].Orientation.z);
    camera->rotate(q.inverse().normalized());

    /* Translate the view matrix with the positional tracking. */
    camera->translate(Vector3(-pose[eye].Position.x, -pose[eye].Position.y, -pose[eye].Position.z));

    camera->rotate(Vector3(0, 1, 0), DEG_TO_RAD(theta));

    // Send the view matrix to the shader.
    set_view_matrix(*camera);

    game::engine::active_stage->render(STAGE_RENDER_SKY | STAGE_RENDER_SCENES | STAGE_RENDER_GUNS |
            STAGE_RENDER_ENEMIES | STAGE_RENDER_PROJECTILES, get_msec());
    game::engine::nuc_manager->render(RENDER_PSYS, get_msec());
    game::engine::active_stage->render(STAGE_RENDER_COCKPIT, get_msec());
}

/* After drawing both eyes into the texture render target, revert to drawing directly
 * to the display, and call ovrHmd_EndFrame to let the Oculus SDK draw both images,
 * compensated for lens distortion and chromatic aberration, onto the HMD screen.
 */
game::engine::ovr_rtarg.unbind();

ovrHmd_EndFrame(game::engine::ovr_data.hmd, pose, &game::engine::ovr_data.fb_ovr_tex[0].Texture);


This problem has troubled me for many days now... and I feel like I have reached a dead end. I could just use billboarded quads... but I don't want to give up that easily 🙂 Plus, point sprites are faster.
Does the math behind point size attenuation based on distance change when rendering on the Rift?
Am I not taking something into account?
Math is not (yet, at least) my strongest point. 🙂 Any insight will be greatly appreciated!

PS: If any additional information about the code I posted is required, I will gladly provide it.
3 REPLIES

vrdaveb
Oculus Staff
It sounds like your second pass might be interfering with your first. You could try rendering to two separate eye textures to confirm. Be sure to use glScissor to restrict each eye's rendering commands to the part of the texture you are interested in. Try calling the following after glViewport:
glScissor(0, GetSystemMetrics(SM_CYFULLSCREEN)/2, GetSystemMetrics(SM_CXFULLSCREEN), GetSystemMetrics(SM_CYFULLSCREEN)/2+100);
glEnable(GL_SCISSOR_TEST);
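
For a single side-by-side render target like the one in the code above, an equivalent per-eye scissor can simply mirror the glViewport call (a sketch, reusing the eye, fb_width and fb_height variables from the draw loop; not tested against this codebase):

    // Clip all rendering (including clears) to the current eye's half of the target.
    glScissor(eye == ovrEye_Left ? 0 : fb_width / 2, 0, fb_width / 2, fb_height);
    glEnable(GL_SCISSOR_TEST);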

nuclear
Explorer
"Gordath" wrote:


vec4 local_pos = vec4(attr_pos, 1.0);
vec4 eye_pos = st_view_matrix * local_pos;
vec4 proj_voxel = st_proj_matrix * vec4(attr_size, 0.0, eye_pos.z, eye_pos.w);
float proj_size = st_screen_size.x * proj_voxel.x / proj_voxel.w;

gl_PointSize = proj_size;


Basically you are first transforming your point to view space to figure out its Z coordinate (its distance from the viewer), then constructing a vector aligned with the X axis with the desired particle size, and projecting that to see how many pixels it covers when projected and viewport-transformed (sort of).

This is perfectly reasonable, assuming your projection matrix is symmetrical. This assumption is wrong when dealing with the rift. I've drawn a diagram to illustrate the problem better:

[diagram.jpg: illustration of a symmetric vs. an asymmetric view frustum]

As you can see, when the frustum is asymmetrical, which is certainly the case with the Rift, using the distance of the projected point from the center of the screen will give you wildly different values for each eye, and certainly different from the "correct" projected size you're looking for.
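
Concretely: the first row of an OpenGL projection matrix is (P00, 0, P02, 0), so proj_voxel.x = P00 * attr_size + P02 * eye_pos.z. A symmetric frustum has P02 = (right + left) / (right - left) = 0, and the depth term vanishes. The Rift's off-center per-eye frusta have nonzero P02, with opposite sign in each eye, so the computed size picks up a depth-dependent offset that shrinks one eye's particles and inflates the other's, which would explain the symptoms described above.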

What you must do instead is project two points, say (0, 0, z, 1) AND (attr_size, 0, z, 1), using the same method, and compute their difference in screen space (after projection, perspective divide, and viewport transform).
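
Along those lines, a possible rewrite of the size computation (an untested sketch, keeping the attribute and uniform names from the shader above):

    void main()
    {
        vec4 eye_pos = st_view_matrix * vec4(attr_pos, 1.0);

        /* Project two points at the particle's view-space depth: one at x = 0
         * and one at x = attr_size. */
        vec4 p0 = st_proj_matrix * vec4(0.0, 0.0, eye_pos.z, eye_pos.w);
        vec4 p1 = st_proj_matrix * vec4(attr_size, 0.0, eye_pos.z, eye_pos.w);

        /* Perspective-divide both, then convert the NDC difference to pixels.
         * NDC spans 2 units across the viewport, hence the factor of 0.5. */
        float ndc_dx = p1.x / p1.w - p0.x / p0.w;
        gl_PointSize = 0.5 * st_screen_size.x * ndc_dx;

        gl_Position = st_proj_matrix * eye_pos;
        color = attr_col;
    }

Since both points share the same depth, their w components are equal and the asymmetric offset terms cancel in the subtraction, so the result is the same for both eyes.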
John Tsiombikas webpage - blog - youtube channel

Gordath
Honored Guest
Thank you all for your input on the matter. Nuclear, this is the solution I was looking for, thank you very much!