Forum Discussion
whitehexagon
Explorer · 11 years ago
OpenGL ModelView for head movement & scale.
I'm working with JOGL, JOVR, SDK based rendering.
In my OpenGL gl init method I added some pretty standard lighting. What I've found, though, is that the lighting of my scene adjusts as I move my head! Like using the Rift isn't freaky enough already :)
I'm guessing that is because I am transforming the whole scene rather than the camera. I've done some OpenGL long ago using gluLookAt, but I was hoping to find a modern example of how to do this that I can play around with to understand. The only Rift-style example I found, and the one I have been trying to get working, is below. I'm OpenGL 2.1 based, btw.
My math is bad, and this is the sort of code I got working once, a long time ago, and never touched again, so please be patient with me. I'm having a read of my Red Book, and I get the impression I might need a two-step approach, so that I apply a model transform, do all my drawing, and then apply the view transform?
Related to this issue: I tried to render just a floor underneath. I did a transform on the Y axis based on eye height and drew a flat plane (which I finally managed to texture, yay!) in the XZ plane. Using the following code I can look around the scene fine, but if I lower my head slightly I can already see under the 'floor'.
So I saw a note about scene scale in the developer guide. My plan was that 1 unit in OpenGL would be 1 meter, which would tie in nicely with eye height. Is that a sensible approach, and if so, where am I going wrong, please?
At least this exercise has made me realise why, in my old projects, my Blender exports were always rotated 90 deg. I always pictured Z as being up in my game world, whereas Y-up seems to be more of a standard. Oops.
Any help / code :) much appreciated!
MatrixStack.PROJECTION.set(projections[eye]);
eyeRenderPose[eye].Orientation = pose[eye].Orientation;
eyeRenderPose[eye].Position = pose[eye].Position;
gl.glBindFramebuffer(GL2.GL_FRAMEBUFFER, fboIds[eye]);
gl.glClear(GL2.GL_COLOR_BUFFER_BIT | GL2.GL_DEPTH_BUFFER_BIT);
gl.glLoadIdentity();
//draw HUD here?
MatrixStack mv = MatrixStack.MODELVIEW;
mv.push();
{
mv.preTranslate(RiftUtils.toVector3f(eyeRenderPose[eye].Position).mult(-1));
mv.preRotate(RiftUtils.toQuaternion(eyeRenderPose[eye].Orientation).inverse());
mv.preTranslate(RiftUtils.toVector3f(eyeRenderDescs[eye].ViewAdjust));
mv.translate(new Vector3f(0, eyeHeight, 0 )).scale(ipd);
FloatBuffer mvBuffer = ByteBuffer.allocateDirect(16*4).order(ByteOrder.nativeOrder()).asFloatBuffer();
MatrixStack.MODELVIEW.top().fillFloatBuffer(mvBuffer, true);
mvBuffer.rewind();
gl.glLoadMatrixf(mvBuffer);
// translate on the y axis -eyeHeight
//draw floor
// translate on the y axis +eyeHeight
//draw cube
gl.glBindFramebuffer(GL2.GL_FRAMEBUFFER, 0);
}
mv.pop();
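A note on the lighting symptom described above: in fixed-function OpenGL, a light position passed to glLightfv(GL_LIGHT0, GL_POSITION, ...) is transformed by the modelview matrix in effect at the time of the call, so a light specified once at init (under an identity modelview) ends up fixed to the head rather than to the world. A dependency-free sketch of the difference, using a pure yaw as a stand-in for the view matrix (the class and method names here are illustrative only):

```java
// Fixed-function GL stores light positions in eye space: the position you
// pass to glLightfv is multiplied by the current modelview matrix.
// A world-fixed light must therefore be re-specified each frame, after the
// camera/view transform has been loaded.
public class LightSpace {
    // Apply a yaw-only "view matrix" (rotation about Y by deg degrees)
    // to a world-space position, yielding the eye-space position that
    // fixed-function GL would store for the light.
    static float[] yawView(float deg, float[] world) {
        double r = Math.toRadians(deg);
        float c = (float) Math.cos(r), s = (float) Math.sin(r);
        return new float[] {
            c * world[0] + s * world[2],
            world[1],
            -s * world[0] + c * world[2]
        };
    }
}
```

With the head yawed 90 deg, a world light at (1, 0, 0) lands at eye-space (0, 0, -1); if the light is instead specified under an identity modelview it stays at (1, 0, 0) in eye space, i.e. it follows the head. In JOGL terms, the likely fix is to re-send the light position with gl.glLightfv each frame, right after gl.glLoadMatrixf(mvBuffer).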
9 Replies
- whitehexagon (Explorer): I finally fixed the lighting. I had both geometry and normal issues, due to trying to rotate models from old source code by 90 deg to match the Oculus coordinate system. I also accidentally brought in a spot light that was causing some strange effects as things moved around.
Now I'm just struggling with scale. I've been reading lots, and even know what a gimbal is now, and the problems associated with them. I can't get my head around quaternions, though, but at least some matrix multiplication is starting to make sense. Anyway, I could really use some help setting up the correct MODELVIEW matrix. I can't be the first to struggle with this?
So if eye height = 1.6 m, and I translate on Y by -1.6, draw the ground, and draw a box 10 m away, then I should not be able to move my head 30 cm and see below or inside those objects...
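For what it's worth, treating one GL unit as one meter is the usual approach, and it makes the eye-height arithmetic easy to sanity check. A trivial sketch of that arithmetic (the class and method names are illustrative):

```java
// With 1 unit = 1 m, a camera at eyeHeight (plus the tracked head offset)
// above a floor at world y = 0 sees the floor at this eye-space height.
public class FloorScale {
    static float floorEyeSpaceY(float eyeHeight, float trackedHeadY) {
        // The view translation on Y is -(eyeHeight + trackedHeadY),
        // so a world point at y = 0 lands at:
        return -(eyeHeight + trackedHeadY);
    }
}
```

Lowering the head by 0.3 m (trackedHeadY = -0.3) should leave the floor 1.3 m below the eye, still well out of view for a camera looking horizontally; if the floor comes up through the view instead, the positional pose is probably being applied with the wrong sign or scale.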
This is my reshape:
gl2.glMatrixMode(GLMatrixFunc.GL_PROJECTION);
gl2.glLoadIdentity();
GLU glu = new GLU();
glu.gluPerspective(45.0f, ((float) width / (float) height), 0.1f, 1000000.0f);
gl2.glMatrixMode(GLMatrixFunc.GL_MODELVIEW);
MatrixStack.MODELVIEW.set(player.invert());
recenterView();
and my initial FOV setup:
for (int eye = 0; eye < 2; ++eye) {
fovPorts[eye] = hmd.DefaultEyeFov[eye];
projections[eye] = RiftUtils.toMatrix4f(
Hmd.getPerspectiveProjection(
fovPorts[eye], 0.1f, 1000000f, true));
}
and using the matrix calc from the first post. Any help would be appreciated, I need a weekend off :)
- bluenote (Explorer): 1. Regarding the projection matrix: why do you set up your projection matrix via gluPerspective? Use the projection matrices that you obtain from Hmd.getPerspectiveProjection.
2. Regarding the modelview matrix: it all just depends on what you want to achieve, and you have to get your matrix multiplications in the right order. Let's assume you want to apply the following transformation w.r.t. world scale: (1) translate to the camera position, (2) rotate the camera based on the HMD rotation, and (3) translate by IPD/2 for the left/right eye camera. Each transformation corresponds to a 4x4 affine matrix:
M1 = translation matrix of (negated) static position + positional part of the pose
M2 = rotation matrix obtained from the pose quaternion
M3 = translation matrix obtained via eye separation
And since the modelview matrix left-multiplies the vertex vectors you have to reverse the order to create the overall transformation:
Modelview = M3 * M2 * M1
- whitehexagon (Explorer): I'm just trying to get something like the Oculus config desk demo working. It would be nice to see an OpenGL version of that, since the scale and movement feel spot on! :)
I'm also wanting to add some movement for 'walking around' or at least pushing the wheelie chair around :)
1. Initially I was using the same projection for each eye, which seemed to make sense from what I was reading. Anyway now I'm doing this:
(Please note that some of this code is from other people's demos and quite frankly beyond my mathematics ability.)
public void display(GLAutoDrawable drawable) {
hmd.beginFrameTiming(++frameCount);
GL2 gl = drawable.getGL().getGL2();
Posef pose[] = new Posef[2];
pose[ovrEyeType.ovrEye_Left] = hmd.getEyePose(ovrEyeType.ovrEye_Left);
pose[ovrEyeType.ovrEye_Right] = hmd.getEyePose(ovrEyeType.ovrEye_Right);
for (int eyeIndex = 0; eyeIndex < ovrEyeType.ovrEye_Count; eyeIndex++){
int eye = hmd.EyeRenderOrder[eyeIndex];
eyeRenderPose[eye].Orientation = pose[eye].Orientation;
eyeRenderPose[eye].Position = pose[eye].Position;
gl.glBindFramebuffer(GL2.GL_FRAMEBUFFER, fboIds[eye]);
gl.glClear(GL2.GL_COLOR_BUFFER_BIT | GL2.GL_DEPTH_BUFFER_BIT);
gl.glMatrixMode(GL2.GL_PROJECTION);
gl.glLoadMatrixf(projectionDFB[eye]);
gl.glMatrixMode(GL2.GL_MODELVIEW);
MatrixStack mv = MatrixStack.MODELVIEW;
mv.push();
{
mv.preTranslate(RiftUtils.toVector3f(eyeRenderPose[eye].Position).mult(-1));
mv.preRotate(RiftUtils.toQuaternion(eyeRenderPose[eye].Orientation).inverse());
mv.preTranslate(RiftUtils.toVector3f(eyeRenderDescs[eye].ViewAdjust));
mv.translate(new Vector3f(0, eyeHeight, 0 ));
modelviewDFB.clear();
MatrixStack.MODELVIEW.top().fillFloatBuffer(modelviewDFB, true);
modelviewDFB.rewind();
gl.glLoadMatrixf(modelviewDFB);
//tiles on floor
gl.glEnable(GL2.GL_TEXTURE_2D);
gl.glBindTexture(GL2.GL_TEXTURE_2D, cheq.getId());
gl.glTranslatef(0.0f, -eyeHeight, 0.0f);
mof.drawPlaneXZ(gl);
gl.glTranslatef(0.0f, eyeHeight, 0.0f);
gl.glDisable(GL2.GL_TEXTURE_2D);
}
mv.pop();
}
gl.glBindFramebuffer(GL2.GL_FRAMEBUFFER, 0);
gl.glBindTexture(GL2.GL_TEXTURE_2D, 0);
gl.glDisable(GL2.GL_TEXTURE_2D);
hmd.endFrame(eyeRenderPose, eyeTextures);
}
@Override
public void reshape(GLAutoDrawable drawable, int x, int y, int width, int height) {
System.out.println("reshape loc "+x+","+y+" size "+width+"x"+height);
GL2 gl = drawable.getGL().getGL2();
gl.glMatrixMode(GL2.GL_PROJECTION);
for (int eye = 0; eye < 2; ++eye) {
MatrixStack.PROJECTION.set(projections[eye]);
gl.glMatrixMode(GL2.GL_PROJECTION);
projectionDFB[eye] = ByteBuffer.allocateDirect(16*4).order(ByteOrder.nativeOrder()).asFloatBuffer();
MatrixStack.PROJECTION.top().fillFloatBuffer(projectionDFB[eye], true);
projectionDFB[eye].rewind();
gl.glLoadMatrixf(projectionDFB[eye]);
}
gl.glMatrixMode(GLMatrixFunc.GL_MODELVIEW);
gl.glLoadIdentity();
MatrixStack.MODELVIEW.set(player.invert());
recenterView();
}
} //end inner class
private void recenterView() {
Vector3f center = Vector3f.UNIT_Y.mult(eyeHeight);
Vector3f eye = new Vector3f(0, eyeHeight, ipd * 5.0f);
player = Matrix4f.lookat(eye, center, Vector3f.UNIT_Y).invert();
hmd.recenterPose();
}
The lookAt does this:
public static Matrix4f lookat(Vector3f eye, Vector3f center, Vector3f up) {
Vector3f f = center.subtract(eye).normalize();
Vector3f s = f.cross(up).normalize();
Vector3f u = s.cross(f);
Matrix4fTemp m = new Matrix4fTemp();
m.m00 = s.x;
m.m01 = s.y;
m.m02 = s.z;
m.m10 = u.x;
m.m11 = u.y;
m.m12 = u.z;
m.m20 = -f.x;
m.m21 = -f.y;
m.m22 = -f.z;
m.m03 = -s.dot(eye);
m.m13 = -u.dot(eye);
m.m23 = f.dot(eye);
return new Matrix4f(m);
}
So to me the M1 'static position' seems to be coming from the lookAt function. What I originally posted had a strange scale(ipd), which I removed. I tried the M3.M2.M1 order, but it doesn't seem to be much different from M1.M2.M3, so I assume that means something is still very badly set up.
Symptoms currently: the floor (a plane in XZ) shears when looking left and right, and even seems to tilt up slightly. The scale doesn't feel right, but that could be a symptom of the above. In fact, testing all this is quite nauseating, and I can only do so much each day, which is kind of frustrating.
Also strange is that although the camera is directly in front of me, and my plane is drawn along the Z axis, the scene starts with my view looking about 20 deg to the left of where I would imagine the Z axis to be, i.e. the line running from my HMD to the camera.
- whitehexagon (Explorer): Things are getting worse the more I try to do this. Could it be anything to do with these neck-eye distances, i.e. could I be pivoting around the wrong point? Or is it that the projection already has the IPD offset and then it is applied again as part of the modelview? I'm clutching at straws at this point. Does anyone have an OpenGL snippet doing anything similar for me to study, please? It seems there are a lot fewer 'developers' on these boards since DKs started arriving :)
- nuclear (Explorer):
"whitehexagon" wrote:
Does anyone have an OpenGL snippet doing anything similar for me to study, please? It seems there are a lot fewer 'developers' on these boards since DKs started arriving :)
I have posted my minimal OpenGL Oculus test code before in another thread. I don't know if it'll help you solve your problem, but feel free to take a look: http://nuclear.mutantstargoat.com/hg/oculus2 (Mercurial repository).
See my blog (in my sig) for a screenshot of what this code is doing; it's in my last blog post. I tried posting the image link here, but for some reason if I do that, my message gets flagged as spam...
- bluenote (Explorer): Yes, maybe it is best if you just start from a working example. I do not understand what you are trying to achieve with this lookAt function, and it looks a bit as if you are going in the wrong direction here. I think you over-complicate things by trying to get the viewing matrix from eye+center+up vectors. The rotational part of your transformation can easily be obtained from the pose quaternion: convert this quaternion to a 3x3 rotation matrix and embed it in a 4x4 matrix. If you are using immediate mode, you can use glMultMatrix to apply the rotation. If you are using shaders, you have to apply the transformation in a "translate -- rotate -- translate" order, for example by representing all 3 steps as 4x4 matrices and multiplying them. You could also do that in immediate mode and use glLoadMatrix with the resulting 4x4 matrix.
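The "translate -- rotate -- translate" composition (Modelview = M3 * M2 * M1 from the earlier reply) can be sketched with plain, dependency-free Java, using column-major float[16] matrices as glLoadMatrixf expects. The class and helper names here are illustrative only, not JOVR or SDK API:

```java
// Sketch: build a view matrix by multiplying 4x4 matrices in reverse order,
// e.g. view = T(eyeOffset) * R(pose)^-1 * T(-cameraPosition).
// Matrices are column-major float[16], as glLoadMatrixf expects.
public class ViewCompose {
    static float[] identity() {
        float[] m = new float[16];
        m[0] = m[5] = m[10] = m[15] = 1f;
        return m;
    }

    // 4x4 translation matrix (column-major: translation in elements 12..14)
    static float[] translate(float x, float y, float z) {
        float[] m = identity();
        m[12] = x; m[13] = y; m[14] = z;
        return m;
    }

    // Unit quaternion (w, x, y, z) -> column-major 4x4 rotation matrix
    static float[] fromQuat(float w, float x, float y, float z) {
        float[] m = identity();
        m[0] = 1 - 2*(y*y + z*z); m[4] = 2*(x*y - w*z);     m[8]  = 2*(x*z + w*y);
        m[1] = 2*(x*y + w*z);     m[5] = 1 - 2*(x*x + z*z); m[9]  = 2*(y*z - w*x);
        m[2] = 2*(x*z - w*y);     m[6] = 2*(y*z + w*x);     m[10] = 1 - 2*(x*x + y*y);
        return m;
    }

    // c = a * b for column-major 4x4 matrices
    static float[] mul(float[] a, float[] b) {
        float[] c = new float[16];
        for (int col = 0; col < 4; col++)
            for (int row = 0; row < 4; row++)
                for (int k = 0; k < 4; k++)
                    c[col*4 + row] += a[k*4 + row] * b[col*4 + k];
        return c;
    }

    // Transform a point (w = 1) by a column-major 4x4 matrix
    static float[] xform(float[] m, float x, float y, float z) {
        return new float[] {
            m[0]*x + m[4]*y + m[8]*z  + m[12],
            m[1]*x + m[5]*y + m[9]*z  + m[13],
            m[2]*x + m[6]*y + m[10]*z + m[14] };
    }
}
```

A view matrix in the spirit of the thread would then be something like mul(translate(eyeOffsetX, 0, 0), mul(fromQuat(inverse pose), translate(-posX, -posY - eyeHeight, -posZ))), and the result can be handed straight to glLoadMatrixf.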
- tmason101 (Honored Guest):
"bluenote" wrote:
Yes, maybe it is best if you just start from a working example. I do not understand what you are trying to achieve with this lookAt function, and it looks a bit as if you are going in the wrong direction here. I think you over-complicate things by trying to get the viewing matrix from eye+center+up vectors. The rotational part of your transformation can easily be obtained from the pose quaternion: convert this quaternion to a 3x3 rotation matrix and embed it in a 4x4 matrix. If you are using immediate mode, you can use glMultMatrix to apply the rotation. If you are using shaders, you have to apply the transformation in a "translate -- rotate -- translate" order, for example by representing all 3 steps as 4x4 matrices and multiplying them. You could also do that in immediate mode and use glLoadMatrix with the resulting 4x4 matrix.
Hello,
I have been having problems with this as well; is the quaternion retrieved from the Oculus SDK directly transferable to a GLM quaternion or a glm::mat3 (3x3) matrix?
From another thread I started, it seems you have to do some multiplication on the matrix first to change handedness.
I assume this takes care of all rotation (including head tilting). As it stands now I have everything down, except that when you physically tilt left, in the 3D environment (in my code) you tilt right.
Thanks.
- DiCon (Honored Guest): Here is how I do it (although it might not be the fastest implementation, as I first create a matrix from everything, which I find easier to understand...):
for (int eyeIndex = 0; eyeIndex < ovrEye_Count; eyeIndex++) {
ovrEyeType eye = hmd->EyeRenderOrder[eyeIndex];
headPose[eye] = ovrHmd_GetEyePose(hmd, eye);
mat4 ovrOrient = toMat4(quat(
headPose[eye].Orientation.w,
-headPose[eye].Orientation.x,
-headPose[eye].Orientation.y,
-headPose[eye].Orientation.z));
mat4 ovrPos = glm::translate(glm::mat4(1.f),
-vec3(headPose[eye].Position.x,
headPose[eye].Position.y,
headPose[eye].Position.z));
eyeView[eye] = eyeAdjust[eye] * ovrOrient * ovrPos * View;
eyeVP[eye] = eyeProjection[eye] * eyeView[eye];
}
Of course, headPose[2], eyeView[2] and eyeVP[2] have been declared beforehand. eyeProjection[2] is the transposed result of ovrMatrix4f_Projection(EyeRenderDesc[eye].Fov, zNear, zFar, true) for each eye, and View is the view transformation "without the Rift", if necessary (i.e. mouse control, or the orientation of a cockpit to which the user is fixed).
Also note that I am using several parts of GLM, although I cannot tell right away which of them are necessary for this code snippet:
#define GLM_SWIZZLE
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtx/quaternion.hpp>
In regard to the discussion here, you should probably take note of the sign-reversed quaternion, the handedness of the projection matrix (last parameter = true), and the fact that the projection is transposed.
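To illustrate the sign reversal: for a unit quaternion, negating x, y and z gives the conjugate, which represents the inverse rotation, and that is exactly what a view (as opposed to model) transform needs. A small dependency-free sketch (the class and method names are mine, not SDK or GLM API):

```java
// Sketch: the conjugate (w, -x, -y, -z) of a unit quaternion undoes its
// rotation, which is why the view matrix uses the sign-reversed pose.
public class QuatConj {
    // Rotate vector v by unit quaternion q = (w, x, y, z): v' = q v q^-1,
    // using the expanded form v' = v + w*t + cross(q.xyz, t), t = 2*cross(q.xyz, v)
    static float[] rotate(float[] q, float[] v) {
        float w = q[0], x = q[1], y = q[2], z = q[3];
        float tx = 2 * (y * v[2] - z * v[1]);
        float ty = 2 * (z * v[0] - x * v[2]);
        float tz = 2 * (x * v[1] - y * v[0]);
        return new float[] {
            v[0] + w * tx + (y * tz - z * ty),
            v[1] + w * ty + (z * tx - x * tz),
            v[2] + w * tz + (x * ty - y * tx) };
    }

    // Sign-reversed quaternion: inverse rotation for a unit quaternion
    static float[] conjugate(float[] q) {
        return new float[] { q[0], -q[1], -q[2], -q[3] };
    }
}
```

Rotating a vector by q and then by conjugate(q) returns the original vector, which makes a handy unit check when debugging handedness problems like the tilt inversion discussed above.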
edit: Forgot to tell you what eyeAdjust is; I compute it once, when I retrieve the projection matrix:
eyeAdjust[eye] = glm::translate(glm::mat4(1.f),
vec3(EyeRenderDesc[eye].ViewAdjust.x,
EyeRenderDesc[eye].ViewAdjust.y,
EyeRenderDesc[eye].ViewAdjust.z));
- AndyTheBald (Honored Guest): Hey DiCon,
I'm having a similar problem to yours with our left-handed game engine: https://forums.oculus.com/viewtopic.php?f=20&t=17841
Except that when I invert the quaternion and rotate the HMD in 2 axes, it produces the wrong transform. Does this work for you? How did you fix it?
Cheers
Andy