Forum Discussion
Oero
10 years ago · Honored Guest
Rendering a webcam that stands "still" - Solved
Hey!
I've been trying out my Oculus Rift for the last few months and wanted to start developing something myself. I want to start off by creating an environment with a webcam feed that stands still in the environment and doesn't follow when you move your head.
I have based my work on Brad Davis' example code from "Oculus Rift in Action" and the example project called HighResWebcamDemo. Does anyone know what I need to add to the code to make the camera feed stand still in the 3D environment instead of following my head movement?
He creates the MatrixStack as follows:
MatrixStack & mv = Stacks::modelview();
mv.withPush([&]{
  mv.identity();
  glm::quat eyePose = ovr::toGlm(getEyePose().Orientation);
  glm::quat webcamPose = ovr::toGlm(captureData.pose.Orientation);
  glm::mat4 webcamDelta = glm::mat4_cast(glm::inverse(eyePose) * webcamPose);
  mv.preMultiply(webcamDelta);
  mv.translate(glm::vec3(0, 0, -IMAGE_DISTANCE));
  texture->Bind(oglplus::Texture::Target::_2D);
  oria::renderGeometry(videoGeometry, videoRenderProgram);
  oglplus::DefaultTexture().Bind(oglplus::Texture::Target::_2D);
});
Any help would be appreciated!
4 Replies
- jherico, Adventurer
Most of this is wrapper code. The main point is that you render into the scene compensating for the difference in pose between the time of capture and the time of render. The relevant line is the one above where I multiply the inverse eye pose by the webcam pose.
- Oero, Honored Guest
jherico said:
Most of this is wrapper code. The main point is that you render into the scene compensating for the difference in pose between the time of capture and the time of render. The line above where I multiply the inverse eye pose by the webcam pose.
Ok, but then what should I change to make the "screen" showing the webcam feed stand still in the 3D room? I want the screen to sit at one point in the room, like you did with the ColorCube in Example_5_4_RiftSensors. I looked at the difference between the two shaders you used when calling the loadProgram function in the two examples, RiftSensors and HighResWebcamDemo, but I don't know what to look for.
I want the cameras to stand still in the room and not always be in front of the view.
- Oero, Honored Guest
I have now tried something else that sort of works as I want it to.
I set the ovrTrackingState only from the first gathered state. This results in FPS loss, but the 2D texture with the camera feed stands still in the 3D space.
Is there a better way to do it?
- Oero, Honored Guest
I solved my problem for now. I changed from:
CaptureData captured;
float captureTime = ovr_GetTimeInSeconds() - CAMERA_LATENCY;
ovrTrackingState tracking = ovrHmd_GetTrackingState(hmd, captureTime);
captured.pose = tracking.HeadPose.ThePose;
to:
CaptureData captured;
float captureTime = ovr_GetTimeInSeconds() - CAMERA_LATENCY;
ovrTrackingState tracking = ovrHmd_GetTrackingState(hmd, captureTime);
captured.pose = tracking.LeveledCameraPose;
in the captureLoop(). Time to implement 4-6 cameras to display a full circle!