04-14-2016 02:31 AM
Any help would be appreciated!

MatrixStack & mv = Stacks::modelview();
mv.withPush([&]{
    mv.identity();
    glm::quat eyePose = ovr::toGlm(getEyePose().Orientation);
    glm::quat webcamPose = ovr::toGlm(captureData.pose.Orientation);
    glm::mat4 webcamDelta = glm::mat4_cast(glm::inverse(eyePose) * webcamPose);
    mv.preMultiply(webcamDelta);
    mv.translate(glm::vec3(0, 0, -IMAGE_DISTANCE));
    texture->Bind(oglplus::Texture::Target::_2D);
    oria::renderGeometry(videoGeometry, videoRenderProgram);
    oglplus::DefaultTexture().Bind(oglplus::Texture::Target::_2D);
});
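The MatrixStack, Stacks::modelview(), withPush(), and preMultiply() names above are from the poster's wrapper library, not a standard API. As a rough, hypothetical sketch of the contract that withPush() appears to provide (push a copy of the current top matrix, run the caller's lambda, then restore the old top so edits don't leak out):

```cpp
#include <array>
#include <cstddef>
#include <functional>
#include <vector>

// Hypothetical minimal reconstruction of the MatrixStack wrapper used in the
// snippet above. The real one presumably wraps glm::mat4; a bare 4x4 array is
// enough to show the withPush() save/restore behavior.
using mat4 = std::array<float, 16>;

constexpr mat4 kIdentity = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};

class MatrixStack {
public:
    MatrixStack() : stack{kIdentity} {}

    mat4& top() { return stack.back(); }

    void identity() { stack.back() = kIdentity; }

    // Push a copy of the current top, run `f`, then pop: any edits made
    // inside `f` (identity, translate, preMultiply, ...) are discarded.
    void withPush(const std::function<void()>& f) {
        stack.push_back(stack.back());
        f();
        stack.pop_back();
    }

    std::size_t depth() const { return stack.size(); }

private:
    std::vector<mat4> stack;
};
```

This is why the render snippet can call mv.identity() and mv.translate() freely: the surrounding modelview state is restored as soon as the lambda returns.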
04-14-2016 08:49 PM
04-15-2016 02:27 PM
jherico said:
Most of this is wrapper code. The main point is that you render into the scene compensating for the difference in pose between the time of capture and the time of render; that's the line above where I multiply the inverse eye pose by the webcam pose.
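The delta computed on that line is the relative rotation between the two poses: webcamDelta = inverse(eyePose) * webcamPose, so composing the current eye pose with the delta reproduces the pose at capture time. A self-contained sketch of that math (a minimal unit-quaternion type standing in for glm::quat, with a yaw helper invented for illustration):

```cpp
#include <cmath>

// Minimal unit quaternion (stand-in for glm::quat) to show the pose-delta
// math from the snippet above: delta = inverse(eyePose) * webcamPose.
struct Quat {
    float w, x, y, z;
};

// Hamilton product of two quaternions.
Quat multiply(const Quat& a, const Quat& b) {
    return {
        a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
        a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
        a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
        a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w,
    };
}

// For a unit quaternion the inverse is simply the conjugate.
Quat inverse(const Quat& q) { return {q.w, -q.x, -q.y, -q.z}; }

// Rotation about the Y axis, the way head yaw would be (helper for testing).
Quat fromYaw(float radians) {
    return {std::cos(radians / 2), 0, std::sin(radians / 2), 0};
}

// The rotation that happened between the capture-time pose and the
// render-time pose: eyePose * poseDelta(...) == webcamPose.
Quat poseDelta(const Quat& eyePose, const Quat& webcamPose) {
    return multiply(inverse(eyePose), webcamPose);
}
```

Pre-multiplying the modelview by this delta rotates the camera quad by exactly the head motion that occurred since the frame was captured, which is what keeps the image visually pinned in place.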
04-18-2016 03:52 AM
04-18-2016 05:35 AM
CaptureData captured;
float captureTime =
    ovr_GetTimeInSeconds() - CAMERA_LATENCY;
ovrTrackingState tracking =
    ovrHmd_GetTrackingState(hmd, captureTime);
captured.pose = tracking.HeadPose.ThePose;
CaptureData captured;
float captureTime =
    ovr_GetTimeInSeconds() - CAMERA_LATENCY;
ovrTrackingState tracking =
    ovrHmd_GetTrackingState(hmd, captureTime);
captured.pose = tracking.LeveledCameraPose;

The change to tracking.LeveledCameraPose is in the captureLoop(). Time to implement 4-6 cameras to display a full circle!
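The timing idea in these snippets is that the webcam image is already CAMERA_LATENCY seconds old when it arrives, so the pose stored with a frame should be the tracking state at (now - CAMERA_LATENCY), not at the moment the frame was dequeued. A rough sketch of that pairing, with a stand-in clock and a fake pose query in place of the ovr_GetTimeInSeconds()/ovrHmd_GetTrackingState SDK calls so the example is self-contained:

```cpp
#include <utility>

// Assumed camera pipeline latency in seconds; this would be tuned per camera.
constexpr double CAMERA_LATENCY = 0.180;

struct Pose { double yaw; };  // stand-in for ovrPosef

// Hypothetical pose history: pretend the head yaws at 1 rad/s, so the pose
// at time t is simply yaw == t. The real code asks the tracker instead.
Pose queryTrackingAt(double seconds) { return Pose{seconds}; }

// For a frame grabbed at `nowSeconds`, return the back-dated timestamp and
// the pose the head actually had when the camera sensor saw the scene.
std::pair<double, Pose> poseForCapturedFrame(double nowSeconds) {
    double captureTime = nowSeconds - CAMERA_LATENCY;
    return {captureTime, queryTrackingAt(captureTime)};
}
```

Because the Oculus SDK accepts past (and future) timestamps in its tracking-state query, the capture loop can do this back-dating in a single call, exactly as the snippet above does.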