are not predicted correctly during high-speed (10 meters/sec) hand controller motions. The reported positions lag behind the real-world positions by 10 centimeters or more, corresponding to at least 10 milliseconds of latency. The rendering is very simple and no frames are being dropped. I would like to know whether the poses are being predicted for the next frame display time, and to see exactly which Oculus SDK calls are being made (e.g. ovr_GetPredictedDisplayTime() and ovr_GetTrackingState()). But those calls are made inside the Unity Oculus plugin OVRPlugin.dll, in a routine called ovrp_GetNodePose(), and I have not been able to find source code for that library.
Further testing shows that with Unity 2017.1 and Oculus Utilities 1.18.1, the Touch controller velocity is reported as zero whenever the controller exceeds 10 meters/sec; the correct velocity is reported at 9.9 meters/sec. I am getting the velocity from OVRInput.GetLocalControllerVelocity(). It is unclear whether OVRPlugin.dll is resetting high speeds to zero or whether the native Oculus PC SDK is doing it. I have not seen any report of this limitation on the web.
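For anyone who wants to reproduce this, the test is essentially the following (a minimal sketch; the class name is mine):

    using UnityEngine;

    // Minimal repro sketch: log the right Touch controller's linear speed
    // every frame. In my setup (Unity 2017.1, Oculus Utilities 1.18.1),
    // swinging the controller past 10 m/s makes this snap to zero.
    public class ControllerSpeedLogger : MonoBehaviour
    {
        void Update()
        {
            Vector3 v = OVRInput.GetLocalControllerVelocity(OVRInput.Controller.RTouch);
            Debug.Log(string.Format("RTouch speed: {0:F2} m/s", v.magnitude));
        }
    }

With this running, the logged speed climbs normally up to 9.9 m/s and then reads zero during the fastest part of the swing.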
Thanks for this info. The OVRCameraRig option "Use Fixed Update for Tracking" is disabled. I obtain the hand controller poses in the Update() routine; I am not using FixedUpdate() because I am not using Unity PhysX, rigidbodies, or colliders.
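For reference, this is roughly how I read the poses each frame (a sketch; the class and field names are mine):

    using UnityEngine;

    // Sketch of how I read the hand poses each rendered frame. The OVRInput
    // poses are in tracking space, so I transform them into world space
    // through the OVRCameraRig's trackingSpace transform.
    public class HandPoseReader : MonoBehaviour
    {
        public Transform trackingSpace; // assign OVRCameraRig.trackingSpace
        public Transform handModel;     // object rendered at the controller pose

        void Update()
        {
            Vector3 p = OVRInput.GetLocalControllerPosition(OVRInput.Controller.RTouch);
            Quaternion q = OVRInput.GetLocalControllerRotation(OVRInput.Controller.RTouch);
            handModel.position = trackingSpace.TransformPoint(p);
            handModel.rotation = trackingSpace.rotation * q;
        }
    }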
It is surprising that poses predicted for the next frame draw are not provided. Something as basic as displaying an object held in the hand will not be rendered in the correct position, instead suffering approximately one frame (11 milliseconds at 90 Hz) of latency. When trying to hit a high-speed target, such as a ping-pong ball with a paddle, 10 milliseconds is a big error: at 10 meters/sec it amounts to a 10-centimeter position error, and the radius of a ping-pong paddle face is less than 10 centimeters.
I have been programming VR headsets since the Oculus DK1 came out about 3 years ago, using the native Oculus SDK and also the SteamVR SDK for Vive headsets, and the ability to specify the time for which poses are predicted is really essential in a VR API. Adding such a capability to the Unity Oculus VR API would be a good idea.
If you have any work-around I'd be happy to hear it. I can predict the poses in my own code using the linear and angular velocities and accelerations, but I think that opens a can of worms: I expect the native Oculus SDK pose prediction is more sophisticated, perhaps using all available inertial measurement unit data. I saw an OVRPlugin method UpdateNodePhysicsPoses() that takes a prediction time. It calls an Update2() method that takes a Step.Physics argument, I guess to set the prediction time for FixedUpdate() tracking poses. Could I call Update2() with Step.Render to set the prediction time for poses obtained in MonoBehaviour.Update()? Even then I would still have the problem that OVRPlugin does not appear to expose the Oculus SDK ovr_GetPredictedDisplayTime() method that I would need to determine what prediction time to use.
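For concreteness, the kind of extrapolation I have in mind is sketched below (names are mine; it assumes GetLocalControllerAngularVelocity() returns an axis-angle rate in radians/sec as a Vector3, and dt would be a guess at the remaining time to display, roughly one 90 Hz frame, since ovr_GetPredictedDisplayTime() is not exposed):

    using UnityEngine;

    // Work-around sketch: extrapolate the controller pose forward by dt
    // seconds using the SDK-reported velocity and acceleration.
    public static class PosePredictor
    {
        public static void Predict(OVRInput.Controller c, float dt,
                                   out Vector3 position, out Quaternion rotation)
        {
            Vector3 p = OVRInput.GetLocalControllerPosition(c);
            Vector3 v = OVRInput.GetLocalControllerVelocity(c);
            Vector3 a = OVRInput.GetLocalControllerAcceleration(c);
            Quaternion q = OVRInput.GetLocalControllerRotation(c);
            Vector3 w = OVRInput.GetLocalControllerAngularVelocity(c);

            // Constant-acceleration extrapolation for position.
            position = p + v * dt + 0.5f * dt * dt * a;

            // First-order extrapolation for rotation, about the angular
            // velocity axis (w assumed to be in radians/sec).
            float angleDeg = w.magnitude * dt * Mathf.Rad2Deg;
            rotation = angleDeg > 1e-4f
                ? Quaternion.AngleAxis(angleDeg, w.normalized) * q
                : q;
        }
    }

But this is exactly the hand-rolled prediction I'd rather avoid, since it can only be as good as the already-sampled velocity and acceleration values.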
I really appreciate and am surprised by your quick response to my original question! Thanks!