Forum Discussion
Teddy0k
12 years ago · Explorer
Idea - How to get fast head tracking with any frame rate
Here's an idea for how to get low-latency head tracking at almost any variable frame rate.
Render the scene from the player's point of view with no distortion, but to a larger render target with a wider FOV. The increased FOV should be large enough to account for how far a user might turn their head in one frame (I figure ~8 degrees?).

Then, at a faster update rate (60 or 120 times a second?), sample how much the HMD orientation has changed since the last full render and calculate how far you'd need to pan and rotate the image.

You can then sample the appropriate region of the oversized render, run the distortion effect on it, and send the result to the Rift's screen. This would eliminate any variability in the head-tracking update rate and would not require games to run at a stable 60 fps to deliver a smooth experience. Note: the player's movement and the simulation of the world would still update at the normal frame rate.
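For concreteness, here's a rough sketch of that fast-path update (all names hypothetical, quaternion math done with numpy; a real version would live in the driver or distortion shader). It takes the orientation the frame was rendered at and the orientation just before scan-out, extracts the rotation delta, and converts the small angles into a pixel shift of the sampling window:

```python
import numpy as np

def quat_conj(q):
    # Conjugate of a unit quaternion [w, x, y, z] is its inverse.
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_mul(a, b):
    # Hamilton product a * b, quaternions stored as [w, x, y, z].
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    ])

def head_delta_angles(q_render, q_now):
    """Head rotation since the wide-FOV frame was rendered, as Euler angles
    in radians. Which angle maps to yaw/pitch/roll depends on the tracker's
    axis convention; this uses the standard aerospace ZYX extraction."""
    dq = quat_mul(quat_conj(q_render), q_now)
    w, x, y, z = dq / np.linalg.norm(dq)
    rot_x = np.arctan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    rot_y = np.arcsin(np.clip(2 * (w * y - z * x), -1.0, 1.0))
    rot_z = np.arctan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return rot_x, rot_y, rot_z

def pan_pixels(angle_rad, fov_rad, size_px):
    # Small-angle approximation: a head turn of angle_rad, within a render
    # target spanning fov_rad, shifts the sampling window this many pixels.
    return angle_rad / fov_rad * size_px
```

The two pan angles become a shift of the sampling window (which must stay inside the ~8-degree safety border above), and the in-plane roll component becomes a rotation of the window, all applied just before the distortion pass.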
Ideally this whole process could run in hardware on the Oculus Rift itself. Doing so would require hardware that can apply the distortion and accept a render target larger than the screen itself, tagged with the orientation it was rendered at.
Alternatively, doing this on the GPU is quite tricky: running parallel updates alongside the normal scene render is not possible on any GPU I'm aware of; it would require a separate render pipeline. Another application would be to apply this technique only at the distortion step of the current frame, which would reduce head-tracking latency by a little less than one frame (~16-30 ms).
Is my thinking correct here? Anyone see any reasons why this couldn't work?
22 Replies
- raidho36 (Explorer)
The HMD doesn't know whether a new frame has arrived, so you'd have to call a corresponding function to signal the HMD to reset its warping to zero.
Anyway, this shouldn't be used to render at lower framerates, because motion jerkiness is exactly what causes motion sickness during FPS drops. Just hack your sources to update objects' positions only every 100 milliseconds while keeping head tracking running steadily, and you will see for yourself. Therefore, this should be used to compensate for sudden framerate drops and little freezes so that they won't be as devastating; you should still aim for a smooth 60 fps experience.
- geekmaster (Protege)
"raidho36" wrote:
... Anyway, this shouldn't be used to render at lower framerates, because motion jerkiness is exactly what causes motion sickness during FPS drops.
I disagree. It is head tracker jerkiness and latency that causes motion sickness. Having objects in your environment periodically "freeze" or move in a jerky fashion may look odd, but will not cause motion sickness as long as head tracking is keeping the VR environment anchored in your vision where your brain thinks it should be.
As long as the things moving "unnaturally" look mechanical, we should not have problems. But ultra-realistic humans moving in a strange way will trigger our "alien detectors", causing us to fall into the "uncanny valley" with rampant danger/fear/discomfort emotional responses. Perhaps adrenaline, but not queasiness, I think...
"raidho36" wrote:
... Therefore, this should be used to compensate for sudden framerate drops and little freezes so that they won't be as devastating; you should still aim for a smooth 60 fps experience.
Agreed... but not just during SUDDEN framerate drops. I think it can compensate for continuous framerate "drops", such as when running a lot of detail on hardware that is not up to the task. In my mind, I picture getting more immersion by "faking" a higher framerate. Regarding updating the VR environment, we are accustomed to a 24 FPS experience when watching motion pictures, and depending on content you can go down to about 8 to 15 FPS (depending on brightness, contrast, and other factors) and still perceive smooth motion.
"tomf" wrote:
... The reprojection is ... a full rotation and projection onto a plane. But it works well for orientation, and will probably be included in the next SDK.
Regarding what tomf said here, it sounds like projecting the framebuffer onto a flat screen in the head-tracked VR environment, much like watching a movie in one of the Rift movie player apps. Besides orientation changes, it should work for limited position changes too, as long as there are no nearby objects to cause annoying parallax errors.
However, I have played with simple morphing between 4 FPS video frames to get 30 FPS, and although you can tell the tweened frames are morphed, it is still MUCH better than a low framerate. With head tracking it would be even better. Add in motion-vector interpolation and it can be simply amazing. Using motion vectors, I have created 10x slow-mo by inserting 9 interpolated frames between each input frame, and it looks amazingly realistic.
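Here is a minimal sketch of that kind of motion-vector tween, assuming a dense per-pixel motion field (e.g. upsampled from MPEG-style block vectors); the function and data layout are hypothetical:

```python
import numpy as np

def mv_tween(frame_a, frame_b, motion, t):
    """Interpolate a frame at fraction t between frame_a and frame_b.

    frame_a, frame_b: (H, W) float images.
    motion: (H, W, 2) per-pixel (dx, dy) displacement from frame_a toward
            frame_b, e.g. upsampled from MPEG-style block motion vectors.
    Treats the motion field as locally constant: each output pixel pulls
    from frame_a at (p - t*mv) and from frame_b at (p + (1-t)*mv), then
    cross-fades. Nearest-neighbour sampling keeps the sketch short.
    """
    h, w = frame_a.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dx, dy = motion[..., 0], motion[..., 1]
    ax = np.clip(np.rint(xs - t * dx), 0, w - 1).astype(int)
    ay = np.clip(np.rint(ys - t * dy), 0, h - 1).astype(int)
    bx = np.clip(np.rint(xs + (1 - t) * dx), 0, w - 1).astype(int)
    by = np.clip(np.rint(ys + (1 - t) * dy), 0, h - 1).astype(int)
    return (1 - t) * frame_a[ay, ax] + t * frame_b[by, bx]

# 10x slow-mo as described above: insert 9 tweens between each frame pair.
# tweens = [mv_tween(a, b, motion, k / 10.0) for k in range(1, 10)]
```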
Depending on power requirements, we could do this at multiple adjustable levels, from simple PTZ all the way to motion vector interpolation (using the same motion vector extraction as used in MPEG encoding).
So for the OculusVR SDK, sure, just do it to replace dropped frames. But for low-power devices, do it all the time with intentionally low rendering framerates (to allow for additional rendered detail).
- raidho36 (Explorer)
I would argue about the first two points, and I even wrote something up, but then I realized you're just pulling these things out of nothing, so no, I think I won't argue, not with this.
As I said, implement a low world framerate against a 60 fps Rift framerate and see for yourself.
The tweening is a reasonable solution, and if we're going above 200 fps it's the only thing we've got, but it's ultimately only a crude approximation. One shouldn't rely on it to resolve low-framerate issues.
- geekmaster (Protege)
"raidho36" wrote:
I would argue about the first two points, and I even wrote something up, but then I realized you're just pulling these things out of nothing, so no, I think I won't argue, not with this.
As I said, implement a low world framerate against a 60 fps Rift framerate and see for yourself.
The tweening is a reasonable solution, and if we're going above 200 fps it's the only thing we've got, but it's ultimately only a crude approximation. One shouldn't rely on it to resolve low-framerate issues.
Man, your respectability just ratcheted down a notch in my mind after reading that...
Actually, I am not "just pulling these things out of nothing". I have been playing with animation and 3D for more than 50 years. My first animation used a pair of hand-soldered R-2R ladder network DACs driving an oscilloscope, back in the early 70's. In 1978 I drove HeNe lasers with galvos harvested from laser disc players, using my Apple II to project animated Gumby and Pokey characters on the wall. And even earlier, back in the mid 60's, I drew 3D anaglyph images using red and blue colored pencils and made "flip card" animations. I also have a 16mm film camera with "antique" stereoscopic lenses. I have experience with both tweening hand-drawn animation and video-interpolation algorithms, and like I said, the quality can be amazing. That is from real first-hand experience, not just "from nothing" as you seem to think.
Do you have evidence that I am just pulling my information out of my ass, as you seem to be implying, or are you just pulling that unsupported malevolent claim out of yours?
As I said, movie films update at 24 FPS, and they are still immersive. However, each frame is typically displayed FOUR TIMES before advancing, to prevent flicker. Tweening each projected "intermediate" frame would be even better. Of course the refresh rate (96 Hz at theaters) cannot be low. But real-life moviegoers know that 24 FPS is adequate for immersion (especially if you sit front-row center like I do).
Tweening need not be crude. Using motion vector prediction helps a lot. I know. I did that ten years ago...
Why do some people choose to automatically discount what I say, just because my experience in these areas may be (significantly) greater than theirs? You may know a lot, but I am sure that our experiments and experience do not overlap completely, and we may each have our own niches in which our claims are valid. Differences can be learned from, and should not be discarded as lightly as you seem inclined to. A shamefully lost learning opportunity, really.
- raidho36 (Explorer)
"geekmaster" wrote:
I have been playing with animation and 3D for more than 50 years.
I've been playing with animation and 3D for, I dunno, 2 weeks total? And I came to the same conclusions about the tweening and everything; I even came up with the same method (a displacement vector matrix), but I instantly discarded it as a silver-bullet solution due to its fundamental flaws, which for some reason you did not. And yeah, you have some great time span, but what about animation and 3D in VR? It's very different from a 2D screen, stereoscopic or not. Conventional rules don't apply here. That's what I actually meant. And from this perspective, your claims have little basis. My counter-claims are supported by actual experience. I didn't dig too far into this and didn't build test apps, though. No offence, but your "robotish style" argument is really silly - and pulled right out of your ass, too.
"geekmaster" wrote:
movie films update at 24 FPS
Here we go again. This has already been debated so much that I won't roll out the arguments all over again just for you specifically; you can just web search for "30 fps in video games". I'll just mention that modern digital television cameras aim at 60 fps and higher.
"geekmaster" wrote:
A shamefully lost learning opportunity, really.
I appreciate the research you've done and the knowledge you've gained, but it doesn't apply to VR. You'd have to learn it all over again.
- geekmaster (Protege)
"raidho36" wrote:
"geekmaster" wrote:
I have been playing with animation and 3D for more than 50 years.
I've been playing with animation and 3D for, I dunno, 2 weeks total? And I came to the same conclusions about the tweening and everything; I even came up with the same method (a displacement vector matrix), but I instantly discarded it as a silver-bullet solution due to its fundamental flaws, which for some reason you did not. And yeah, you have some great time span, but what about animation and 3D in VR? It's very different from a 2D screen, stereoscopic or not. Conventional rules don't apply here. That's what I actually meant. And from this perspective, your claims have little basis. My counter-claims are supported by actual experience. I didn't dig too far into this and didn't build test apps, though. No offence, but your "robotish style" argument is really silly - and pulled right out of your ass, too.
"geekmaster" wrote:
movie films update at 24 FPS
Here we go again. This has already been debated so much that I won't roll out the arguments all over again just for you specifically; you can just web search for "30 fps in video games". I'll just mention that modern digital television cameras aim at 60 fps and higher.
"geekmaster" wrote:
A shamefully lost learning opportunity, really.
I appreciate the research you've done and the knowledge you've gained, but it doesn't apply to VR. You'd have to learn it all over again.
Another few notches down the respect scale on that one...
A rather strong, derogatory, and disrespectful opinion from an "expert" with a whole 2 weeks of experience... :(
I have a large collection of VR books from the 80's and 90's. I also played with "real" VR back then in expensive VR arcades. It ran at a MUCH lower framerate than a Raspberry Pi can manage, and yet was fully immersive. Of course, it had a much smaller FoV, but what it had going for it was position tracking and no externally controlled movements (such as mouse or keyboard).
Position tracking is FAR more important than framerate, but head-tracked video framerate is much more important than environmental animation.
This is all very subjective and based on personal perception, which is based on experience. Google findings are not all that reliable when it comes to subjective experience. Your limited testing was probably biased with no reliable control group, and certainly could not have studied a large sample group for their common (and differential) subjective experience.
Your disrespectful attitude will prevent you from learning much beyond your tiny realm of experience, despite your grandiose self-esteem.
- raidho36 (Explorer)
Actually, I just can't recall how long I've been doing all this 3D research or to what extent, so I just picked a random number - because it's irrelevant. See, regardless of experience level, we came up with the same methods and conclusions; therefore 1) there's nothing special about them, and 2) they're obvious surface-level solutions. You came up with a surface-level solution - not to be offensive, because so did I. It needs more proper research, and it's a long way from being your excuse for low framerate.
Yes, I already know that continuously updating the rendering based on head-tracking readings is crucial. But so is animation smoothness. I don't think you're getting what I'm telling you; just see the following video, and then imagine playing this game at 24 fps.
This is an extreme case, but it serves as a very vivid example - the same rule about animation smoothness through high framerate applies to every game. This is why the Rift itself aims at as high a framerate as possible, and urges everyone to aim for such framerates.
- antigravity (Explorer)
"geekmaster" wrote:
As I said, movie films update at 24 FPS, and they are still immersive. However, each frame is typically displayed FOUR TIMES before advancing, to prevent flicker. Tweening each projected "intermediate" frame would be even better. Of course the refresh rate (96 Hz at theaters) cannot be low. But real-life moviegoers know that 24 FPS is adequate for immersion (especially if you sit front-row center like I do).
The thing is, if you pause a frame of panning 24 fps film, it's beautifully and naturally motion-blurred to hell. That's what makes it visually flow from frame to frame.
If you pause a Pixar movie, you'll see the same effect as part of their renders.
The problem is, it's computationally extremely expensive to do high-quality motion blur with a decent number of samples. It's always going to be cheaper to do 60 fps with no motion blur than 24 fps with true high-quality motion blur - by far! (The in-game motion-blur crap in current titles doesn't cut it.)
P.S. Love your posts, geekmaster... always intriguing!
- geekmaster (Protege)
"antigravity" wrote:
... The problem is, it's computationally extremely expensive to do high-quality motion blur with a decent number of samples. It's always going to be cheaper to do 60 fps with no motion blur than 24 fps with true high-quality motion blur - by far! (The in-game motion-blur crap in current titles doesn't cut it.)
P.S. Love your posts, geekmaster... always intriguing!
Morphing (especially with motion vectors) is far superior to motion blur.
And regarding those tunnel effects in the previously posted video, that is where the zoom (Z in PTZ) comes in. Panning and zooming can do wonders for interpolating the missing frames...
In fact, I was browsing the recently published Farbrausch source code, and I discovered that "PTZ tweening" (including rotation) was critical to the amazing demos running on very low-powered hardware back in the day. Of course I did not "invent" it, but I have been trying hard to get it into public view here and there so people can begin using it to improve immersion on both modern high-power and older low-power computing equipment.
And people ARE starting to experiment with these ideas (as mentioned by Brendan Iribe and tomf). They are old ideas, but they have been neglected on modern hardware (until now). Perhaps my suggestions helped raise awareness. No matter, as long as progress is made in the right directions.
EDIT: Farbrausch demo source code:
https://github.com/blog/1103-ten-years-of-farbrausch-productions-on-github
https://github.com/farbrausch/fr_public
Some of this "ancient" code contains ideas and tricks VERY useful for modern VR, IMHO...
- geekmaster (Protege)
"raidho36" wrote:
Actually, I just can't recall how long I've been doing all this 3D research or to what extent, so I just picked a random number - because it's irrelevant. See, regardless of experience level, we came up with the same methods and conclusions; therefore 1) there's nothing special about them, and 2) they're obvious surface-level solutions. You came up with a surface-level solution - not to be offensive, because so did I. It needs more proper research, and it's a long way from being your excuse for low framerate.
Yes, I already know that continuously updating the rendering based on head-tracking readings is crucial. But so is animation smoothness. I don't think you're getting what I'm telling you; just see the following video, and then imagine playing this game at 24 fps.
This is an extreme case, but it serves as a very vivid example - the same rule about animation smoothness through high framerate applies to every game. This is why the Rift itself aims at as high a framerate as possible, and urges everyone to aim for such framerates.
The "high speed" tunnel video above demonstrates that your animation speed must be compatible with your available update rate, to avoid temporal aliasing (i.e. "wagonwheel effect").
What you need depends heavily on what you are trying to do. Filming a bullet impact may require 1 MILLION frames per second.
However, if your CONTENT is compatible with a lower update rate, you can still achieve good immersion with such a low update rate, by morphing the framebuffer at high speed with head tracker data, using a method such as "PTZ Tweening". Of course, compensating for head roll requires that the zoom (Z in PTZ) actually be implemented as "rotazoom".
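Here is what such a "PTZ tween" kernel might look like, sketched on the CPU with hypothetical names (a real version would run in the distortion shader with bilinear filtering):

```python
import numpy as np

def ptz_rotazoom(frame, pan_x, pan_y, roll_rad, zoom):
    """Pan the sampling window, rotate it in-plane, and zoom ("rotazoom").

    frame: (H, W) array (the oversized wide-FOV render).
    pan_x, pan_y: window shift in pixels, e.g. from the head-tracker delta.
    roll_rad: in-plane rotation compensating head roll; zoom > 1 magnifies.
    Backward warp with nearest-neighbour sampling, for brevity.
    """
    h, w = frame.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # Map each output pixel back to its source pixel in the oversized render.
    x = (xs - cx) / zoom
    y = (ys - cy) / zoom
    c, s = np.cos(-roll_rad), np.sin(-roll_rad)
    src_x = np.clip(np.rint(c * x - s * y + cx + pan_x), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(s * x + c * y + cy + pan_y), 0, h - 1).astype(int)
    return frame[src_y, src_x]
```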
My interest is in lower-speed VR exploration and in-game construction. I suspect that what I want for my own personal VR use will be a popular activity for many people. Racing games may not fit well into this category, and the "PTZ Tweening" methods I suggested may be only of limited use in such games (especially if there is too much temporal aliasing to extract intra-frame motion vectors).
For example, here is a "demo scene" video featuring rotazoom (at 2:26 in the video) back in 1996, running on an 8-bit ZX Spectrum computer.
But the entire video is worth watching, because it demonstrates other things useful for VR, running on a computer drastically slower than a Raspberry Pi...
The point is that animating something faster than the framerate creates aliasing, which can confuse perception and make things look like they are moving in the wrong direction (i.e. the wagon-wheel effect mentioned above). Added motion blur can help compensate for the temporal aliasing visible in the "extreme case" tunnel video in the quoted post above. Motion blur could have eliminated some of the annoying artifacts from bright tunnel details that appear in only a single frame. Or, for less processing overhead, the tunnel details could have been artistically adjusted to be more compatible with a lower framerate (lower brightness and contrast on objects that appear in only a single frame). The motion blur could even be baked into the tunnel wall textures, with the amount varying based on speed. Then you could use textures appropriate for your speed (similar to MIP mapping, but speed-based instead of distance-based).
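A sketch of that speed-based texture idea (everything here is hypothetical: a stack of progressively motion-blurred levels baked offline, blended like MIP levels but indexed by speed instead of distance):

```python
import numpy as np

def sample_speed_blur(levels, speed, max_speed):
    """Blend between adjacent pre-blurred texture levels by object speed.

    levels: list of (H, W) arrays; levels[0] is sharp, levels[-1] the most
            heavily blurred (baked offline along the expected motion
            direction). Like MIP mapping, but chosen by speed, not distance.
    """
    f = np.clip(speed / max_speed, 0.0, 1.0) * (len(levels) - 1)
    lo = int(f)
    hi = min(lo + 1, len(levels) - 1)
    t = f - lo
    return (1 - t) * levels[lo] + t * levels[hi]
```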
It is all a matter of your game design taking hardware limitations into account, which IS important. All I was saying is that you CAN use a lower environmental update rate, trading render rate for more detail, and compensate by morphing the frame buffer to match low-latency head-tracker data, maximizing immersion and minimizing "high latency" queasiness.