Forum Discussion

🚨 This forum is archived and read-only. To submit a forum post, please visit our new Developer Forum. 🚨
candiedbug
Honored Guest
13 years ago

Hardware frame interpolation.

I have a question for the hardware specialists here in the forums, would adding a hardware frame interpolator to smooth out frame-rate instability introduce noticeable lag?

8 Replies

  • Probably. In order for the interpolator to work, it would have to have both the starting frame and the ending frame to interpolate between. That means that instead of displaying frame B right after frame A, you delay it so you can show the interpolated frame I first, and then wait another frame to show B.

    Frame interpolation works great if you've got a guaranteed stream of frames arriving at a fixed rate and can therefore buffer "future frames" to interpolate toward. But with the Rift, future frames depend on future head positions that can't be predicted outside a very narrow window, so all you end up doing is making existing timing problems even harder. Any smoothed motion you get would probably only make it more jarring when you miss your window to render a new frame, because now instead of being one frame late, you're at least two.
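To put rough numbers on the buffering argument above, here is a minimal back-of-the-envelope sketch. The numbers are illustrative assumptions, not measurements: it just shows that holding back one frame on a 60 Hz panel adds a full frame time (~16.7 ms) of head-to-photon lag.

```python
# Latency cost of conventional frame interpolation: to interpolate
# between frames A and B, frame B must be buffered one full frame
# before it can be shown. Illustrative numbers only.

def photon_latency_ms(frame_time_ms, buffered_frames):
    """Extra time from 'frame rendered' to 'frame displayed' when we
    hold back `buffered_frames` full frames before scanout."""
    return frame_time_ms * buffered_frames

FRAME_MS = 1000 / 60  # 60 Hz panel

direct = photon_latency_ms(FRAME_MS, 0)        # show frame B immediately
interpolated = photon_latency_ms(FRAME_MS, 1)  # hold B, show I first

print(f"direct display:     +{direct:.1f} ms")
print(f"with interpolation: +{interpolated:.1f} ms extra head-to-photon lag")
```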
  • spyro
    Expert Protege
    Not so fast, the idea is not bad at all. In fact, it can reduce the (perceived) latency. Dmitry Andreev (LucasArts) showed an impressive working concept of a realtime framerate upscaler three years ago at SIGGRAPH 2010. Full presentation:
    http://and.intercon.ru/rtfrucvg_html_slides/

    The trick behind it is that the in-between frame is calculated from the currently displayed frame and the next frame while the latter is still in production:



    So you actually see your actions on screen earlier than without interpolation! When interpolating from 60 to 120 fps, you gain about 8 ms!

    Short prototype demo (Xbox 360, 30 to 60 fps): http://and.intercon.ru/videos/rtfrucvg_part2_live_x360_h264.avi

    spyro
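The per-object idea discussed in this thread can be sketched in a few lines. This is not Andreev's actual implementation (his runs on the GPU with per-pixel velocity buffers); it is a toy illustration of the core move: instead of waiting for frame B, extrapolate frame A's objects forward along known motion vectors by half a frame.

```python
# Toy sketch of motion-vector-based frame synthesis: extrapolate the
# current frame's object positions forward by dt seconds using their
# known per-object motion vectors. Objects are plain 2D points here.

def synthesize_midframe(positions, velocities, dt):
    """Extrapolate each (x, y) position dt seconds along its (vx, vy)."""
    return [(x + vx * dt, y + vy * dt)
            for (x, y), (vx, vy) in zip(positions, velocities)]

# One object moving 120 px/s to the right, game running at 60 fps:
positions = [(100.0, 50.0)]
velocities = [(120.0, 0.0)]

# Synthesize the in-between frame half a 60 fps frame (1/120 s) ahead.
mid = synthesize_midframe(positions, velocities, dt=1 / 120)
print(mid)  # object shown ~1 px further right, half a frame early
```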
  • Interpolating while the frame is still being built! Wow! I didn't think of that! Thanks for reminding me of this article; I had seen it a while back, but I assumed it still had some sort of lag. Now I am wondering if interpolation on a per-object basis is the way to go. As they say in the article, all you need to do is feed the motion vector to the interpolator, since that can be reasonably predicted for small slices of time. Could this be used to compensate for motion blurring as well? Perhaps pre-light a pixel taking panel/eye image retention into consideration?
  • spyro
    Expert Protege
    IMHO this could really be one part of a solution to the motion blur problem.

    What's really important about motion blur with HMDs is that it happens when you fixate on a point in your virtual world while moving your head at the same time. That's exactly what you do in reality: you don't stare straight along your view center and drag your gaze along with your head. So why is that a problem? It's because the image stays in the same place in your visual field for a full 16 ms, and that "smears" the picture across your retina against the direction of your head movement.

    So we basically need:
    - A low persistence display with 120 Hz
    - A strobing backlight
    - A system which calculates 120 fps at Full-HD at least...

    The last part is the important one. Why do we need 120 fps to eliminate motion blur? Well, it's clear that 120 fps shaves off 8 ms of latency, but that's not the point here. The idea behind a strobing backlight is to show the picture for only, say, 1-2 ms (but much brighter than before) and keep the display pitch black the rest of the time. Even when you move your head as before, the image is simply not visible long enough to trigger the same smearing effect on your retina.

    The only bad thing about that is awful 60 Hz flickering across your whole FOV, which would not be acceptable. So why not simply double the backlight frequency to 120 Hz and show the same picture twice? In that case the flickering would be eliminated, but the picture would now be visible at two different locations while you move your head => very heavy ghosting. That's the reason we need 120 fps to eliminate motion blur: every picture that hits your retina should be visible only very briefly and should differ from the one before.
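The smearing argument can be made quantitative with a simple rule of thumb: smear on the display, in pixels, is roughly angular head velocity times persistence time times pixels per degree. The numbers below (head speed, pixel density) are illustrative assumptions, not Rift specs.

```python
# Rough sketch of why persistence matters: during a head turn the eye
# counter-rotates to hold a world point, so a pixel lit for the whole
# frame smears across the retina. Smear in display pixels is roughly
# angular velocity * persistence * pixels-per-degree.

def smear_px(head_deg_per_s, persistence_ms, px_per_deg):
    """Approximate retinal smear, expressed in display pixels."""
    return head_deg_per_s * (persistence_ms / 1000) * px_per_deg

HEAD = 120.0  # deg/s, a brisk head turn (assumed)
PPD = 10.0    # pixels per degree (assumed HMD pixel density)

full_persistence = smear_px(HEAD, 16.7, PPD)  # sample-and-hold 60 Hz
strobed = smear_px(HEAD, 1.5, PPD)            # 1-2 ms strobe

print(f"full persistence: {full_persistence:.1f} px of smear")
print(f"strobed:          {strobed:.1f} px of smear")
```

With these assumptions, a full-persistence 60 Hz frame smears over ~20 pixels while a 1.5 ms strobe smears under 2, which is the whole point of low persistence.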

    Note that we actually only need 60-70 fps for fluid motion, so half of the frames would in fact be calculated solely because of the motion blur problem. A modern PC under $1200 or so simply cannot deliver a stable 120 fps in modern games like Crysis 3 at Full HD+. At least not with the brute-force approach we use today, rendering the whole FOV at full quality (even the parts in your peripheral vision that you cannot focus on).

    We will need some clever algorithms (like John Carmack's "timewarp" approach) where we synthesize frames in 2D at a (much) higher framerate than the game runs internally: take the latest native image and warp it according to the current motion tracker orientation just before scanout. Of course, at the edge of the screen in the direction of movement there is an area that simply lacks the information of the next frame (that part of the world is just not visible yet). But since you can barely see those parts at the edge of your view, this isn't really a problem, and they can just be extrapolated from the inner pixels.
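A heavily simplified sketch of that warp idea: treat the image as a row of pixels and the warp as a pure horizontal shift driven by the yaw change since the frame was rendered, repeating the border pixel to fill the edge that lacks data. Real timewarp is a full reprojection on the GPU; this is only the 1D intuition.

```python
# Toy 'timewarp' sketch: just before scanout, shift the last rendered
# image according to the newest head yaw instead of rendering a new
# frame. The 'image' is a 1D pixel row; missing edge pixels are
# extrapolated by repeating the border, as described above.

def timewarp_row(row, rendered_yaw_deg, current_yaw_deg, px_per_deg):
    shift = round((current_yaw_deg - rendered_yaw_deg) * px_per_deg)
    if shift == 0:
        return list(row)
    if shift > 0:  # head turned right: view content moves left
        return row[shift:] + [row[-1]] * shift
    return [row[0]] * (-shift) + row[:shift]

row = [1, 2, 3, 4, 5]
# Head turned 0.2 degrees right since the frame was rendered:
print(timewarp_row(row, rendered_yaw_deg=0.0, current_yaw_deg=0.2,
                   px_per_deg=10))  # [3, 4, 5, 5, 5]
```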

    spyro

    PS: Maybe this article explains it better than I can (it's even written in proper English ;)): http://www.avsforum.com/t/1484182/why-we-need-1000fps-1000hz-this-century-valve-software-michael-abrash-comments
  • "spyro" wrote:


    The last part is the important one. Why do we need 120 fps to eliminate motion blur? Well, it's clear that 120 fps shaves off 8 ms of latency, but that's not the point here. The idea behind a strobing backlight is to show the picture for only, say, 1-2 ms (but much brighter than before) and keep the display pitch black the rest of the time. Even when you move your head as before, the image is simply not visible long enough to trigger the same smearing effect on your retina.


    PS: Maybe this article explains it better than I can (it's even written in proper English ;)): http://www.avsforum.com/t/1484182/why-we-need-1000fps-1000hz-this-century-valve-software-michael-abrash-comments


    What about using a variable-timing LED array backlight plus gaze detection, and only fully processing the area of the screen that lands on the fovea and its immediately adjacent region? I know the edges of the retina are very sensitive to movement, but while that sensitivity is high, the contextual information captured there is rather low, so one could get away with a lot more motion blur there than in the foveal zone.

    PS. I read that article by Mr Abrash, great stuff, if a bit disheartening. I wonder how far off 1 kHz displays are. Or maybe we can get around the issue with clever tricks. Come to think of it, with clever temporal interpolation a GPU would not have to maintain a framerate anywhere near 1 kHz and still look like 1 kHz to the eye. I mean, there is so much visual information that the brain discards anyway; the trick is finding out what and when to fudge detail.
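The gaze-contingent idea above amounts to computing a small full-quality rectangle around the tracked gaze point. A minimal sketch, with assumed numbers (the fovea covers only a couple of degrees; a generous 5-degree radius and 10 pixels per degree are used here for illustration):

```python
# Sketch of gaze-contingent processing: only the window around the
# eye-tracked gaze point needs full-quality processing; the periphery
# can tolerate far more blur. Numbers are illustrative assumptions.

def foveal_window(gaze_x, gaze_y, radius_deg, px_per_deg, width, height):
    """Return the clamped pixel rectangle needing full processing."""
    r = int(radius_deg * px_per_deg)
    x0, y0 = max(0, gaze_x - r), max(0, gaze_y - r)
    x1, y1 = min(width, gaze_x + r), min(height, gaze_y + r)
    return x0, y0, x1, y1

rect = foveal_window(gaze_x=960, gaze_y=540, radius_deg=5, px_per_deg=10,
                     width=1920, height=1080)
w, h = rect[2] - rect[0], rect[3] - rect[1]
share = 100 * w * h / (1920 * 1080)
print(rect, f"-> {share:.2f}% of pixels at full quality")
```

Even with this generous window, well under 1% of a Full HD frame needs full-rate processing, which is why the idea is so attractive despite the eye-tracker latency problem.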
  • Hi. This is a bit off-topic, but I didn't figure it was worth its own thread.

    Is there any value to having a "variable persistence" display? Where the persistence of each pixel could be varied independently of its neighbors? Would that buy us anything?

    It seems like an OLED display, with appropriately hot-rodded driver hardware, could do such a thing. Maybe. No backlight, just independently(?) addressable pixels that could be told to either flash briefly and brightly, or stay lit longer and more dimly. You'd also need some clever way for the GPU to specify this behavior. <waves hands>

    Assuming all that was possible, would it help? Is there a way to decide/guess what sort of persistence will look good (not smeared, not strobed) ahead of time, without knowing what the user's eyeballs are doing? (We would however know what the user's head was doing.) Perhaps objects that are moving in relation to the background should have more or less persistence? Or perhaps the motion of objects with respect to the user's head orientation should determine their persistence. Would trying to guess which object the user is actually tracking be worth the attempt? (Say, assume he's locked onto the scary monster he's fighting, or just give special treatment to objects in the user's foreground and/or directly "ahead" of his head.)

    Just running that up the flagpole to see if anybody salutes...
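One way to make the "variable persistence" speculation concrete is a per-pixel (or per-object) heuristic: the faster something moves relative to the head, the shorter (and brighter) its persistence. This is purely a sketch of the idea floated in the post above; the function, constants, and the brightness-compensation rule are all assumptions, not any shipping display's behavior.

```python
# Speculative 'variable persistence' heuristic: fast apparent motion
# gets short, bright (strobe-like) persistence; static content is
# held long and dim. Brightness is boosted so the light energy per
# frame, and thus perceived brightness, stays roughly constant.

def persistence_ms(relative_speed_deg_s, frame_ms=16.7, min_ms=1.0):
    """Faster apparent motion -> shorter persistence. The 120.0
    constant is an arbitrary tuning knob for this sketch."""
    if relative_speed_deg_s <= 0:
        return frame_ms
    return max(min_ms, min(frame_ms, 120.0 / relative_speed_deg_s))

def drive_brightness(base, p_ms, frame_ms=16.7):
    """Boost drive level so energy per frame stays constant."""
    return base * frame_ms / p_ms

for speed in (0, 10, 120):  # deg/s relative to the head
    p = persistence_ms(speed)
    print(f"{speed:3d} deg/s -> {p:5.2f} ms persistence, "
          f"{drive_brightness(1.0, p):.2f}x brightness")
```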
  • "Dave" wrote:
    Hi. This is a bit off-topic, but I didn't figure it was worth its own thread.

    Is there any value to having a "variable persistence" display? Where the persistence of each pixel could be varied independent of its neighbors? Would that buy us anything?

    It seems like an OLED display, with appropriately hot-rodded driver hardware, could do such a thing. Maybe. No backlight, just independently(?) addressable pixels that could be told to either flash briefly and brightly, or stay lit longer and more dimly. You'd also need some clever way for the GPU to specify this behavior. <waves hands>

    Assuming all that was possible, would it help? Is there a way to decide/guess what sort of persistence will look good (not smeared, not strobed) ahead of time, without knowing what the user's eyeballs are doing? (We would however know what the user's head was doing.) Perhaps objects that are moving in relation to the background should have more or less persistence? Or perhaps the motion of objects with respect to the user's head orientation should determine their persistence. Would trying to guess which object the user is actually tracking be worth the attempt? (Say, assume he's locked onto the scary monster he's fighting, or just give special treatment to objects in the user's foreground and/or directly "ahead" of his head.)

    Just running that up the flagpole to see if anybody salutes...


    Well, there are gaze detection systems available; the problem is they are somewhat high-latency, and that pretty much defeats the purpose. Ultimately, I do believe "variable persistence", as you call it, is the future (at least for the foreseeable time) until 1 kHz displays are available. Or maybe not. Aren't there 600 Hz TVs out already?