Increasing Perceived Resolution

B0RGS
Honored Guest
Hey all, long time lurker here...

You’ve likely noticed that perceived resolution seems to improve while you move your head around with your eyes fixed on a point in a VR scene. While constantly thrashing your head about as if you’re in a death metal band isn’t a practical means of maintaining this perceptual bump, I’ve been contemplating another way to take advantage of this phenomenon. Before reading on, bear in mind that I know almost nothing about rendering. For all I know this may be something that has already been tried within the context of games.

So… through a bit of experimentation I have found that by sampling a 4K image (frame) to generate four separate 2K images (subframes), and quickly cycling between them, you can improve the quality of the perceived frame. To test this, I grouped the pixels of a 4K image into 2×2 blocks (their 2K equivalents) in GIMP, then copied one of the subpixels (i.e. top left, top right, etc.) and pasted it over the other three quadrants. The quadrant the copied subpixel came from was kept consistent for every pixel in a given subframe. I then repeated this process to create subframes for each of the other quadrants.
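For anyone who'd rather see this than do it by hand in GIMP, here's a minimal sketch of the subframe extraction in Python with numpy (my own toy code, not anything from the post's actual workflow). Instead of replicating one subpixel across its 2×2 block, it just pulls each quadrant position out as a half-resolution image, which amounts to the same thing when shown on a half-resolution display:

```python
import numpy as np

def make_subframes(img_4k: np.ndarray) -> list[np.ndarray]:
    """Split a full-resolution frame (H, W[, C]) into four half-resolution
    subframes, one per 2x2-quadrant position: top-left, top-right,
    bottom-left, bottom-right."""
    offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]
    # img[dy::2, dx::2] keeps only the pixel at position (dy, dx)
    # within every 2x2 block.
    return [img_4k[dy::2, dx::2] for dy, dx in offsets]

# Toy example: a 4x4 single-channel "frame" with pixel values 0..15.
frame = np.arange(16).reshape(4, 4)
subframes = make_subframes(frame)
```

Cycling the four returned arrays in order is the equivalent of the animation described above.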

I then used a trial version of Animate CC (Adobe Flash Pro) to create animations using those subframes.

I have two animations that represent one perceived frame, both cycling between subframes at 60 total subframes per second (sfps). One is made up of all four subframes, and the other of only two. This yields 15 and 30 fps respectively for each individual subframe. As you would expect, the lower fps causes more apparent flicker and jitter. Though I feel that even at 60 sfps, the flicker and jitter of the two-subframe animation is approaching acceptable levels.
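Spelling out the rate arithmetic, in case it helps (the helper name is just mine):

```python
def subframe_fps(total_sfps: float, n_subframes: int) -> float:
    """At a fixed total subframe rate, each individual subframe only
    refreshes at total_sfps / n_subframes."""
    return total_sfps / n_subframes

four_sub = subframe_fps(60, 4)   # the four-subframe animation
two_sub = subframe_fps(60, 2)    # the two-subframe animation
```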

I’d love to test higher overall frame rates, but I think Flash players are capped at 60 fps, and I don’t have the technical know-how, or the software/hardware, to create or view higher-sfps examples.

To view the animations, open the flash files in your browser. I recommend that you put your browser in full-screen mode so the pixels line up correctly (if you have a 1080p or 4K display). If you right click the animation and deselect "play", you’ll be able to see one of the 2K subframes. Make note of the difference in aliasing. My browser seems to have some issues consistently maintaining 60 fps, so you may see some additional intermittent jitter.

I’m not certain how the subframes are being perceived by the mind. I’d guess that the subframes are cycling at a rate such that the brain perceives their average. If that is the case, then this technique amounts to an alternative to supersampling, with the advantage that it isn’t necessary to render at a resolution higher than the display’s. In other words, some of the computer’s processing is effectively offloaded onto your own brain.
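If the brain really does average the cycle, the math checks out: the time-average of the four quadrant subframes is exactly a 2×2 box-filter downsample of the full-resolution frame, which is what 4× ordered-grid supersampling would produce. A quick numerical check (my own sketch, assuming the averaging model above):

```python
import numpy as np

rng = np.random.default_rng(0)
frame = rng.random((8, 8))   # small stand-in for a 4K frame

# The four quadrant subframes, one per position in each 2x2 block.
subframes = [frame[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]
temporal_avg = np.mean(subframes, axis=0)

# 2x2 box-filter downsample of the full-resolution frame: group the
# pixels into 2x2 blocks and take each block's mean.
box_down = frame.reshape(4, 2, 4, 2).mean(axis=(1, 3))
```

`temporal_avg` and `box_down` come out identical, so under this model the eye/brain is doing the box filter for free.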

If you already possess a 4K display, I imagine you could still reap performance benefits from this technique without any loss of perceived resolution. In that case, each subframe would instead be displayed using its own pixels, with the others remaining off. Here I imagine that instead of the brain perceiving the average of the four subframes, it would perceive their composite. The perceived brightness of the display would likely take a huge hit, though, as the mental averaging of pixel colors would still occur; it’s just that in this case, each pixel’s color would be averaged with three black (off) pixels for each perceived frame.
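The brightness hit in that 4K-display variant is easy to quantify under the same averaging assumption: every pixel is lit in exactly one of the four subframes, so its time-average is its value divided by four. A sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
frame = rng.random((8, 8))   # stand-in for the full 4K frame

# Each subframe lights only its own quadrant's pixels on the panel
# and leaves the other three positions dark.
sparse = []
for dy in (0, 1):
    for dx in (0, 1):
        s = np.zeros_like(frame)
        s[dy::2, dx::2] = frame[dy::2, dx::2]
        sparse.append(s)

# Time-average of the sparse subframes: full spatial detail survives,
# but every pixel has been averaged with three black frames.
perceived = np.mean(sparse, axis=0)
```

So the composite keeps all the detail of `frame` but at a quarter of the brightness.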

I’m also unsure how motion would impact things. While I refer to a perceived frame as being made up of “subframes”, the “frame” isn’t necessarily composed of subframes rendered from the scene at the same point in time. Any motion that occurs while each subframe is rendered would be captured in the next subframe, potentially degrading the quality of the perceived frames. Maybe you could spit out all of a frame’s subframes while holding the scene fixed, but then you might as well just render at the higher resolution, as I imagine the perceived fps would then match the much lower fps of any given subframe.

With displays capable of running at 120 Hz, maybe the motion that occurs between subframes would generally be small enough that the scenes captured by each subframe would still be representative of the same higher-resolution frame.

Like I said, I know next to nothing about rendering. Is it even possible to render “subframes”, as I have described them, without having to first render the higher resolution “frame”?
3 REPLIES

galopin
Heroic Explorer
With temporal antialiasing, sub-pixel camera offsets are not infrequent; the mess starts when you have to deal with ghosting everywhere, from camera movement, dynamic objects, ...

It’s not impossible, but it’s never perfect; having a motion vector surface helps, but again, never perfect.

B0RGS
Honored Guest
I just found a nearly year-old tweet from Palmer Luckey himself about exactly what I'm talking about! Guess I need to get with the times....

So even though this sort of technique seems to perform poorly with movement, can it be applied to only certain elements in a scene? Take, for instance, a virtual desktop screen, HUD elements, or anything with text.

B0RGS
Honored Guest
"galopin" wrote:
With temporal antialiasing, sub-pixel camera offsets are not infrequent; the mess starts when you have to deal with ghosting everywhere, from camera movement, dynamic objects, ...

It’s not impossible, but it’s never perfect; having a motion vector surface helps, but again, never perfect.


Sorry for the double post, but there is one more thing I'd like clarification on.

I looked into temporal AA and it sounds like it's slightly different from what I initially described. With TAA, a newly rendered frame, offset by a subpixel, is combined with the previously rendered frame before it is displayed.

If this is the case, then ghosting/smearing would be inherent to the technique. In other words, a "ghost" of the last frame will always exist within the current one. Am I understanding this correctly?
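If I've got the blending right, a one-pixel toy model shows why the ghost never fully goes away. This is my own sketch of a common TAA-style exponential history blend; the blend weight `alpha = 0.1` is an assumption, not a quote of any particular engine:

```python
import numpy as np

alpha = 0.1                      # weight of the newly rendered frame
history = np.array([1.0])        # frame 0: a bright pixel (object present)

for _ in range(5):               # frames 1..5: the object is gone
    current = np.array([0.0])
    # Displayed frame = blend of the new frame with the accumulated
    # history, so a trace of frame 0 survives into every later frame.
    history = alpha * current + (1 - alpha) * history

ghost = history[0]               # residual brightness left over from frame 0
```

Five frames after the object disappears, more than half its brightness is still smeared into the output, which is the "tie to the old frame" I mean below.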

By displaying only the newly rendered frame, as I proposed, you might break the tie to the old frame and therefore alleviate a lot of that ghosting. However, this introduces a greater potential for perceived flicker and jitter. This is where the higher refresh rate of the Rift may be of use.