Forum Discussion
Vrally
13 years ago · Protege
Connection between render target and distortion scale?
Hi,
While working a bit with the distortion scale implementation in my code I tried to understand the coupling between the distortion scale value and the size of the render target.
The Rift has a native resolution of 1280x800, but since we are doing barrel distortion the documentation suggests scaling up the render target to 1600x1000 (i.e. an increase of 25%). But how is this value related to the distortion scale? When looking through the code in the Oculus SDK demos I see no direct connection between render target size and distortion scale. The value you get from the Oculus SDK's distortion scale calculation is nowhere near 25% (the actual value is about 1.71). And I understand that we get a larger value because the lenses are not positioned in the centre of the screen (lensOffset).
But how is the optimal render target size calculated? I.e. how do we know from the distortion value of 1.71 that we need a render target that is 25% larger than the native resolution?
16 Replies
- dghost (Honored Guest): The default distortion scale value (~1.71) is how much it takes to scale the post-distortion image so that it fills the entire width of the screen. You can use lower or higher values if you want; however, you generally can't see any difference when looking through the Rift with values larger than about 1.25.
To illustrate, here is an image that uses the default distortion scale value of 1.71:
And here is one that uses a distortion scale of 1.25:
Please note, however, that the distortion scale also impacts the FOV that you use to render the scene. If you are not using the default value, you either need to use the OVR::Util::Render::StereoConfig class to get a new distortion scale/FOV combination, or you need to calculate the FOV by hand taking into account the new distortion scale value.
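For example, the by-hand FOV calculation is only a couple of lines. A rough sketch, assuming the commonly quoted DK1 numbers for the screen height (~0.0936 m) and eye-to-screen distance (~0.041 m), which you would normally read out of HMDInfo rather than hard-code:
#include <cmath>
#include <cstdio>

int main()
{
    // Assumed DK1 physical parameters (normally read from OVR::HMDInfo).
    const float vScreenSize         = 0.0936f; // physical screen height in meters
    const float eyeToScreenDistance = 0.041f;  // meters
    const float distortionScale     = 1.71f;   // whatever scale you settled on

    // The perceived half-height of the render target grows with the distortion
    // scale, so the vertical FOV used for rendering has to grow with it too.
    float perceivedHalfHeight = (vScreenSize * 0.5f) * distortionScale;
    float yfovRadians = 2.0f * atanf(perceivedHalfHeight / eyeToScreenDistance);

    printf("vertical render FOV: %.1f degrees\n", yfovRadians * 180.0f / 3.1415926f);
    return 0;
}
Plug in a smaller distortion scale and the FOV shrinks accordingly, which is exactly the coupling mentioned above.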
As far as how it relates to the render target: you need to scale the size of the render target by the distortion scale value in order to ensure that each pixel on the screen can map to a unique pixel in the off-screen buffer. So, for a 1920x1200 framebuffer with a distortion scale of 1.25 you wind up with a render target size of 2400x1500.
- Vrally (Protege): Well, I know all of this, but it doesn't answer my question. My question is: how are the suggested 25% increase in render target size and the distortion scale value of 1.71 related mathematically?
More specifically: how has the Oculus team decided that a 25% increase in render target size is needed? Or is it a value guesstimated to give an approximate 1:1 pixel ratio, in the center of the image, between the render target and the final image after barrel distortion is applied?
- dghost (Honored Guest): The 25% value is entirely arbitrary and is only an example. From page 31 of the SDK documentation:
The simplest solution is to increase the scale of the input texture, controlled by the Scale variable of the distortion pixel shader discussed earlier. As an example, if we want to increase the perceived input texture size by 25% we can adjust the sampling coordinate Scale by a factor of (1/1.25) = 0.8. Doing so will have several effects:
• The size of the post-distortion image will increase on screen.
• The required rendering FOV will increase.
• The quality of the image will degrade due to sub-sampling from the scaled image, resulting in blocky or blurry pixels around the center of the view.
Since we really don’t want the quality to degrade, the size of the source render target can be increased by the same amount to compensate. For the 1280 × 800 resolution of the Rift, a 25% scale increase will require rendering a 1600 × 1000 buffer. Unfortunately, this incurs a 1.56 times increase in the number of pixels in the source render target. However, we don’t need to completely fill the far corners of the screen where the user cannot see. Trade-offs are evident between the covered field of view, quality, and rendering performance.
The important part is that you need to scale the frame buffer by the same value you provide to the distortion shader and that you use to adjust the FOV. This gives an approximately 1:1 mapping (not quite exact, due to non-integer scaling) between the off-screen buffers and the final image. So long as you use the same value in all three places, it doesn't matter which value you use - it can be the 1.71 value provided by the SDK or 1.25.
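To put rough numbers on that, here's a small arithmetic sketch (plain C++, not SDK code) of how a chosen scale turns into a buffer size and a pixel cost; the 1.71 value is just the SDK default quoted above:
#include <cmath>
#include <cstdio>

int main()
{
    const int displayW = 1280, displayH = 800; // Rift DK native resolution

    // Two example scales: the arbitrary 25% from the documentation
    // and the ~1.71 default computed by the SDK.
    const float scales[] = { 1.25f, 1.71f };

    for (float s : scales)
    {
        int rtW = (int)ceilf(displayW * s);
        int rtH = (int)ceilf(displayH * s);
        float pixelCost = (float)(rtW * rtH) / (float)(displayW * displayH);
        printf("scale %.2f -> render target %dx%d (%.2fx the pixels)\n",
               s, rtW, rtH, pixelCost);
    }
    return 0;
}
With 1.25 this reproduces the 1600x1000 / 1.56x figures from the documentation; with the full ~1.71 scale the pixel cost is closer to 2.9x, which is the performance trade-off the documentation alludes to.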
"pixelminer" wrote:
Well, I know all of this, but it doesn't answer my question. My question is: how are the suggested 25% increase in render target size and the distortion scale value of 1.71 related mathematically?
They aren't related. The scaling factor is part of the transformation applied to the texture coordinates when rendering the final target to the screen with distortion. However, at the point where you're passing in the coordinates to OpenGL or DirectX, the actual texture coordinates should be in the 0-1 range, so the actual dimensions of the target don't have an impact. Generally speaking you should make sure your target has the expected aspect ratio, but that's all.
I cover the topic of the transformation of the coordinates, including the scaling factor, in a blog article here.
"pixelminer" wrote:
More specifically: how has the Oculus team decided that a 25% increase in render target size is needed?
The value is determined by finding the amount of distortion applied at a fit point (which defaults to the left edge of the screen on the dev kit), and then computing the scaling factor so that the post-distortion image touches that point. You can see the exact calculation here, and it's apparent that the value isn't estimated in any way: it's determined exactly, using the same distortion function that is used in the fragment shader.
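If you want to see where the ~1.71 comes from without digging through the SDK source, here's a rough reconstruction of that calculation using the commonly quoted DK1 values (in real code the constants come from HMDInfo, so treat the numbers here as illustrative):
#include <cmath>
#include <cstdio>

// Barrel distortion polynomial used by the distortion fragment shader:
// f(r) = r * (K0 + K1*r^2 + K2*r^4 + K3*r^6)
static float distortionFn(float r, const float k[4])
{
    float rsq = r * r;
    return r * (k[0] + k[1] * rsq + k[2] * rsq * rsq + k[3] * rsq * rsq * rsq);
}

int main()
{
    // Commonly quoted DK1 values (normally read from OVR::HMDInfo).
    const float k[4]           = { 1.0f, 0.22f, 0.24f, 0.0f };
    const float hScreenSize    = 0.14976f; // meters
    const float lensSeparation = 0.0635f;  // meters
    const int   screenW = 1280, screenH = 800;

    // The lens centers are offset from the center of each half of the screen.
    float lensCenterOffset = 1.0f - 2.0f * lensSeparation / hScreenSize; // ~0.152

    // Default fit point: the left edge of the half-screen, (-1, 0) in
    // normalized post-projection coordinates.
    float fitX = -1.0f, fitY = 0.0f;
    float halfScreenAspect = 0.5f * screenW / (float)screenH;
    float dx = fitX - lensCenterOffset;
    float dy = fitY / halfScreenAspect;
    float fitRadius = sqrtf(dx * dx + dy * dy);

    // Scale so that the distorted image just reaches the fit point.
    float distortionScale = distortionFn(fitRadius, k) / fitRadius;
    printf("distortion scale: %.3f\n", distortionScale); // lands around 1.71
    return 0;
}
So the 1.71 isn't a tuning knob; it falls straight out of the lens distortion coefficients and the chosen fit point.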
"dghost" wrote:
The 25% value is entirely arbitrary and is only an example. From page 31 of the SDK documentation:
The important part is that you need to scale the frame buffer by the same value you provide to the distortion shader and that you use to adjust the FOV. This gives an approximate (not quite exactly due to non-integer scaling) 1:1 mapping between the off-screen buffers and the final image. So long as you use the same value in all three places it doesn't matter which value you use - it can be the 1.71 value provided by the SDK or 1.25.
So the optimal off-screen buffer size for the Rift is actually the native Rift resolution scaled up by 71%, i.e. from 1280x800 to 2194x1371 pixels??? (non-integer scaling issues overlooked)
"jherico" wrote:
"pixelminer" wrote:
Well, I know all of this, but it doesn't answer my question. My question is: how are the suggested 25% increase in render target size and the distortion scale value of 1.71 related mathematically?
They aren't related. The scaling factor is part of the transformation applied to the texture coordinates when rendering the final target to the screen with distortion. However, at the point where you're passing in the coordinates to OpenGL or DirectX, the actual texture coordinates should be in the 0-1 range, so the actual dimensions of the target don't have an impact. Generally speaking you should make sure your target has the expected aspect ratio, but that's all.
But doesn't a higher distortion scale imply greater "zooming" into the off-screen frame buffer? Let's say I were to use an off-screen frame buffer matching the Oculus Rift's native resolution. The pixels would be more stretched in the final image if I used a distortion scale factor of 1.71 than if I used a factor of 1.25. Or have I completely misunderstood the concept of barrel distortion?
"pixelminer" wrote:
"dghost" wrote:
The 25% value is entirely arbitrary and is only an example. From page 31 of the SDK documentation:
The important part is that you need to scale the frame buffer by the same value you provide to the distortion shader and that you use to adjust the FOV. This gives an approximate (not quite exactly due to non-integer scaling) 1:1 mapping between the off-screen buffers and the final image. So long as you use the same value in all three places it doesn't matter which value you use - it can be the 1.71 value provided by the SDK or 1.25.
So the optimal off-screen buffer size for the Rift is actually the native Rift resolution scaled up by 71%, i.e. from 1280x800 to 2194x1371 pixels??? (non-integer scaling issues overlooked)
There is no "optimal" scaling value - it all comes down to preference and performance balancing. Again, the most important part is that the values are consistent in all places.
Personally, I prefer values > 1.25 because otherwise I can see the inside edges of the rendered image - going with 1.25 gives a decent buffer that makes it really hard to see the edges of the screen regardless of how the Rift is positioned on your head. With the Rift DK, however, the lenses make it nearly impossible to actually see the outside edges, so rendering screen edge to screen edge is of questionable value. If you compare how much of the image is visible when looking through the Rift vs. how much is visible on screen, the difference is quite shocking.
Given that the HD prototype they've demoed (supposedly) uses different optics, I wouldn't be surprised if it becomes more important to use the default values, though.
"pixelminer" wrote:
But doesn't a higher distortion scale imply greater "zooming" into the off-screen frame buffer? Let's say I were to use an off-screen frame buffer matching the Oculus Rift's native resolution. The pixels would be more stretched in the final image if I used a distortion scale factor of 1.71 than if I used a factor of 1.25. Or have I completely misunderstood the concept of barrel distortion?
Short answer: no, it doesn't imply greater zooming if you use the same value.
Long answer: the distortion scale value effectively increases the physical area that your render target occupies on the screen after distortion, which means that the resolution also has to be increased in order to maintain the same pixel density. The FOV also has to be adjusted to account for this increased physical area.
"pixelminer" wrote:
But doesn't a higher distortion scale imply greater "zooming" into the off-screen frame buffer? Let's say I were to use an off-screen frame buffer matching the Oculus Rift's native resolution. The pixels would be more stretched in the final image if I used a distortion scale factor of 1.71 than if I used a factor of 1.25. Or have I completely misunderstood the concept of barrel distortion?
A higher distortion scale would push the boundaries of the image further out, but bear in mind you're not supposed to just pick a distortion scale. You need to either use the SDK distortion scale or make the equivalent calculation in your own code.
What isn't super obvious is that the distortion scale and the FOV are linked. The SDK works from the assumption that you want to specify the fit point and then derive the FOV and distortion scale from that. If you use a bigger distortion scale you have to use a correspondingly bigger FOV, which ends up meaning that more information actually gets pushed out to the edges of the texture and you have less resulting detail in the center.
It would probably be more intuitive if the SDK let you work in either direction - i.e. "given this FOV, what should my distortion scale be?" as well as "given this distortion scale / fit point, what should my FOV be?" - but it only works the second way.
If you pick a distortion scale other than what the SDK computes for you, but don't change other elements like the corresponding FOV, then your resulting image will be wrong, albeit perhaps only imperceptibly.
- jherico (Adventurer): Sorry, we may be talking at cross purposes here. "Scale" is being used in two contexts in this conversation:
- The difference in size between the off-screen texture being used as the source image for the barrel warp transform and the physical screen resolution
- The scaling amount applied to a computed texture coordinate to ensure that the rendered image covers the appropriate amount of the physical screen.
I'm pretty much talking about the second one. It's tied to the final field of view you want the wearer to have in the VR environment.
The first one is tied to the quality of the final rendered image, but it is only one of a number of factors. Also important is whether you're using mipmapping or other multi-sampling mechanisms. Doubling the size of your input texture doesn't automatically improve the detail, and in fact it could actually worsen your image due to aliasing artifacts, so multi-sampling and mipmapping probably have a bigger impact on the final quality than how big your offscreen buffer is. I just tend to lock it to the next highest power of 2, which in the case of the Rift ends up being 1024x1024 (because I have one offscreen buffer for each eye, so my eye render target is actually 640x800).
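For what it's worth, here's a rough sketch of that per-eye sizing (plain C++, numbers for the DK at a scale of 1.0; bump the scale if you enlarge the render target):
#include <cstdio>

// Round up to the next power of two (e.g. 640 -> 1024, 800 -> 1024).
static int nextPowerOfTwo(int v)
{
    int p = 1;
    while (p < v)
        p <<= 1;
    return p;
}

int main()
{
    const int displayW = 1280, displayH = 800;
    const float distortionScale = 1.0f; // per-eye target at native size

    int eyeW = (int)(displayW / 2 * distortionScale); // 640 per eye at scale 1.0
    int eyeH = (int)(displayH * distortionScale);     // 800 per eye at scale 1.0

    printf("per-eye target %dx%d -> power-of-two texture %dx%d\n",
           eyeW, eyeH, nextPowerOfTwo(eyeW), nextPowerOfTwo(eyeH));
    return 0;
}
At scale 1.0 that lands on the 1024x1024 texture mentioned above.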