Forum Discussion
JMF
11 years ago · Honored Guest
Barrel distortion NEW SOLUTION MADE!!!
I would like to point out the main disadvantages of the current barrel distortion solution, known as "pixel mapping":
• Black borders around the image (21% of the screen is not in use)
• Rendering to a buffer at higher resolution than the screen slows down FPS
• The current distortion is a post-effect
Now that you know this mystery, I can describe the solution that I invented.
Basically, let's see how 3D objects are projected onto a flat "screen":

This is an oversimplified presentation.
To create a distorted image, a projection onto a curved surface can be done, but you must know that the computer treats single
polygons as the projected objects...
and a wall divided into 3 polys looks like this:


but... we can cheat it!
What is the difference between a flat projection and a curved projection?
The distance between the focal point and the pixel! :D
simple presentation –>
Moving the focal point for every pixel creates a curved image.
How to achieve it?
Assume that you have a 1920x1080 screen and a 110° horizontal FOV:
Xres = 1920
Yres = 1080
FOV_H = 110
The distance between a pixel and the focal point, for the middle pixel of the left/right edge, is:
DistancePixToFocal = 960/ sin( FOV_H / 2 )
Focal4ThisPix = sqrt( DistancePixToFocal**2 - DistanceFromCenter**2 )
Focal point values can be precomputed to a file and then read back when needed ;)
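As a hedged sketch of the formulas above (my own Python, not the author's plugin code; all names are illustrative):

```python
import math

XRES, YRES = 1920, 1080
FOV_H = 110.0  # horizontal field of view in degrees

# Distance from the focal point to the middle pixel of the left/right edge,
# per the post's formula (960 = Xres / 2; math.sin takes radians).
DIST_PIX_TO_FOCAL = (XRES / 2) / math.sin(math.radians(FOV_H / 2))

def focal_for_pixel(x, y):
    """Per-pixel focal distance, from the pixel's distance to screen center."""
    dist_from_center = math.hypot(x - XRES / 2, y - YRES / 2)
    return math.sqrt(DIST_PIX_TO_FOCAL**2 - dist_from_center**2)

# As the post suggests, the values can be precomputed once (here a coarse
# grid) and read back when needed instead of recomputing every frame.
focal_table = [[focal_for_pixel(x, y) for x in range(0, XRES, 64)]
               for y in range(0, YRES, 64)]
```

The focal distance equals DistancePixToFocal at the screen center and shrinks toward the edges, which is what curves the projection.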
Aspherical distortion for Oculus Rift
The graph shows different projections as a function of pixel distance.
Changing DistancePixToFocal to the value from the graph creates the following effects:
These graphs represent the relationship between the distance from a pixel to the center of the screen and the pixel-to-focal-point distance.

If you like it, please leave a comment...
I'm already writing code for this.
EDIT
I changed links to images ;)
15 Replies
- geekmaster (Protege): This "new" method you invented has been a standard (commonly used for domed screens) for many years:
https://developer.oculusvr.com/forums/viewtopic.php?f=33&t=965&p=11107#p11107
Sharing your implementation is of great service to the VR community. Thanks!
- MrMonkeybat (Explorer): As well as the distortion there is also the chromatic aberration, which is harder to correct for with something other than a post-processing effect, unless we used something like sub-pixel rendering. Another advantage of post-processing distortion is that you can do timewarp at the same time. Hmm, I wonder if it would be possible to create an achromatic lens and render with the same spherical distortion, so the timewarp could be done by shifting the image without any extra distortion steps.
- Fredz (Explorer)
"mrmonkeybat" wrote:
As well as the distortion there is also the chromatic aberration which is harder to correct for with something other than a post processing effect, unless we used something like sub pixel rendering.
If you ray-trace for distortion, you can do it for three different wavelengths as well, to account for chromatic aberration. There is no need for a post-processing effect.
The difficulty in the ray-tracing approach is coming up with the correct parameters for the aspheric lens surface profile, and implementing intersection and normal calculations for aspheres, which is not trivial (an iterative solution, but it can be pre-processed).
- steve (Honored Guest)
"Fredz" wrote:
If you ray-trace for distortion you can do it for the three different wavelengths as well to account for chromatic aberration.
I wanted to use a mesh to overcome the slowness of the Raspberry Pi rendering, and it was the chromatic aberration that convinced me that it wasn't possible. It isn't a matter of drawing three lines at different wavelengths.
Red, green and blue smear out across multiple pixels with fractionally different smear amounts. Trying to duplicate the smear with lines would require drawing an "affected area" hundreds of pixels thick for every single pixel.
So the approach of a pixel shader mapping an input texture to an output texture is as much of a requirement as the aberration effect of the physical lenses themselves. I think Carmack already gave up on meshes?
- JMF (Honored Guest)
"geekmaster" wrote:
This "new" method you invented has been a standard (commonly used for domed screens) for many years:
https://developer.oculusvr.com/forums/viewtopic.php?f=33&t=965&p=11107#p11107
Sharing your implementation is of great service to the VR community. Thanks!
You are missing the whole thing. This "moving focal point" method is not a ray-tracing thing; it's more like dividing the screen into tiny rings and using different camera settings for each. I think this link could have misled you:
, here is the updated version
I'm already making it work in Maxon Cinema 4D as a Python plugin.
"mrmonkeybat" wrote:
As well as the distortion there is also the chromatic aberration which is harder to correct for with something other than a post processing effect, unless we used something like sub pixel rendering. Another advantage of post processing distortion is you can do timewarp at the same time. Hmm I wonder if it would be possible to create an achromatic lens and render with the same spherical distortion so the timewarp could be by shifting the image without any extra distortion steps.
Yes, you're right. The best way to handle chromatic aberration is the post-effect pixel mapping. But it can be done better 8-)
Currently an RGB channel shift is used, but real chromatic aberration (produced by the Oculus Rift's lenses) has a smooth transition between colors. By using a pixel-color-hue to shift-multiplier graph, the program can read how much it must move each pixel to get a perfect correction.
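A minimal sketch of such a hue-to-multiplier lookup (my illustration; the curve values are invented, and a real curve would be measured from the lens):

```python
import colorsys

# Hypothetical pixel-color-hue -> shift-multiplier graph, as (hue, multiplier)
# control points with hue in [0, 1]. The numbers are made up; a real curve
# would come from measuring the lens' dispersion.
HUE_CURVE = [(0.0, 1.00),    # red
             (1 / 6, 0.98),  # yellow
             (1 / 3, 1.00),  # green
             (2 / 3, 1.06),  # blue
             (1.0, 1.00)]    # wraps back to red

def shift_multiplier(r, g, b):
    """How much to move a pixel of this RGB color (components in 0..255)."""
    hue, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    # Linear interpolation between the surrounding control points.
    for (h0, m0), (h1, m1) in zip(HUE_CURVE, HUE_CURVE[1:]):
        if h0 <= hue <= h1:
            t = 0.0 if h1 == h0 else (hue - h0) / (h1 - h0)
            return m0 + t * (m1 - m0)
    return HUE_CURVE[-1][1]
```

Since hue is independent of brightness, a yellow pixel and a dark yellow pixel read the same multiplier, while a blue pixel reads a larger one.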
- 2EyeGuy (Adventurer): There are only 3 possible pixel hues: red, green, and blue.
- JMF (Honored Guest)
"2EyeGuy" wrote:
There are only 3 possible pixel hues: red, green, and blue.
That's why the graph is needed. If you have a pixel in R:G:B and it's 255:255:0 (basically yellow ;) ) or 128:128:0 (dark yellow), the program can read the offset value corresponding to this color from the graph. As you can see on this sample graph, a blue-colored pixel would get more offset than a yellow one.
Here is the relationship between the "hue" of colors with maximal saturation in HSV and HSL and their corresponding RGB coordinates.
Chromatic aberration occurs because lenses made of glass have a different magnification "power" for each color hue.
Mapping pixels by their color hue is a perfect solution for chromatic aberration correction, but combined with pincushion distortion correction it's like double mapping. I think that producing the barrel distortion at render time by maneuvering the focal point is the fastest solution.
- bluenote (Explorer)
"JMF" wrote:
You are missing the whole thing. This "moving focal point" method is not a ray-tracing thing; it's more like dividing the screen into tiny rings and using different camera settings for each.
But how do you want to do that at the fragment level? Typically you convert from world coordinates to clip coordinates in the vertex shader, and the fragment shader just loops over all pixels of each triangle. Using a different focal point per pixel means that you can no longer just transform the vertices of a triangle globally. And varying the focal point does not result in an affine transformation, so there is no straightforward way to do that with rasterization. There is actually a paper which (iirc) also approaches distortion at the transformation level (so it is similar to your varying focal point) and tries to solve the issues of the non-affine transformation by GPU-based tessellation (better read it, I'm not sure if I remember that correctly).
- AntDX316 (Honored Guest): The best way for DK2 to work with all games is the ability to have NATIVE DK2 head tracking and correct image mirroring in ALL games.
The foundation is there; it's just the way the camera should be for DK2 to work. It's like the tracking doesn't exist : (
It would be so simple if people really cared enough.
It's like trying to find aftermarket Daewoo Leganza parts: only a few exist, like the lowering springs, but not really anything else, whereas for Honda Civics you can find parts for EVERYTHING and it's so cheap.
- JMF (Honored Guest)
"bluenote" wrote:
"JMF" wrote:
You are missing the whole thing. This "moving focal point" method is not a ray-tracing thing; it's more like dividing the screen into tiny rings and using different camera settings for each.
But how do you want to do that at the fragment level? Typically you convert from world coordinates to clip coordinates in the vertex shader, and the fragment shader just loops over all pixels of each triangle. Using a different focal point per pixel means that you can no longer just transform the vertices of a triangle globally. And varying the focal point does not result in an affine transformation, so there is no straightforward way to do that with rasterization. There is actually a paper which (iirc) also approaches distortion at the transformation level (so it is similar to your varying focal point) and tries to solve the issues of the non-affine transformation by GPU-based tessellation (better read it, I'm not sure if I remember that correctly).
You pointed out the right thing. This method would only work with ray tracing :(
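For completeness, a hedged sketch of how the varying focal point maps onto a ray tracer (my own illustration, not code from the thread; all names are made up):

```python
import math

XRES, YRES = 1920, 1080
FOV_H = 110.0
# Focal distance for the middle edge pixel, per the earlier formula.
DIST_PIX_TO_FOCAL = (XRES / 2) / math.sin(math.radians(FOV_H / 2))

def ray_direction(x, y):
    """Unit ray direction for pixel (x, y) when the projection is curved."""
    dx = x - XRES / 2
    dy = y - YRES / 2
    # Per-pixel focal distance: off-center pixels sit closer to the eye,
    # which is exactly what bends the projection surface.
    focal = math.sqrt(DIST_PIX_TO_FOCAL**2 - dx * dx - dy * dy)
    length = math.sqrt(dx * dx + dy * dy + focal * focal)
    return (dx / length, dy / length, focal / length)
```

Note that every pixel ends up the same distance from the eye, i.e. the pixels effectively lie on a sphere, which matches the curved projection the original post describes.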
But I found an interesting publication on this site.
Here are some pics:
