So I've heard multiple times, from John Carmack and in the OVROverlay documentation, that the timewarp/compositor layers get "raytraced through the lenses, improving the clarity of textures displayed on them."
I tried to read up about rendering using raytracing, but I still have a hard time grasping the way it works with OVROverlay. What exactly is happening during the rendering process when compositor layers are used?
S7 Exynos Nougat. 2017 Gear VR. Public test channel.
I don't think they are literally raytraced, but the end result approximates it. I think what JC meant is that they perform the inverse of the barrel warp and chromatic aberration, which yields a result that closely resembles what literal raytracing would do. There's no need to actually trace rays, since the optics and display are fixed; the warp and aberration-inversion coefficients can be represented parametrically and quickly applied to the submitted eye textures when drawn.
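I imagine the per-pixel application looking something like this - a minimal plain-C sketch, where the polynomial coefficients (k1, k2) and per-channel scales are made-up placeholders standing in for whatever the SDK derives from the lens calibration, not the actual libOVR code:

/* Fixed parametric warp sketch: for each output pixel, compute a radially
 * distorted UV per color channel and sample the submitted eye texture there. */
typedef struct { float u, v; } UV;

/* Hypothetical distortion parameters for one eye/lens. */
typedef struct {
    float k1, k2;     /* radial polynomial coefficients          */
    float chroma_r;   /* extra radial scale for the red channel  */
    float chroma_b;   /* extra radial scale for the blue channel */
} WarpParams;

/* Map an output-space UV (centered on the lens axis, roughly [-1,1])
 * to a source UV in the eye texture, for one color channel. */
static UV warp_channel(UV p, const WarpParams *w, float chroma_scale)
{
    float r2 = p.u * p.u + p.v * p.v;                  /* squared radius     */
    float s  = (1.0f + w->k1 * r2 + w->k2 * r2 * r2)   /* barrel polynomial  */
               * chroma_scale;                         /* per-channel offset */
    UV out = { p.u * s, p.v * s };
    return out;
}

/* One output pixel needs three lookups: green at the nominal radius,
 * red and blue at slightly different radii to undo the lens dispersion. */
static void warp_pixel(UV p, const WarpParams *w, UV *r, UV *g, UV *b)
{
    *r = warp_channel(p, w, w->chroma_r);
    *g = warp_channel(p, w, 1.0f);
    *b = warp_channel(p, w, w->chroma_b);
}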
In the documentation it does mention conservation of energy, meaning it's not doing a pure, simple bilinear sampling - they're doing some custom trickiness in there to ensure gamma isn't distorted along the way.
EDIT: That's for the PC libOVR; I think with the Gear VR it's much more stripped down, to just the bare barrel distortion and chromatic aberration inversion done as cheaply as possible, possibly without the gamma-correct filtering. You can trust and believe there's absolutely no sort of raytracing going on with mobile - it's all approximated using conventional rendering techniques.
No, this is absolutely raytracing, just a very simple form of it. I used it in "dOculus" long before Oculus VR did. All you need to do is trace a mathematically barrel-distorted ray directly to a quad, which gives you a set of UV coordinates. Then you can do the texture picking directly on those calculated UV coordinates. https://www.youtube.com/watch?v=U1Xp_t9xKko
With traditional scanline compositing, the quad would first be drawn into the target buffer (and probably get a bit blurry through linear interpolation and possibly mipmapping or the like; also, if the buffer is not huge, the borders are resolution-bound), and then there would be a barrel-distorted lookup from the compositor, introducing a second interpolation that makes it even blurrier.
Since the raytracing needs a chromatic aberration lookup for each color anyway, it is even possible to calculate the lookup with subpixel precision. And since the raytracing calculation for a quad (or a plane) ends up being not much more than a dot product and a divide per component, this is much simpler than the multiple drawing and picking passes needed with a scanlined intermediate buffer. That makes it more efficient, which means preserving energy! Having a crisper and better picture is only a positive side effect 🙂
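Roughly, the per-pixel math boils down to something like this - a generic plain-C reconstruction of a ray-to-quad UV lookup, not my actual shader, with the quad represented as a corner plus two perpendicular edge vectors just for illustration:

#include <stdbool.h>

typedef struct { float x, y, z; } Vec3;

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  sub(Vec3 a, Vec3 b) { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static Vec3  add_scaled(Vec3 a, Vec3 d, float t)
{ Vec3 r = { a.x + d.x * t, a.y + d.y * t, a.z + d.z * t }; return r; }

/* Rectangular layer: one corner, two perpendicular edge vectors, its normal. */
typedef struct { Vec3 corner, edge_u, edge_v, normal; } Quad;

/* 'dir' is the already barrel-distorted view ray for this output pixel.
 * Returns true and writes (u, v) in [0,1] when the ray hits the quad. */
static bool ray_quad_uv(Vec3 origin, Vec3 dir, const Quad *q, float *u, float *v)
{
    float denom = dot(dir, q->normal);
    if (denom == 0.0f) return false;                /* ray parallel to the quad */

    float t = dot(sub(q->corner, origin), q->normal) / denom;
    if (t <= 0.0f) return false;                    /* quad is behind the ray   */

    /* Hit point relative to the corner; project onto the edges for UV. */
    Vec3 local = sub(add_scaled(origin, dir, t), q->corner);
    *u = dot(local, q->edge_u) / dot(q->edge_u, q->edge_u);
    *v = dot(local, q->edge_v) / dot(q->edge_v, q->edge_v);
    return *u >= 0.0f && *u <= 1.0f && *v >= 0.0f && *v <= 1.0f;
}

From there the (u, v) pair goes straight into the texture fetch for the color channel the ray was distorted for.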
PS: Thank you John Carmack for presenting this as your idea... fast sqrt anyone?
I was waiting for an Oculus rep to provide the official explanation before marking an accepted answer. However, a brief read through your explanation and a review of your title make me confident that your implementation is close enough to what John Carmack ended up doing. The details of how raytracing is done in shaders are incredibly enlightening! (Even though I can't follow all of it :))
@imperativity
thewhiteambit mentioned that chromatic de-aberration is done using his raytracing method, but John Carmack mentioned on Twitter that Oculus' implementation doesn't do this. If this is due to a perceived design limitation rather than a simple optimization, it might be worth connecting the two of you to discuss whether thewhiteambit's implementation could be applied to Carmack's. Just a thought 🙂
@firagabird I think the reason why de-aberration is not done (if it isn't) is simply a performance optimization. It can't cost that much, though, since it is only a one-component color lookup. If it is on the Gear VR, then yes, everything is costly on a phone. On a PC GPU the calculation is a joke, and the texture lookup only has to be done per color component - but I don't see chromatic aberration here. The color components might be aligned in a bad order for GPU fetching, but I think there should be a way to make this fast with the caches. Also, since Oculus VR uses a pentile-matrix display, the red and blue lookups can be shared between two pixels, reducing them to half a lookup each while creating something like ClearType for fonts. But I think this doesn't apply to PC anyway.
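The pentile sharing could be laid out roughly like this - just my guess at one possible arrangement, in plain C with dummy fetch stubs, not anything from the actual compositor:

typedef struct { float u, v; } Uv;
typedef struct { float r, g, b; } Rgb;

/* Placeholder for a real texture fetch of one color channel at 'uv';
 * returns a dummy value so the sketch stays self-contained. */
static float fetch(Uv uv, int channel)
{
    (void)uv; (void)channel;
    return 0.5f;
}

/* Two horizontally adjacent output pixels reuse one red and one blue
 * sample, so a pixel pair costs 2 green + 1 red + 1 blue fetches
 * instead of 2 + 2 + 2. The UVs passed in are assumed to already be
 * chromatic-aberration-corrected lookup positions. */
static void shade_pair(Uv uv_r, Uv uv_g_left, Uv uv_g_right, Uv uv_b,
                       Rgb *left, Rgb *right)
{
    float shared_r = fetch(uv_r, 0);   /* one red sample for both pixels  */
    float shared_b = fetch(uv_b, 2);   /* one blue sample for both pixels */

    left->r  = shared_r;  right->r = shared_r;
    left->b  = shared_b;  right->b = shared_b;
    left->g  = fetch(uv_g_left, 1);    /* green stays per pixel */
    right->g = fetch(uv_g_right, 1);
}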
I added my old Uber-Shader for download that was used for everything in that demo.
(Including the Leap Motion pseudo depth-compositing with a lousy alpha stencil for the hands - later shamelessly stolen by Leap Motion, after founder and CEO David Holz wrote me in person about how great my demo for their competition was and then didn't even include me in the "honorable mentions" - just many lousy Unity3D demos instead.)
The shader might not be the fastest solution, but I guess even a kid could read it 🙂
@Carmack: There is also a brash, fast ray/sphere-intersection test in the shader that I haven't seen anywhere else. I used it to paint the fingertips. Being reduced to the bare minimum, the test cannot even say where a sphere has been hit, only that it has been hit. Good enough for the world's fastest BVH I guess 😉
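For reference, the boolean form of such a test is basically this - the textbook version in plain C, not my shader code verbatim:

#include <stdbool.h>

typedef struct { float x, y, z; } V3;

static float dot3(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Hit/no-hit only: no sqrt, and the hit point is never computed.
 * Assumes 'dir' is normalized and the origin lies outside the sphere. */
static bool ray_hits_sphere(V3 origin, V3 dir, V3 center, float radius)
{
    V3 L = { center.x - origin.x, center.y - origin.y, center.z - origin.z };
    float tca = dot3(L, dir);              /* closest approach along the ray  */
    if (tca < 0.0f) return false;          /* sphere center is behind the ray */
    float d2 = dot3(L, L) - tca * tca;     /* squared distance ray<->center   */
    return d2 <= radius * radius;          /* within the radius means a hit   */
}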