Forum Discussion
mattnewport
Protege · 11 years ago
Techniques for improving text quality?
John Carmack has mentioned in some of his Oculus talks that there are ways to get higher quality text display on an HMD by rendering it separately from the rest of the scene. I haven't found any details about how to go about doing this though. Does anyone have any links or know any more details on what techniques he's talking about?
8 Replies
- g4c (Explorer): I think he meant: render into a texture that is applied to a surface (which will be the case anyhow most of the time) and, if possible, use good supersampling. So if you have a quad that will never span more than, say, 500 pixels, make the texture 1024² or larger. Then set the material's texture filtering to use mipmaps, e.g. trilinear. All of this packs more information into the anti-aliased edge pixels, resulting in clearer text.
As for the font texture atlas itself, make sure it is generated with good anti-aliasing, not binary pixels.
Also, if possible, use text FG/BG colors with fairly even RGB values (black ... grey ... white, ideally), not pure red for example; this also gives smoother anti-aliased edges because all of the RGB screen elements get used.
And of course try not to use a font that's really skinny, or if you do, make sure it's large enough that the stroke width occupies two or more pixels.
And then try to keep the text surface nearly perpendicular to the camera.
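A minimal C++/OpenGL sketch of the texture setup g4c describes, assuming a GL context and loader are already in place; the 1024² size and the textPixels buffer are illustrative placeholders, not anything from the thread:

```cpp
// Pre-rendered, anti-aliased text at higher resolution than the quad will
// ever cover on screen (placeholder; e.g. rasterised offline or with FreeType).
extern const unsigned char* textPixels;  // 1024x1024 RGBA (assumed)

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 1024, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, textPixels);

// Build the mip chain so minification averages many texels per pixel.
glGenerateMipmap(GL_TEXTURE_2D);

// Trilinear filtering: blend between mip levels as the quad shrinks or
// tilts, which is what packs the extra texel information into the final
// anti-aliased edge pixels.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
```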
- mattnewport (Protege): I think that advice will help for rendering text 'conventionally' into your regular eye buffers. I think he was talking about something a bit different though, similar to what he does for panoramas and virtual cinema on Gear VR, where the rendering is done to a separate buffer and some techniques are used (this is where I'm unclear on the details) to get better effective resolution by taking advantage of the fact that you're dealing with simple geometry, possibly by doing custom distortion/chromatic aberration correction rather than relying on the regular distortion.
One easy tip I picked up from one video was to render text as pure green on a mostly black background (like an old-school terminal). For my current use case (an in-game debug menu/console that needs a lot of dense small text to be legible) I've found this helps. It has two benefits: first, OLED panels have higher effective resolution for green than for red and blue due to the pentile layout; and second, there are no chromatic aberration artifacts because you're dealing with green only. This trick is probably less relevant for end-user-facing text but could still be useful in some situations.
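For what it's worth, a hedged sketch of that green-on-black style as a fragment shader (GLSL embedded in a C++ string; the atlas layout and uniform names are assumptions for illustration):

```cpp
// Only the green channel is ever lit, so chromatic aberration cannot
// produce red/blue fringes, and the pentile panel's denser green subpixel
// grid gives the best effective resolution.
static const char* kGreenTextFrag = R"(
    #version 150
    uniform sampler2D uFontAtlas;  // single-channel anti-aliased atlas (assumed)
    in vec2 vUV;
    out vec4 fragColor;
    void main() {
        float coverage = texture(uFontAtlas, vUV).r;
        fragColor = vec4(0.0, coverage, 0.0, 1.0);  // pure green on black
    }
)";
```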
- g4c (Explorer): I got from it that he was talking about pre-rendering into a "pixelmap" that is larger than the 3D surface and then setting good anti-aliasing for blitting to the surface. I suppose this could be described as "rendering into an offscreen buffer".
When a surface is moving (and it mostly is in VR; even when sitting still you're wandering more than ±1 pixel) it will convey more sub-pixel information than a surface with a ≤1:1 texture would. The brain is able to temporally interpolate sub-pixel info from the aliasing dynamics.
Hmmm... so you speculate he might have meant doing it closer to the screen than the engine (not passing it through the engine's cameras), maybe writing it into the screen buffer after the engine's done its work?
- ZeroWaitState (Honored Guest):
Layers
Similar to the way a monitor view can be composed of multiple windows, the display on the headset can be composed of multiple layers. Typically at least one of these layers will be a view rendered from the user's virtual eyeballs, but other layers may be HUD layers, information panels, text labels attached to items in the world, aiming reticles, and so on.
Each layer can have a different resolution, can use a different texture format, can use a different field of view or size, and might be in mono or stereo. The application can also be configured to not update a layer's texture if the information in it has not changed. For example, it might not update if the text in an information panel has not changed since last frame or if the layer is a picture-in-picture view of a video stream with a low framerate.
Applications can supply mipmapped textures to a layer and, together with a high-quality distortion mode, this is very effective at improving the readability of text panels.
- mattnewport (Protege): He talks a little bit about it in his GDC keynote (around 55:42; the link should go directly to the right point). He doesn't give a whole lot of details, unfortunately.
- mattnewport (Protege): More discussion of this technique in his Oculus Connect keynote at around 33:09. I think to do this properly you need to account for the optics (similar to what the regular distortion path does), and unfortunately there's a dearth of documentation on that from Oculus, especially now with the move away from application distortion rendering. I think the code is still included in the full source SDK download, though, so it should be possible to reverse engineer some of what's required from there. I haven't looked at the mobile SDK, but maybe there's some example code in there of applying this technique.
- brantlew (Adventurer): Yes, layers is the way to go. If you're talking about PC, then the 0.6 SDK has significant support for this and several samples demonstrating this technique.
- mattnewport (Protege):
"brantlew" wrote:
Yes, layers is the way to go. If you're talking about PC, then the 0.6 SDK has significant support for this and several samples demonstrating this technique.
I've already been experimenting a bit with layers in 0.6.0, but they don't seem to be the full story for implementing the techniques mentioned in the videos. My current understanding is that quad layers still have distortion applied to them (though the docs don't specifically say so), and based on the HSW display issues I'm seeing, QuadHeadLocked layers at least are unusable for UI as currently implemented. I also think it's going to be important to be able to properly integrate layers with scene depth for many of these use cases, which the current layer APIs don't directly support.
Direct layers would allow you to implement the custom distortion techniques mentioned in the videos, but without some documentation on the details of the distortion caused by the optics and how to correct for it, that's going to be difficult at the moment. If I find time I'm going to try to reverse engineer it from the SDK source and start experimenting, but it would be a lot easier/quicker if we had some good documentation on the DK2 optics.
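For a sense of the shape of the problem, here is an illustrative C++ sketch of the classic DK1-era approach: a radial polynomial distortion with per-channel scaling for chromatic aberration. The actual DK2 path in the SDK is mesh-based with coefficients fitted per headset, so the parameters here (k[0..3], caScale) are placeholders, not real DK2 values:

```cpp
struct Vec2 { float x, y; };

// Radial scale factor: r' = r * (k0 + k1*r^2 + k2*r^4 + k3*r^6).
static float DistortionScale(float rSq, const float k[4]) {
    return k[0] + rSq * (k[1] + rSq * (k[2] + rSq * k[3]));
}

// Given a point in lens-centred, normalised screen coordinates, return the
// texture coordinate to sample for one colour channel. caScale differs
// slightly per channel (red/green/blue) to counteract the lens spreading
// the channels by different amounts.
static Vec2 WarpTexCoord(Vec2 p, const float k[4], float caScale) {
    float rSq = p.x * p.x + p.y * p.y;
    float s   = DistortionScale(rSq, k) * caScale;
    return { p.x * s, p.y * s };
}
```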