Hi, I'm trying to make a UI that contains text, but I can't get a clean render "at all distances". With the Canvas Scaler and the "dynamic pixels per unit" parameter I get a clean render, but if I move the UI away or move around with the tracker/room scale, the text becomes dirty: not just unreadable, which is expected, but also sparkling. Is there a way to display clean text that can move around in space?
Text rendering in VR is tricky. There are a few other threads on this, such as https://forums.oculus.com/community/discussion/comment/308094 and https://forums.oculus.com/developer/discussion/31204/whats-the-status-of-reading-static-text-in-oculus-rift. Here are a few things that can help:
1) Enable anti-aliasing. 4x MSAA often works well on modern hardware. This will only work if you're using the forward rendering path.
2) Increase UnityEngine.VR.VRSettings.renderScale to raise the render resolution. Warning: this will cost a lot of performance.
3) Use high-resolution fonts with anisotropic filtering.
4) Use signed distance field (SDF) font rendering (available in packages such as TextMeshPro).
5) If the text is static, you can use OVROverlay to render it to the Rift and Gear VR with a special technique that bypasses double-sampling and increases sharpness.
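A minimal script sketch for points 1) and 2), assuming a Unity 5.x-era project where UnityEngine.VR.VRSettings is still the current API (newer versions moved this into the XR namespace), with the MSAA level and render scale as placeholder values you'd tune yourself:

```csharp
using UnityEngine;
using UnityEngine.VR;

public class VRTextQuality : MonoBehaviour
{
    void Start()
    {
        // 1) 4x MSAA -- only has an effect on the forward rendering path.
        QualitySettings.antiAliasing = 4;

        // 2) Supersample the eye buffers. Sharper text, but it costs a lot of GPU time.
        VRSettings.renderScale = 1.5f;
    }
}
```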
OK, thanks for your answer. I'm already using points 1) and 3). Point 2) doesn't seem to be a viable solution. My text is not static, so point 4) is definitely the way to go. Is there any free or trial asset that uses the distance field technique? I'm currently using the Unity beta; is there anything about this on the Unity roadmap, or in the Professional edition?
OK, experimenting with Typogenic (https://github.com/Chman/Typogenic) I get some good results. It's perfect when I get close to the text, but when I put the text far from the camera there is still pixelation. So I'm looking for something like mipmapping for textures. Should the distance field technique resolve that problem?
> But when I put the text far from the camera there is still pixelation.
SDF is mainly helpful for magnification. For minification you should absolutely use texture filtering. Make sure trilinear or anisotropic filtering is enabled on your font texture using font.material.mainTexture.filterMode.
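For example, something like the following sketch (the `font` field and the anisotropy level are placeholders you'd set for your own project; dynamic font atlases can be rebuilt at runtime, so you may need to reapply this):

```csharp
using UnityEngine;

public class FontFilterSetup : MonoBehaviour
{
    public Font font;          // assign the font used by your UI text
    [Range(0, 16)]
    public int anisoLevel = 4; // higher values keep text sharp at glancing angles

    void Start()
    {
        Texture tex = font.material.mainTexture;
        tex.filterMode = FilterMode.Trilinear; // smooth blending between mip levels
        tex.anisoLevel = anisoLevel;           // reduces blur at oblique viewing angles
    }
}
```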
One way to control the rendering of very small text is to render the text block to a render target, generate good mipmaps, and let the HW do its thing. This is the best solution for text that can 'spin' 360° at a distance.
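A rough sketch of that render-target setup, assuming a dedicated camera that only sees the text and a quad that displays the baked result (`textCamera`, `textQuad`, and the 1024x1024 size are placeholders, not anything from the thread):

```csharp
using UnityEngine;

public class TextToRenderTarget : MonoBehaviour
{
    public Camera textCamera;  // orthographic camera rendering only the text layer
    public Renderer textQuad;  // quad in the world that displays the baked text

    void Start()
    {
        var rt = new RenderTexture(1024, 1024, 0);
        rt.useMipMap = true;                   // mip chain so distant text filters cleanly
        rt.filterMode = FilterMode.Trilinear;
        rt.anisoLevel = 4;
        rt.Create();

        textCamera.targetTexture = rt;         // camera draws the text block into the texture
        textQuad.material.mainTexture = rt;    // hardware filtering handles minification
    }
}
```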
But you might actually get better results using TextMesh Pro, without needing a render target. You get better sub-pixel evaluation with SDF, less blurriness with small text, and fewer artifacts. I think it's hard to get better rendering of small text that moves/scales in a scene than what TMPro delivers with its SDF system (and it looks especially great zoomed in).
The code also takes into account features like outlines that get thinner than 1 pixel, so when your text is very far/small, thin outlines look and behave correctly.
But sampled SDF has one weakness, and that is anisotropic sampling (when your text is at an extreme viewing angle). So the vertex shader evaluates the angle and exposes a controllable filter that can be used to pretty much eliminate shimmer in this situation.
OK, thanks for all the information. Using the Glow option of the Typogenic shader (with texture filtering and mipmaps) gives a good render for minification without much degradation for magnification. Right now I'm just using black text on a white background with a simple font for testing. Maybe I will use TMPro later if I need more effects.
Here is a sample of smooth sub-pixel font rendering in TMPro. (Pick a font and start typing.) You can scale/move it at sub-pixel positions without any weird rendering happening.
If you go smaller than this, the character features get smaller than 1 pixel, and that's when you can fade the text out or compensate by dilating.
Another test case to better show the rendering quality with a complex font and rotation.
The TMPro shader should also be at least 2x faster than Typogenic's and doesn't require pixel shader derivatives.