Forum Discussion

🚨 This forum is archived and read-only. To submit a forum post, please visit our new Developer Forum. 🚨
abp_dk
Honored Guest
10 years ago

How to combine/composite cameras in a VR scene?

I am working on a scene that renders selected parts of the scene in high detail via viewport clipping. I am doing this using two cameras: one renders the full scene in low detail and the other renders the center of the viewport in high detail. The two cameras' RenderTextures are then combined using a 'combine depth' shader.



I am using the Oculus DK2, Unity 5.2.3f1 and the Oculus Utils v.0.1.3.0. Render path is 'forward' and using non-HDR cameras.

Currently I am doing all of the above by placing a custom image effect on the 'CenterEyeAnchor' camera. I have grown unsure whether this is the correct approach, as I need to take a few odd steps to make it work. For example, I need to double the height of the involved RenderTextures:

line 32 of CombineDepth.cs

// TODO: Why do we have to multiply height by 2? Due to stereoscopic display?
_FarRT = new RenderTexture(Screen.width, Screen.height * 2, 16,
                           RenderTextureFormat.ARGB32);
_NearRT = new RenderTexture(Screen.width, Screen.height * 2, 16,
                            RenderTextureFormat.ARGB32);
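For context, the overall image-effect approach described above can be sketched as follows. This is a hypothetical skeleton, not the actual project code: the camera fields, the `combineMat` material, and the `_NearTex` shader property name are assumptions.

```csharp
using UnityEngine;

// Hypothetical sketch: render low- and high-detail views into
// RenderTextures, then composite them in OnRenderImage.
[RequireComponent(typeof(Camera))]
public class CombineDepth : MonoBehaviour
{
    public Camera farCamera;    // low detail, full scene
    public Camera nearCamera;   // high detail, clipped center
    public Material combineMat; // the 'combine depth' shader

    RenderTexture _FarRT, _NearRT;

    void Start()
    {
        _FarRT = new RenderTexture(Screen.width, Screen.height, 16,
                                   RenderTextureFormat.ARGB32);
        _NearRT = new RenderTexture(Screen.width, Screen.height, 16,
                                    RenderTextureFormat.ARGB32);
        farCamera.targetTexture = _FarRT;
        nearCamera.targetTexture = _NearRT;
    }

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        // Feed the near image to the compositing shader and blit the
        // far image through it ("_NearTex" is an assumed property name).
        combineMat.SetTexture("_NearTex", _NearRT);
        Graphics.Blit(_FarRT, dst, combineMat);
    }
}
```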


My intuition tells me that I need to do this camera compositing for both the left and right camera and then let the OVR package combine the result. That would require some manual modifications to the OVR package scripts. Before I dive into that, I'd like to hear whether anybody else has experience with this.

So my questions are:

  • What is the correct procedure for doing camera compositing/combining when doing VR in Unity using the OVR package from Oculus?

  • Is it possible to do camera compositing/combining in VR using the 'integrated VR support' in Unity?


The project can be downloaded from here

2 Replies

Replies have been turned off for this discussion
  • "abp_dk" wrote:
    Is it possible to do camera compositing/combining in VR using the 'integrated VR support' in Unity

    Unity 5 treats the VR eye buffers similarly to the non-VR backbuffer. To reduce latency, it also updates the eye cameras' transforms on the render thread, just before submitting commands to the graphics driver. In the future, we may add support for per-Camera renderscale, which would make it easier to do what you're proposing. Today, you can render to multiple RenderTextures, but there isn't a way to make the low-resolution images line up perfectly with the high-resolution ones. There will be some slight wobble in the low-resolution images (which might not be very noticeable). Screen.width and Screen.height correspond to Unity's main-monitor window, not the eye buffers. So the low-resolution eye buffer(s) resolution should be completely independent of those. I would suggest using a square texture like 512 x 512. But make sure to use the same FOV and aspect ratio as the VR Camera (the one where targetTexture == null). You can then blit the RenderTexture(s) to the eye buffers using Graphics.Blit or Graphics.DrawMeshNow.

    "abp_dk" wrote:
    What is the correct procedure for doing camera compositing/combining when doing VR in Unity using the OVR package from Oculus?

    OVRCameraRig is a layer on top of Unity's built-in VR support, so the same recommendation applies. You might have more luck splitting the scene into near and far content, where one VR camera renders the near content at a high LOD and another VR camera renders the far content at a lower LOD. The far camera renders first and both cameras target the same eye buffers, without using any special RenderTextures. An example is in our SDK examples under "Layered Cameras". This is currently the only way to get both cameras to line up without significant wobble.
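The near/far layered-camera setup described in the reply above can be sketched like this. The script name, camera fields, and layer names are assumptions for illustration; only the Camera properties come from the standard Unity API.

```csharp
using UnityEngine;

// Sketch of the near/far layered-camera setup: both cameras render
// straight into the VR eye buffers, far content first.
public class LayeredCameras : MonoBehaviour
{
    public Camera farCamera;  // renders distant content at a lower LOD
    public Camera nearCamera; // renders close content at a high LOD

    void Start()
    {
        // No special RenderTextures: targetTexture == null means both
        // cameras draw directly into the VR eye buffers.
        farCamera.targetTexture = null;
        nearCamera.targetTexture = null;

        // The far camera renders first and clears the whole frame.
        farCamera.depth = 0;
        farCamera.clearFlags = CameraClearFlags.Skybox;
        farCamera.cullingMask = LayerMask.GetMask("FarContent");

        // The near camera renders on top, clearing only depth so the
        // far image shows through where no near geometry is drawn.
        nearCamera.depth = 1;
        nearCamera.clearFlags = CameraClearFlags.Depth;
        nearCamera.cullingMask = LayerMask.GetMask("NearContent");
    }
}
```

Because both cameras share the eye buffers and are driven by the same head pose, their images stay aligned without the wobble that separate RenderTextures would introduce.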
  • abp_dk
    Honored Guest
    Thanks for your feedback @vrdaveb

    "vrdaveb" wrote:
    Unity 5 treats the VR eye buffers similarly to the non-VR backbuffer. To reduce latency, it also updates the eye cameras' transforms on the render thread, just before submitting commands to the graphics driver. In the future, we may add support for per-Camera renderscale, which would make it easier to do what you're proposing. Today, you can render to multiple RenderTextures, but there isn't a way to make the low-resolution images line up perfectly with the high-resolution ones. There will be some slight wobble in the low-resolution images (which might not be very noticeable). Screen.width and Screen.height correspond to Unity's main-monitor window, not the eye buffers. So the low-resolution eye buffer(s) resolution should be completely independent of those. I would suggest using a square texture like 512 x 512. But make sure to use the same FOV and aspect ratio as the VR Camera (the one where targetTexture == null). You can then blit the RenderTexture(s) to the eye buffers using Graphics.Blit or Graphics.DrawMeshNow.

    I'll try a new implementation approach based on your feedback and see where that takes me. Will post my results here.

    "vrdaveb" wrote:
    OVRCameraRig is a layer on top of Unity's built-in VR support, so the same recommendation applies. You might have more luck splitting the scene into near and far content, where one VR camera renders the near content at a high LOD and another VR camera renders the far content at a lower LOD. The far camera renders first and both cameras target the same eye buffers, without using any special RenderTextures. An example is in our SDK examples under "Layered Cameras". This is currently the only way to get both cameras to line up without significant wobble.

    Sadly, a distance-based LOD is not quite what I am looking for. My implementation has very specific demands that require me to reduce the rendering area of one camera using viewport clipping or a similar view-frustum modification.
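For reference, the kind of viewport clipping mentioned above can be expressed with Camera.rect, which restricts a camera to a normalized sub-rectangle of its render target. This is a hypothetical sketch, not the thread's project code, and whether it interacts cleanly with Unity 5's VR eye buffers is exactly the open question here.

```csharp
using UnityEngine;

// Hypothetical sketch: restrict the high-detail camera to the centered
// quarter of the view using normalized viewport coordinates (0..1).
public class CenterViewportClip : MonoBehaviour
{
    public Camera highDetailCamera;

    void Start()
    {
        // Render only a half-width, half-height region in the center.
        highDetailCamera.rect = new Rect(0.25f, 0.25f, 0.5f, 0.5f);
    }
}
```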