Forum Discussion
Becoming
13 years ago · Honored Guest
Multiple layered cameras for large viewing distances...
For our game WSF (a realistic wingsuit simulation) we need a very close near clip plane and a very distant far clip plane. For the non-VR version we solved this by layering three cameras on top of each other:
Near Cam - Nearclip/Farclip: 0.01/2.20 - Depth: 0 - Clear Flags: Depth only
Mid Cam - Nearclip/Farclip: 2.00/5010 - Depth: -1 - Clear Flags: Depth only
Far Cam - Nearclip/Farclip: 5000/25000 - Depth: -2 - Clear Flags: Skybox
(a little overlap is necessary to cover potential gaps)
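The overlap requirement can be sanity-checked numerically. Below is a minimal sketch (the camera names and clip values come from the list above; the helper function name is made up for illustration) that verifies consecutive layers overlap rather than leaving a depth gap:

```python
# Each layered camera covers [near, far); consecutive ranges must overlap
# slightly so no depth gap appears at the seams between layers.
cameras = [
    ("Near Cam", 0.01, 2.20),
    ("Mid Cam", 2.00, 5010.0),
    ("Far Cam", 5000.0, 25000.0),
]

def check_coverage(cams):
    """Return True if each camera's near plane lies inside the previous
    camera's range, i.e. the layers overlap with no gap."""
    cams = sorted(cams, key=lambda c: c[1])  # sort by near clip plane
    for (_, _, prev_far), (_, near, _) in zip(cams, cams[1:]):
        if near >= prev_far:  # gap between this layer and the previous one
            return False
    return True

print(check_coverage(cameras))  # True: 2.00 < 2.20 and 5000 < 5010
```

Pushing the Mid Cam's near clip past 2.20 would make the check fail, which corresponds to exactly the kind of seam that shows up as a missing depth slice in-game.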
In our VR version we want to do the same, but it is a bit tricky. I was thinking I could just parent the Mid and Far Cams to the Near Cam and apply the lens correction only to the cam rendered on top... From what I see, OVR uses render-to-texture for the lens correction, so this approach does not work. Then I thought I could just apply the LC to all cameras, but that does not work either: when looking through the Rift I see everything doubled. I assume the LC needs the camera controller, and the camera controller can only handle 2 cameras.
So my workaround now is to have 2 complete OVR camera controllers layered on top of each other (very similar to the solution above), and it works, but of course it is not an ideal solution. The tracking has to be done twice, and when I move my head quickly I can see the edge of the first OVR camera controller moving at a different speed than the second.
I'm no coder but an artist, so sorry if this is a stupid way to do it... However, if someone here has messed with such a multicam setup and can put me on the right track, I can get the help of a programmer. It would help a lot if I can point him in the right direction. Help is much appreciated! So... is there a better way to do a layered multicam setup than just using multiple camera controllers?
Thanks,
Peter
30 Replies
- Fredz (Explorer): This thread may be relevant, especially at the end:
viewtopic.php?f=37&t=274
Are you going to release a Linux version of Wingsuitflyer, by the way? I'd love to play it. :)
- sh0v0r (Protege):
"Becoming" wrote:
For our game WSF (a realistic wingsuit simulation) we need a very close near clip plane and a very distant far clip plane. For the non-VR version we solved this by layering three cameras on top of each other:
Near Cam - Nearclip/Farclip: 0.01/2.20 - Depth: 0 - Clear Flags: Depth only
Mid Cam - Nearclip/Farclip: 2.00/5010 - Depth: -1 - Clear Flags: Depth only
Far Cam - Nearclip/Farclip: 5000/25000 - Depth: -2 - Clear Flags: Skybox
(a little overlap is necessary to cover potential gaps)
In our VR version we want to do the same, but it is a bit tricky. I was thinking I could just parent the Mid and Far Cams to the Near Cam and apply the lens correction only to the cam rendered on top... From what I see, OVR uses render-to-texture for the lens correction, so this approach does not work. Then I thought I could just apply the LC to all cameras, but that does not work either: when looking through the Rift I see everything doubled. I assume the LC needs the camera controller, and the camera controller can only handle 2 cameras.
So my workaround now is to have 2 complete OVR camera controllers layered on top of each other (very similar to the solution above), and it works, but of course it is not an ideal solution. The tracking has to be done twice, and when I move my head quickly I can see the edge of the first OVR camera controller moving at a different speed than the second.
I'm no coder but an artist, so sorry if this is a stupid way to do it... However, if someone here has messed with such a multicam setup and can put me on the right track, I can get the help of a programmer. It would help a lot if I can point him in the right direction. Help is much appreciated! So... is there a better way to do a layered multicam setup than just using multiple camera controllers?
Thanks,
Peter
I do this in Lunar Flight to ensure you can look down and see your body: I have a Cockpit Camera Controller with very low near and far clips, and a World Camera Controller with a larger near clip to prevent Z-buffer issues on distant objects.
Tracking is only sampled once, from one camera, which I believe from memory is the Right Camera with Depth 0. Any other camera controllers share global orientation data from it.
You only want Lens Correction on the highest-depth cameras, as they are rendered last and perform the final frame buffer distortion.
So what you want is two separate Camera Controllers parented to a shared GameObject. As I mentioned, tracking is only sampled once and all Camera Controllers share the information, so don't parent one Camera Controller to another. Keep them at the same hierarchy level with a local position/rotation of (0,0,0).
- Becoming (Honored Guest): Thanks sh0v0r, this is what we are doing now... I was not sure if it was the best way to do it. I figured out how to apply the LC only to the GUI layer and have it turned off on the lower-depth cams, just as you suggested.
Are you sure that tracking is only done once? If I move my head quickly, I have the impression that the far cam reacts a little later than the near cam. I was talking with the programmer on my team about the OVR-related stuff, and after he looked at the scripts, he said that many things are not done in the best way in terms of performance. It could also just be that the 2nd camera controller is a frame late because of that.
We have also found that the "follow orientation" option in the OVR camera controller has some issues (not usable for orbiting cameras), but I guess I should address that in a separate thread.
@Fredz: I was looking for a thread like the one you kindly linked but did not find anything. Thanks a lot for pointing it out to me! Yeah, we'll release a Rift version of Wingsuitflyer for PC, Mac & Linux. We will charge a little money for it, though; basically it will be a free-to-play title, but we need some support to get there... Supporters will be rewarded somehow; we'll think of something nice. However, the prototype is already very promising and it's quite a unique experience. We are working on the multiplayer part now, and of course on getting the OVR camera setup perfect.
Thanks a lot for the help!!
We will be using a fairly complex camera setup, and I see that we are not the only ones messing with layered cameras. If feedback on the OVR camera controller is needed, we'll gladly share what we've learned and will learn along the way... We would love to contribute somehow to an improved camera controller!
- drash (Heroic Explorer):
"Becoming" wrote:
Are you sure that tracking is only done once? If I move my head quickly, I have the impression that the far cam reacts a little later than the near cam. I was talking with the programmer on my team about the OVR-related stuff, and after he looked at the scripts, he said that many things are not done in the best way in terms of performance. It could also just be that the 2nd camera controller is a frame late because of that.
It should actually be done just once for the OVRCamera with Depth = 0. OVRCamera's SetCameraOrientation() grabs the orientation from the Rift here and then sticks it into a static variable for the rest of the OVRCameras to use (including those from other OVRCameraControllers).
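The sharing pattern described here (one rig samples the tracker once per frame, every other rig reads a cached static field) can be illustrated with a small sketch. This is not SDK code; the class and names below are invented purely to model the behavior:

```python
class CameraRig:
    """Illustrative model of the pattern above: the depth-0 rig samples
    head tracking and caches it in a class-level (static) field that
    every other rig, including rigs of other controllers, reads back."""

    _shared_orientation = None  # plays the role of the SDK's static variable

    def __init__(self, depth):
        self.depth = depth

    def update(self, sample_tracker):
        # Only the depth-0 rig actually polls the tracker hardware.
        if self.depth == 0:
            CameraRig._shared_orientation = sample_tracker()
        return CameraRig._shared_orientation

rigs = [CameraRig(0), CameraRig(2)]
readings = [rig.update(lambda: "yaw=12.5") for rig in rigs]
print(readings)  # both rigs report the same cached orientation
```

Note how the depth-2 rig never calls the tracker itself; it only sees whatever the depth-0 rig last stored, which is why starting the depth numbering anywhere other than 0 can leave the cache stale.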
And actually, that might explain your issue if you have your other cameras at negative depth values! Maybe for now, you could start with depth = 0 and work your way up instead? I don't know if there is a best practice that camera depths must be >= 0, so maybe in the future the SDK could look for the lowest depth and start there or something.
- Becoming (Honored Guest): Thanks drash! We start at depth 0, but we have a GUI layer that has negative depth values... I'll try to put that one at 0 and start from there.
I think, though, that an ideal version of the OVR camera controller would allow attaching more cameras as children of the foremost rendered cams. Maybe we can modify the scripts to achieve this, as each additional camera controller raises the CPU milliseconds quite drastically.
BTW: Love the Titans of Space demo; it's actually my favourite one and always the first I show to people who are new to the Rift :)
- ssshake (Honored Guest): Can you explain what "We start at depth 0" means? I'm not sure I follow. I have both camera controllers at the same level in my shuttle's hierarchy. I disabled scripts on the second one. Head tracking seems to affect both, but when I play the game, I only see the far camera being rendered.
- Knitschi (Honored Guest): This is a little off-topic, but does anyone know if the graphics card vendors are planning to upgrade the Z-buffer to double precision to get rid of all the problems caused by its low precision?
- mzandvliet (Honored Guest): Here's an interesting article from the creator of the Outerra engine, which concludes that it's not so much the floating-point precision that is lacking, but the way in which that precision is used. There's way too much precision in the first couple of meters beyond the near clipping plane, leaving too little for far-away geometry. A different distribution would fix rendering distances beyond 100 kilometers while retaining enough resolution up close.
http://outerra.blogspot.com/2012/11/maximizing-depth-buffer-range-and.html
Note this bit specifically: "The use of floating-point values in depth buffer doesn't bring much if used directly: there's an increased dynamic range close to zero, but since the depth buffer already uses most of the value range in this region, it's not useful at all."
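The precision-distribution point is easy to demonstrate: adjacent float32 values are spaced far more densely near 0.0 than near 1.0, which is why reversing the depth range (mapping far geometry toward 0) recovers precision. A small sketch with a hand-rolled `ulp32` helper (a hypothetical name, not a library function):

```python
import struct

def next_float32(x):
    """Smallest float32 strictly greater than x (for positive finite x)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits + 1))[0]

def ulp32(x):
    """Spacing between x and the next representable float32 value."""
    return next_float32(x) - x

# A standard projection maps far geometry to depth values near 1.0, where
# float32 spacing is coarse; reversed-Z maps far geometry near 0.0, where
# the spacing (and hence depth resolution) is orders of magnitude finer.
print(ulp32(0.999))  # coarse spacing near 1.0 (far objects, classic depth)
print(ulp32(0.001))  # much finer spacing near 0.0 (far objects, reversed-Z)
```

Run as-is, the spacing near 0.001 comes out hundreds of times finer than near 0.999, matching the article's argument that the value-range distribution, not the bit count, is the bottleneck.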
Flipping the depth buffer values to get the most precision far away is also mentioned, and Just Cause 2 seems to do this: http://www.humus.name/index.php?page=Articles (download the "Populating Massive Worlds" presentation).
- sh0v0r (Protege): Can someone from Oculus help with getting a clear answer on whether this should work or not in the latest SDK, and how to set it up?
I have not yet been able to get the correct result but I have got very close. I can see a duplicate in the Left eye that is offset.
Here is how I am currently doing it.
OVRCameraController1 (World Camera, large Nearclip (0.5f))
- CamRight Depth = 0
- CamLeft Depth = 1
- Lens Correction = Off
OVRCameraController2 (Cockpit Camera, Low Nearclip(0.01f))
- CamRight Depth = 2
- CamLeft Depth = 3
- Lens Correction = On
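This per-eye layout relies on cameras being composited in ascending depth order, so lens correction belongs on the two cameras drawn last. A small sketch restating that invariant (the names and flags simply echo the list above):

```python
# (name, depth, lens_correction) — cameras render in ascending depth order,
# so the cockpit eyes (depths 2 and 3) are composited on top of the world
# eyes, and only they should run the final distortion pass.
cameras = [
    ("World CamRight",   0, False),
    ("World CamLeft",    1, False),
    ("Cockpit CamRight", 2, True),
    ("Cockpit CamLeft",  3, True),
]

ordered = sorted(cameras, key=lambda c: c[1])
render_order = [name for name, _, _ in ordered]
lens_on_last_two = all(lc for _, _, lc in ordered[-2:])
print(render_order)
print(lens_on_last_two)  # True: distortion runs only on the final pair
```

If the depths of the two controllers were interleaved instead of stacked, the "distortion last" invariant would break, which is one plausible source of the offset duplicate described above.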
How should the other components on the Camera Controller be setup?
Does the 'World' camera need the Camera component, since it is not applying the barrel distortion?
- vrdaveb (Oculus Staff): Your camera setup looks good, sh0v0r. We have seen cases where the World camera's output gets dropped when using image effects, HDR, or deferred lighting on the World camera rig, but they aren't 100% consistent. You may need to enable "Use Camera Texture" on the OVRCameraController. If that has no effect, I may be able to send a code change that splits the RenderTextures up on a per-eye basis like we did in 0.3.1. The World camera GameObject with the OVRCameraController on it should not have an active OVRDistortionCamera or Camera component. You can just disable the ones that are already there.