Forum Discussion
hacKTorVR
11 years ago · Honored Guest
Dynamic resolution: Camera.rect.x and y VS "Scale Render"
SDK 0.4.2 beta
Hi there,
I implemented a kind of dynamic resolution script that adjusts the "resolution" of the two eye cameras.
First I tried to do this via "Scale Render" on OVRCameraController, which didn't work at all. In my case Scale Render DID reduce resolution, but by rendering a smaller rectangle of the original rect, and not even the center of the original rect but the bottom left corner. I don't know if this behavior is intended, but I at least couldn't make sense of it or somehow "recenter and zoom out" to keep the same view (see screenshot).
http://www.fierythings.com/games/Gooze/GoozeDemo_ScaleRender.jpg
BUT, what I came across was that adjusting the rect.x and rect.y properties of the two cameras has exactly the effect I wanted. I must say, I have NO clue why the position of these viewports should affect the resolution of the cameras, but I'm guessing it has something to do with how the Unity integration renders to rendertextures and then composites everything back together.
So in my case, dynamically pushing the cameras' rect.x/y values towards 1.0 reduces resolution, and at 0.0 you basically have the original resolution. I use an FPS threshold, so my script always aims for the best balance of resolution and quality, though I also capped the value at 0.5 so the resolution won't drop below a certain amount. And it only performs a resolution change every 0.5 s, so you don't lose too many FPS to the overhead. This works quite well, and the quality difference is almost not visible at all.
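A minimal sketch of what such a script might look like. All names here are hypothetical, and it assumes the two eye cameras are assigned in the Inspector, a 75 FPS target, and the 0.5 s update interval and 0.5 cap described above; it is an illustration of the idea, not the author's actual script:

```csharp
using UnityEngine;

// Hypothetical sketch of the dynamic-resolution trick described above.
// With the 0.4.x Unity integration in deferred rendering, pushing
// rect.x/rect.y towards 1.0 appears to reduce the rendered resolution;
// 0.0 restores the original resolution.
public class DynamicEyeResolution : MonoBehaviour
{
    public Camera leftEye;          // assign the two OVR eye cameras in the Inspector
    public Camera rightEye;
    public float targetFPS = 75f;   // DK2 refresh rate
    public float interval = 0.5f;   // only adjust twice per second
    public float maxShift = 0.5f;   // never push rect.x/y beyond 0.5
    public float step = 0.05f;

    float shift;                    // current rect.x/y offset, 0 = full resolution
    int frames;
    float elapsed;

    void Update()
    {
        frames++;
        elapsed += Time.deltaTime;
        if (elapsed < interval) return;

        float fps = frames / elapsed;
        frames = 0;
        elapsed = 0f;

        if (fps < targetFPS)
            shift = Mathf.Min(shift + step, maxShift);   // drop resolution
        else
            shift = Mathf.Max(shift - step, 0f);         // restore resolution

        Apply(leftEye);
        Apply(rightEye);
    }

    void Apply(Camera cam)
    {
        Rect r = cam.rect;
        r.x = shift;   // note: if an eye camera has a non-zero base offset,
        r.y = shift;   // that offset would need to be preserved; kept simple here
        cam.rect = r;
    }
}
```

The cap at `maxShift = 0.5` mirrors the limit mentioned above, so resolution never drops past the point where the quality loss becomes obvious.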
Well, I wanted to share this with you guys. Maybe someone else has an opinion on this or has had different experiences? Also, I don't know if my solution will break at some point because of SDK changes?
Cheers
6 Replies
Replies have been turned off for this discussion
- drash (Heroic Explorer):
Hi, thank you for sharing this! Quite an interesting find. Are you using deferred rendering and/or image effects? I recently ran into the same bad Scale Render behavior you described, and tracked it down to using either deferred rendering or any image effects.
But you're probably right, I'm not sure you can rely on it to keep working as the SDK gets updated, but it's a great temporary measure.
One question -- how are you adjusting the dynamic resolution back up if we're pretty much locked in with vsync at this point? Sure, you'll see a dip below 75 and reduce resolution, but if it goes back up to 75 it of course just stops there, so how do you know how far you can increase the resolution? I've tried measuring CPU and deducing GPU, but I'm only able to figure out whether a program is CPU- and/or GPU-bound, not determine the actual uncapped FPS.
- hacKTorVR (Honored Guest):
Yes, I am using deferred rendering, though no image effects for now.
And yes, I can confirm that Scale Render seems to work as expected in forward rendering (once I had my script, it was fairly easy to also implement a "scale render mode" for forward rendering), though it does glitch when changing the value (editor: very strongly; build: probably acceptable, but still slightly noticeable). I'm guessing it comes from recentering/positioning the adjusted rendertextures.
That's also why the same glitch doesn't happen in deferred when using the camera.rect.x/y method, as the rendertextures don't seem to need recentering. But … of course (!) … camera.rect.x/y only works in deferred and NOT in forward. In the latter you can actually see the rendertextures scaling up and down towards the top right corner.
To your question about scaling the resolution back up: I am using an FPS tracking method almost identical to the one in OVRMainMenu, with an update interval of 0.5 s. I'm not sure how these values really relate to the vsynced reality, but at least in my case, on a Mac, I do get FPS values greater than 75 this way. Though I wait out at least one extra interval before scaling up again, to reduce overhead on spikes.
- owenwp (Expert Protege):
Most likely the problem is that Unity does not use viewports at all in deferred rendering mode. Instead it allocates new buffers at the desired viewport size, then resets the viewport to fullscreen, and after all rendering is complete it composites the buffer onto the backbuffer at the desired position.
This means that not only does it not do what the Oculus SDK requires, but changing the viewport size also allocates GPU and system memory, causing very noticeable stalling. Doing it frame by frame is impractical.
- hacKTorVR (Honored Guest):
So are you saying that using deferred rendering in general is not the way to go with the Oculus SDK, or just that "changing the viewport size is bad" (whatever exactly that means in practical terms)?
If it's the latter, changing just the position (aka camera.rect.x/y) should actually be a "good" thing compared to changing the viewport's size, which is what I think Scale Render does? Though I'm guessing that since camera.rect.x/y somehow affects the resolution in deferred rendering, it must still be resizing the viewports in some way.
And yes, it's true that doing this on every frame has a more than significant impact on the overall frame rate, but doing it at an interval of, say, 0.5 s (like I wrote before) certainly does a good job of balancing overhead against the reduction of judder.
Of course it's always best to design everything so that no judder occurs in the first place, but in some cases that's just very hard and time consuming, or simply not feasible.
Anyway thanks for the explanation about how Unity handles things differently in deferred rendering!
Off topic: Btw, I'm looking forward to receiving my STEM system! ;)
- owenwp (Expert Protege):
Deferred works with the Oculus SDK as long as the render scale isn't changed (plus some other caveats they have worked around, like off-center projection not working). No matter how infrequently you change the viewport, you will almost certainly drop frames and introduce more judder. Having it run smooth most of the time and only judder at some predictable interval is almost worse than constant judder.
And we can't wait to ship them!
- hacKTorVR (Honored Guest):
This all sounds very reasonable. But I swear, my setup (OS X 10.9.5, Unity 4.5, OVR SDK 0.4.2 beta, MacBook Pro) in deferred rendering with adjusted camera.rect.x/y DEFINITELY improves the experience over NOT using this approach. I can't say whether it actually alters the viewports like you described, but it certainly doesn't add extra judder. Like I wrote before, with forward rendering and adjusting Scale Render you notice extremely slight "jumps", due to the repositioning of the rendertextures (I'm guessing), but even that is better than actual judder from a significant framerate drop in a more intensive render situation.
Did you give camera.rect.x/y a try? Because I tried it with around 7 people and asked everybody if they experienced any judder, and they didn't :D
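The buffered scale-up behaviour discussed in this thread (reducing resolution immediately on a framerate dip, but waiting out one extra good interval before raising it again, to avoid reacting to spikes) could be sketched as standalone decision logic. This is a hypothetical illustration, not code from the thread:

```csharp
// Hypothetical helper isolating the "wait one extra interval before
// scaling up" idea: scale down immediately when FPS drops below target,
// but only scale back up after two consecutive good intervals.
public class ScaleUpBuffer
{
    readonly float targetFPS;
    int goodIntervals;

    public ScaleUpBuffer(float targetFPS) { this.targetFPS = targetFPS; }

    // Called once per measurement interval with the average FPS over
    // that interval. Returns -1 to reduce resolution, +1 to increase
    // it, and 0 to hold the current resolution.
    public int Decide(float fps)
    {
        if (fps < targetFPS)
        {
            goodIntervals = 0;
            return -1;              // react to drops immediately
        }
        goodIntervals++;
        if (goodIntervals >= 2)     // one extra good interval buffered
        {
            goodIntervals = 0;
            return +1;
        }
        return 0;
    }
}
```

The asymmetry (fast down, slow up) trades a little resolution for stability: a one-off spike costs at most one reduction step instead of triggering an up/down oscillation.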