Halve your rendering load?

Leonard_Powers
Protege
Following the "eye-Patch" discussion on Reddit, I made a suggestion that could either be genius or simply nausea-inducing.

Rendering two viewpoints for every screen update is very processor-intensive.
So why not skip one? Render only one eye per frame, alternating between them. Think of it as "Asynchronous Low Persistence".

Every frame you send, you render only one eye and leave the other black.
On the next frame, you render the other eye's viewpoint.
Obviously, this would only work at 75 fps.
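
Roughly, the schedule I'm imagining looks like this (just a sketch; every name here is made up, none of it is real SDK API):

#include <cstdio>

enum class Eye { Left, Right };

// Stand-in for the app's per-eye render pass.
static void renderEye(Eye e) {
    std::printf("rendered %s eye\n", e == Eye::Left ? "left" : "right");
}

int main() {
    // Assuming a 75 Hz display: frame parity picks which eye gets fresh
    // pixels; the other is left black, so each eye updates at 37.5 Hz
    // while the panel still flips at the full 75 Hz.
    for (unsigned frame = 0; frame < 4; ++frame) {
        Eye fresh = (frame % 2 == 0) ? Eye::Left : Eye::Right;
        renderEye(fresh);   // roughly half the normal per-frame GPU cost
    }
}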

Please don't reply to this post with opinions on why you think it won't work.
I'd be interested to see if someone could implement and test this theory.
6 REPLIES

rjoyce
Honored Guest
I'm fairly sure this has been discussed on here before. I believe someone (from Oculus, maybe) chimed in saying "Tried this, made everyone sick" or something to that effect.

It seems like it would be a lot of work to do using the SDK rendering, though someone familiar with the SDK source could probably add the proper hooks to support it.

A naive way of doing it with SDK rendering would be to cycle through the textures/head poses and "hold on" to the last one for the non-rendered eye. Timewarp might actually help, since it could re-distort the old texture to a better position, or it may make things puke-inducing, since it warps the older eye's image much further than the new one.
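
To make that concrete, here's roughly what I have in mind against the 0.4-era LibOVR SDK rendering path. I'm going from memory on the SDK calls, so treat the signatures as approximate; renderSceneForEye() is just a stand-in for the app's own render pass:

#include "OVR_CAPI.h"

// Hypothetical app function: draws the scene for one eye into that
// eye's texture using the given pose.
extern void renderSceneForEye(ovrEyeType eye, const ovrPosef& pose);

static ovrPosef lastRenderedPose[2]; // pose each eye texture was last drawn with
static unsigned frameIndex = 0;

void drawOneEyePerFrame(ovrHmd hmd,
                        ovrVector3f hmdToEyeViewOffset[2],
                        ovrTexture eyeTextures[2])
{
    ovrHmd_BeginFrame(hmd, frameIndex);

    ovrPosef freshPoses[2];
    ovrHmd_GetEyePoses(hmd, frameIndex, hmdToEyeViewOffset, freshPoses, NULL);

    // Alternate eyes: even frames render the left eye, odd frames the right.
    // The other eye's texture is simply not touched this frame.
    ovrEyeType eyeToRender = (frameIndex & 1) ? ovrEye_Right : ovrEye_Left;
    renderSceneForEye(eyeToRender, freshPoses[eyeToRender]);
    lastRenderedPose[eyeToRender] = freshPoses[eyeToRender];

    // Hand the SDK the pose each texture was *actually* rendered with, so
    // timewarp re-distorts the stale eye further than the fresh one.
    ovrHmd_EndFrame(hmd, lastRenderedPose, eyeTextures);

    ++frameIndex;
}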

Leonard_Powers
Protege
Thanks for the reply. I have a feeling this might not work either (saving 50% sounds too good to be true), but seeing as we're in uncharted territory, it would be good to "know" rather than assume 🙂

rjoyce
Honored Guest
Here's another thread on this topic:
https://developer.oculusvr.com/forums/viewtopic.php?t=13161

I'm sure there have been others, but the one I'm thinking of may have been pre-timewarp.

Looks like it might be possible. I'm not sure if jherico has actually tried it, but in that thread he seems to think it won't be instantly sickness-inducing.

I think if you're not developing a game where you need/expect rapid head movements as input, then this may actually work.

I'm going to try this, maybe later this week. I'll let you know how it works.

Anonymous
Not applicable
I think it can sort of work for camera rotation by timewarping the last half-frame, but it won't really work for translation movements.
There are also going to be problems with occlusion.

In fact, I have been thinking about it quite a bit, and what we might need is a way to render one eye, transform it to the other view as timewarp does, determine the unoccluded zones, and render only those, so we really render one eye's view plus little bits of the other. Or we could kind of render both eyes at the same time and then apply one transform per eye that also discards the pixels not meant for that eye, but I don't know if there is a reasonable way of doing that. "Space-time warp" would be a kick-ass name for it.
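
As a very rough CPU-side sketch of that one-eye-plus-reprojection idea (assuming parallel stereo cameras with valid depths everywhere; all the names here are mine, nothing is SDK API):

#include <cmath>
#include <limits>
#include <vector>

struct Reprojected {
    std::vector<float> color; // grayscale stand-in for the right-eye image
    std::vector<bool>  hole;  // true = disoccluded, still needs a real render
};

// Forward-map each left-eye pixel into the right eye using its depth.
// For parallel cameras, a point at depth z shifts left in the right eye
// by the disparity d = focalPx * ipd / z (in pixels). Whatever nothing
// maps to is a "hole" -- the "little bits of the other eye" that would
// still need rendering.
Reprojected reprojectLeftToRight(const std::vector<float>& leftColor,
                                 const std::vector<float>& leftDepth,
                                 int w, int h, float ipd, float focalPx)
{
    Reprojected out{std::vector<float>(w * h, 0.0f),
                    std::vector<bool>(w * h, true)};
    std::vector<float> zbuf(w * h, std::numeric_limits<float>::max());
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float z = leftDepth[y * w + x];      // assumed > 0
            int   d = int(std::lround(focalPx * ipd / z));
            int  xr = x - d;                     // disparity shift toward the right eye
            if (xr < 0 || xr >= w) continue;     // fell off the image
            int i = y * w + xr;
            if (z < zbuf[i]) {                   // nearest surface wins (occlusion)
                zbuf[i]      = z;
                out.color[i] = leftColor[y * w + x];
                out.hole[i]  = false;            // covered, no re-render needed
            }
        }
    }
    return out;
}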

wthlee
Honored Guest
Bad idea...

It's just what shutter glasses did, and we all remember how you needed a 120 Hz refresh rate just to get 60 fps per eye. (And we barely manage to get close to 100 Hz on OLEDs; I think the current HDMI standard is also a limiting factor.)

With both perspectives rendered at once, you halve the refresh rate required to reach at least 60 fps per eye.

Leonard_Powers
Protege
"rjoyce" wrote:
I think if you're not developing a game where you need/expect rapid head movements as input, then this may actually work.

VR chess or card games on GearVR, for example, could really benefit from this.
Let me know how you get on 🙂