Forum Discussion
davehampson
Honored Guest · 11 years ago
Distributed Timewarped Rendering - An Idea
(Crossposting from here: http://forum.unity3d.com/threads/distributed-timewarped-rendering-an-idea.290024/ - let me know your thoughts in either place!)
I had an idea this morning for a new type of VR scene rendering which uses two devices. I haven't written any code for this yet, but I find the concept extremely exciting. The idea draws on a few things:
- John Carmack's Timewarp (obviously)
- The fact that Gear VR is very good at playing 360 degree video content, but the bandwidth required is too high to stream it over the internet
- Steam's Home Streaming technology (which proved to me that you can have low latency gaming over Wifi at 720p and beyond)
- Nate Mitchell talking at CES about how Wireless HDMI "isn't quite there yet"
So the basic idea is this (a rough sketch of the client loop follows the list):
- User launches VR 'server' experience on PC
- VR server opens up a TCP port
- PC starts rendering to an internal cubemap image (at say 6x2048x2048) at the player's position
- PC also renders a depthmap for each face
- User launches VR 'client' on (say) Gear VR
- Client connects to server and starts to receive 6x2048x2048 colour+depth video stream
- Client renders it, but uses the updated head position and orientation to Timewarp the image to be exactly correct
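To make that loop concrete, here's a minimal sketch of the client side in Python. To be clear, everything specific here is made up for illustration: the port number, the 28-byte pose packet, the frame header, and the get_head_pose / upload_and_timewarp hooks are all hypothetical, since no such protocol exists yet.

```python
import socket
import struct

FACE_BYTES = 2048 * 2048 * 4  # one 2048x2048 face, 4 bytes/pixel (RGBA8, or 32-bit float depth)

def get_head_pose():
    # Hypothetical hook: a real client would read the HMD tracker here.
    # Returns position xyz + orientation quaternion xyzw.
    return (0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0)

def upload_and_timewarp(face_index, colour, depth):
    # Hypothetical hook: upload to the GPU and timewarp against the
    # *current* head pose, not the pose the server rendered from.
    pass

def recv_exact(sock, n):
    # TCP recv() can return short reads; loop until we have n bytes.
    buf = bytearray()
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("server closed the stream")
        buf.extend(chunk)
    return bytes(buf)

def run_client(host, port=9943):
    sock = socket.create_connection((host, port))
    while True:
        # Send the latest head pose so the server renders from roughly
        # the right position (7 floats = 28 bytes, little-endian).
        sock.sendall(struct.pack("<7f", *get_head_pose()))
        # Receive one colour+depth cubemap face: a 4-byte face index,
        # then raw pixel data (a real protocol would compress this).
        (face_index,) = struct.unpack("<i", recv_exact(sock, 4))
        colour = recv_exact(sock, FACE_BYTES)
        depth = recv_exact(sock, FACE_BYTES)
        upload_and_timewarp(face_index, colour, depth)
```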
The key thing here is that the bulk of the rendering happens on a high-powered device, while the final timewarp on the Android device keeps latency low. So the mobile device (with limited GPU power) is using all its grunt simply timewarping the cubemap to look correct, at the highest possible framerate. It may also be possible to clock it down to reduce overheating. It also means that (as with standard Timewarp) the PC GPU doesn't have to hit 60-90fps: it could quite easily run at 30fps, and as long as the player didn't move too fast, it would still look largely correct.
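To be explicit about what that timewarp step does with the depth: each pixel's world-space point can be reconstructed from the server's pose plus the depth value, then re-projected with the client's current pose. Here's a small numpy sketch of that per-pixel maths (in practice it would live in a shader); it assumes an OpenGL-style eye space with -Z forward and rotation matrices that map eye space to world space.

```python
import numpy as np

def reproject_point(ray_dir, depth, server_pos, client_pos, client_rot):
    # ray_dir: unit view ray from the server eye (the cubemap lookup
    # direction); depth: distance along that ray; client_rot: 3x3
    # eye-to-world rotation for the *current* head orientation.
    world = server_pos + ray_dir * depth        # reconstruct world point
    view = client_rot.T @ (world - client_pos)  # into current eye space
    return view[:2] / -view[2]                  # perspective divide (-Z forward)

# Example: a point 2m straight ahead shifts right on screen when the
# head turns 10 degrees to the left (the world stays put).
theta = np.radians(10.0)
turn_left = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                      [0.0, 1.0, 0.0],
                      [-np.sin(theta), 0.0, np.cos(theta)]])
print(reproject_point(np.array([0.0, 0.0, -1.0]), 2.0,
                      np.zeros(3), np.zeros(3), turn_left))
```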
What's one of the potential problems with this technique? Well, the Wifi connection is going to be on all the time, which could flatten the battery quickly.
What would be the next step? Well, I think step one would be for someone to render a 6x2048x2048 cubemap with colour+depth and try displaying it on a headset with Timewarp.
The next step would be to create an agreed protocol for the TCP connection: ideally it would have these properties:
- An open protocol so anyone can implement it or support it (it could even be retrofitted into an existing PC game)
- Video stream for 6x2048x2048
- Expandable for different video compression techniques
- Support for updating just a portion of the cubemap instead of the whole thing (e.g. if the user is currently facing 'North' you can probably omit transmitting the 'South' face, since they aren't going to be able to move their head 90 degrees in under 50 ms). Maybe even just send 3 cubemap faces, or 1 cubemap face and a portion of 3 or 4 others (see the face-culling sketch after this list).
- Communication from client -> server on head position and orientation
- Communication from client -> server on joypad control state
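On the partial-update point, here's one possible face-culling heuristic in numpy: keep only the faces whose normals aren't pointing mostly away from the current view direction. The -0.5 threshold is an arbitrary illustrative choice; with it, the face directly behind the user always drops out.

```python
import numpy as np

FACE_NORMALS = {
    "+X": np.array([1.0, 0.0, 0.0]), "-X": np.array([-1.0, 0.0, 0.0]),
    "+Y": np.array([0.0, 1.0, 0.0]), "-Y": np.array([0.0, -1.0, 0.0]),
    "+Z": np.array([0.0, 0.0, 1.0]), "-Z": np.array([0.0, 0.0, -1.0]),
}

def faces_to_send(forward, keep_threshold=-0.5):
    # Drop any face whose normal points mostly away from the view
    # direction; the 'South' face behind the user goes first.
    return [name for name, normal in FACE_NORMALS.items()
            if float(forward @ normal) > keep_threshold]

# Facing -Z ('North'): the +Z ('South') face is skipped entirely,
# leaving 5 faces to stream.
print(faces_to_send(np.array([0.0, 0.0, -1.0])))
```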
The next step would be to implement that in a game engine. Imagine if this could be a Unity Prefab that people could import and drop into their scene: it would instantly turn any game into a cable-free Gear VR experience.
I thought "Distributed Timewarped Rendering" sounded like a good name for it, since it conveys the idea that the Timewarp process is happening not just in different parts of the engine, but also on different physical GPUs.
Another advancement would be to build on the idea of Distributed Timewarp and make the distribution more complex, for example:
- World: PC GPU, Tonemapping: PC GPU, Characters: Android GPU, HUD: Android GPU, Shadows: Android GPU
- World: PC GPU, Tonemapping: PC GPU, Characters: PC GPU, HUD: Android GPU, Shadows: Android GPU
Or even rendering multiple cubemap faces on different physical PCs. Or the reverse situation: allowing multiple headsets to connect as clients to the same PC game and serving multiple cubemaps, for a great multiplayer experience.
I'm starting to think that this technique could end up being a kind of 'Deferred Rendering for VR'. It's interesting that it suffers from some of the same problems (e.g. transparency doesn't work).
Thoughts? Has anyone done any of this already?
*edit* Just realised that this could also be used in exactly the same way for audio as for visuals: the HRTF function for each sound source could be calculated on the PC, on the Android device, or both. Is there a way to 'timewarp' a previously HRTF-ed sound for a slight change in orientation? Probably not yet, but it's an interesting concept!
2 Replies
- lamour42 · Expert Protege
If I understand correctly you want to use a server-supplied 3D video stream plus depth information to fake rendering a 3D scene on the client.
I don't think this will work if you have anything dynamic in your scene: if either you move (change position in the 3D world, not just look around) or something in the 3D scene moves, the depth information you have won't help you render a correct scene.
Example: There is an object in front of you obscuring the scene behind it. It is a fast-moving object, so the 'timewarp' should be able to accommodate its movement. Having a video stream and depth information does not help: the client has no way of knowing that something is moving. Even if the client knew, how could it update the image? It wouldn't have any info on how to display something that was obscured. The depth info would only tell it that there is something in front of the player, but not what may be behind it.
- davehampson · Honored Guest
"lamour42" wrote:
If I understand correctly you want to use a server-supplied 3D video stream plus depth information to fake rendering a 3D scene on the client.
I don't think this will work if you have anything dynamic in your scene: if either you move (change position in the 3D world, not just look around) or something in the 3D scene moves, the depth information you have won't help you render a correct scene.
Yeah, that is a flaw: anything which moves that isn't the player will run at the framerate of the PC (say 30fps). Would it look jarring to have some elements running at 60-90fps and some at 30fps? I don't know, maybe. We've had games in the past which updated things like cubemaps at a slower rate. Of course, there is also the possibility of sending over a velocity buffer and getting the Android client to tween pixels forward (think of this kind of technique: http://www.eurogamer.net/articles/digitalfoundry-force-unleashed-60fps-tech-article http://and.intercon.ru/releases/talks/rtfrucvg/ )
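To make that tweening idea concrete, here's a toy numpy sketch that scatters each pixel forward along its screen-space motion vector to extrapolate an in-between frame. A real implementation (like the ones linked above) would do this in a shader and fill the holes the scatter leaves behind; this version just falls back to the stale frame there.

```python
import numpy as np

def tween_forward(colour, velocity, t):
    # colour: HxWx3; velocity: HxWx2 in pixels per server frame;
    # t in [0, 1] is how far we are between server frames.
    h, w = colour.shape[:2]
    out = colour.copy()  # stale pixels remain wherever nothing lands
    ys, xs = np.mgrid[0:h, 0:w]
    nx = np.clip((xs + velocity[..., 0] * t).astype(int), 0, w - 1)
    ny = np.clip((ys + velocity[..., 1] * t).astype(int), 0, h - 1)
    out[ny, nx] = colour[ys, xs]  # forward scatter along motion vectors
    return out
```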
However the key thing here is that you have a z-buffer, so you are free to add to the scene on the 'client'. For example you could choose to render highly detailed backdrops on the PC 'server' and then add all the fast moving objects (bullets, players, particles) on the client. Or maybe you render all transparent objects on the 'client'.
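As a concrete illustration of that, the streamed z-buffer is what lets the client depth-test its own objects against the server's scene per pixel. A small numpy sketch, assuming smaller depth values are nearer:

```python
import numpy as np

def composite(server_rgb, server_z, local_rgb, local_z):
    # Per-pixel depth test: the locally rendered object (bullet,
    # particle, HUD element) wins only where it sits in front of
    # the streamed scene.
    local_in_front = local_z < server_z
    rgb = np.where(local_in_front[..., None], local_rgb, server_rgb)
    z = np.minimum(local_z, server_z)
    return rgb, z
```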
Of course this gets incredibly complex at this point because you are effectively running a "1-player" game as a multiplayer LAN game, and rendering some objects on the PC and some objects on the Android headset.
It's an idea anyway. Will it turn out to be important for VR rendering over the next 5 years? Is it far too complex to even consider? Who knows, maybe!