Forum Discussion

benvoss
Honored Guest
10 years ago

Highest quality 360 video in Rift and Gear VR created with Unity, what is possible, what do we know?

Honestly, I am confused when it comes to the state of 360 video (let's talk monoscopic for now, for the sake of this discussion). We use the Rift CV1 at live events and Gear VR for field use with our clients. So far we have been riding the novelty VR wave very well (pharma and education), but lately we are seeing pushback and concerns, especially about video quality. We currently develop all our solutions in Unity, using AVPro to project a 4k equirectangular video file onto a sphere. In the past we have also successfully used Easy Movie Texture. Video plays back fine with both solutions, but the fact remains that I am effectively zooming into a 4k video, with less than 720p of quality remaining in front of my eyes. I am talking especially about text within videos, which always looks blurry and pixelated.
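For context, here is roughly what our current setup boils down to (a simplified sketch only; I'm using Unity's built-in VideoPlayer here instead of AVPro, and all names are illustrative):

```csharp
using UnityEngine;
using UnityEngine.Video;

// Simplified sketch of our current 360 playback setup:
// an equirectangular video rendered onto the inside of a sphere.
// (We actually use AVPro; Unity's built-in VideoPlayer is shown here for illustration.)
public class EquirectVideoSphere : MonoBehaviour
{
    public VideoClip clip;            // 4k equirectangular source
    public Renderer sphereRenderer;   // inverted sphere around the camera

    void Start()
    {
        var player = gameObject.AddComponent<VideoPlayer>();
        player.clip = clip;
        player.renderMode = VideoRenderMode.MaterialOverride;
        player.targetMaterialRenderer = sphereRenderer;
        player.targetMaterialProperty = "_MainTex";
        player.isLooping = true;
        player.Play();
    }
}
```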

Here is the thing: I have seen better quality by now in players like Little Star's and in some of the demos at OC3.

So what is the next step here? Cubemaps and adaptive dynamic streaming were both mentioned at OC3 again, but it is very hard to find anything about them here or on the web in general.

I am currently planning a big 360 shoot for a project kicking off next month, and I am unsure whether we need to shoot at more than 4k to support these adaptive ideas.

There are two particular scenarios I am concerned about, and I am interested in what others are doing to solve these issues:

A: There is a very interesting discussion on this forum where the author talks about a Penguin scene and mentions that we could technically use video only for the moving segments of a scene. So let's say we treat the scene as a cubemap and use only one face as a 4k video segment while the rest are still frames; this could of course also be done on a sphere with some transparency shaders. Has anybody done this successfully, and what camera would you use for such a setup, given that a 4k Gear 360 cannot deliver more than 4k in total?
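To make scenario A concrete, here is a rough sketch of what I have in mind, assuming the cube is built from six inward-facing quads and only one face carries a video texture (all component and field names are illustrative):

```csharp
using UnityEngine;
using UnityEngine.Video;

// Scenario A sketch: a cube built from six inward-facing quads around the camera.
// Five faces show still frames; only the face that contains motion plays a 4k video.
public class HybridCubeScene : MonoBehaviour
{
    public Renderer[] stillFaces;   // five quads with baked still textures
    public Texture2D[] stillFrames; // matching still images
    public Renderer videoFace;      // the single face that contains motion
    public VideoClip videoClip;     // 4k clip cropped to that face

    void Start()
    {
        // Assign the static faces once; they never update at runtime.
        for (int i = 0; i < stillFaces.Length; i++)
            stillFaces[i].material.mainTexture = stillFrames[i];

        // Spend the full video budget on the one moving face.
        var player = videoFace.gameObject.AddComponent<VideoPlayer>();
        player.clip = videoClip;
        player.renderMode = VideoRenderMode.MaterialOverride;
        player.targetMaterialRenderer = videoFace;
        player.targetMaterialProperty = "_MainTex";
        player.isLooping = true;
        player.Play();
    }
}
```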

B: More importantly, we have moving scenes, e.g. driving a convertible car. These scenes don't easily qualify for solution A, since there is movement everywhere. This is where the above-mentioned adaptive ideas come to mind: show selective parts of the video based on the user's head position, which would require many video versions and a good prediction algorithm. I want to point out that I want to make this discussion about quality, not about size in terms of streaming or application size. What have you done, and what is available to "normal" people, or even via licensing models, at this moment?
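And a naive sketch of what I imagine for scenario B: quantize the head yaw into a few sectors and hard-switch between pre-encoded variants that concentrate quality in that direction. The variant URLs and the switching logic are purely hypothetical; a real player would need prefetching and prediction:

```csharp
using UnityEngine;
using UnityEngine.Video;

// Scenario B sketch: pick one of N pre-encoded video variants, each with full
// quality in a different viewing direction, based on the current head yaw.
// A real player would prefetch and predict instead of hard-switching like this.
public class ViewAdaptiveSwitcher : MonoBehaviour
{
    public VideoPlayer player;    // plays the currently selected variant
    public string[] variantUrls;  // e.g. 4 variants, one per 90-degree yaw sector (hypothetical)
    int currentVariant = -1;

    void Update()
    {
        // Head yaw from the tracked camera, mapped to a sector index.
        float yaw = Camera.main.transform.eulerAngles.y;
        int sectors = variantUrls.Length;
        int variant = Mathf.FloorToInt(yaw / (360f / sectors)) % sectors;

        if (variant != currentVariant)
        {
            currentVariant = variant;
            double resumeTime = player.time;  // keep playback position across the switch
            player.url = variantUrls[variant];
            player.time = resumeTime;
            player.Play();
        }
    }
}
```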

Just to throw it out there: Are we stuck with Unity here? I hope not but I want to keep this discussion open.

I want to thank anyone who found the time to read through this long post and hope that we can get a discussion going here, even if you just shoot me a bunch of links.

I pledge to keep this post updated with our findings and the approaches we implement to solve this quality issue, both to push the quality of VR forward and to strengthen our sales pitch.

3 Replies

  • "using AVPro to project a 4k equirectangular video file onto a sphere"

    Instead, try using a cubemap with OVROverlay, configured as an underlay so that the other content you render appears on top. You can assign different cubemaps for the left and right eyes to get stereo.
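    Roughly, the configuration could look like this (a minimal sketch; the field names follow the Oculus Unity Integration and may differ between SDK versions):

```csharp
using UnityEngine;

// Sketch: feed left/right cubemaps to an OVROverlay configured as an underlay,
// so that normally rendered scene content appears on top of the video.
// Field names follow the Oculus Unity Integration; check your SDK version.
public class CubemapUnderlay : MonoBehaviour
{
    public Cubemap leftEyeCubemap;
    public Cubemap rightEyeCubemap;

    void Start()
    {
        var overlay = gameObject.AddComponent<OVROverlay>();
        overlay.currentOverlayType  = OVROverlay.OverlayType.Underlay;
        overlay.currentOverlayShape = OVROverlay.OverlayShape.Cubemap;
        overlay.textures = new Texture[] { leftEyeCubemap, rightEyeCubemap };
    }
}
```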
  • benvoss
    Honored Guest
    Thank you for your quick answer.
    I have come across OVROverlay many times during my research and honestly don't understand how it could help me, as its cubemap support is limited to mobile, while my quality issues are present on the Rift as well.
    Also, just to double-check: you mean use a cubemap instead of equirectangular, but I still need a video plugin like AVPro to read the cubemap video and project it correctly, and then attach OVROverlay to the GameObject to allow rendering to the composition layer? Are there any examples of this, as the OVR sample framework doesn't seem to use it for video playback?
  • OVROverlay can currently render only quads on Rift. We are adding cubemap support soon, but you could emulate it today by drawing 6 world-locked quads for the cube faces. You would have to write a script to make the cube follow the head, and decode and copy the cube face textures to plain D3D11Texture2D resources in a native plugin that uses ffmpeg's img_convert function. I'll see if we can put together a sample for this.
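For reference, here is a rough Unity-side sketch of the six world-locked quads following the head (the native decode and texture-upload path is left out, and all names are illustrative):

```csharp
using UnityEngine;

// Sketch: six world-locked quads arranged as an inward-facing cube that follows
// the head position (but not its rotation). Each quad would receive one decoded
// cube-face texture from the native plugin; that upload path is omitted here.
public class VideoCubeFollower : MonoBehaviour
{
    public Transform head;           // e.g. the tracked center-eye anchor
    public Material[] faceMaterials; // six materials, one per cube face
    const float Size = 10f;

    void Start()
    {
        // Outward directions of the six cube faces, with a valid "up" for each.
        Vector3[] normals = {
            Vector3.forward, Vector3.back, Vector3.left,
            Vector3.right, Vector3.up, Vector3.down
        };
        Vector3[] ups = {
            Vector3.up, Vector3.up, Vector3.up,
            Vector3.up, Vector3.back, Vector3.forward
        };

        for (int i = 0; i < 6; i++)
        {
            var quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
            quad.transform.SetParent(transform, false);
            quad.transform.localPosition = normals[i] * (Size * 0.5f);
            // Unity quads face -Z, so pointing +Z outward makes the face visible from inside.
            quad.transform.localRotation = Quaternion.LookRotation(normals[i], ups[i]);
            quad.transform.localScale = Vector3.one * Size;
            quad.GetComponent<Renderer>().material = faceMaterials[i];
        }
    }

    void LateUpdate()
    {
        // Keep the cube centered on the head so the faces stay world-aligned
        // in orientation but never drift away from the viewer.
        transform.position = head.position;
    }
}
```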