Forum Discussion

🚨 This forum is archived and read-only. To submit a forum post, please visit our new Developer Forum. 🚨
Fragilem17
Explorer
10 years ago

Thought experiment on stereo 3D and depthmaps

Hi Guys,

I've been studying how to make my own 360 stereo pictures and video.
There are some great ones in Oculus Home on the Samsung Gear VR.
I've used VR Player and others to evaluate 180/360 stereo video and tried a lot of stuff. I've read up on what's coming and read a lot on this forum to see how people are approaching the problem.

The main problem, in my opinion, is the fact that there is no positional tracking (obviously, as it's pre-rendered or shot from a fixed position).

Take a look at this: http://depthy.me/#/sample/tunnel

That's using a depth map (which can easily be extracted from the Google Camera app on an Android phone) to do parallax. Naturally the algorithm is "imagining" pixels, as there is only so much information (you can't look behind a door or something).

Can't we somehow mix those two methods of presenting a stereo 360 video to the user?

I imagine an mp4/image where most of the texture resolution is used for two views from a distance apart (somewhat more than the normal IPD would dictate), plus a somewhat lower-resolution, offline-calculated depth map. The video player would then mix the depth map and the two views in real time to generate a roughly correct stereogram, in which you could slightly move your head without getting sick and where the user's exact IPD could be taken into account.
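To make the idea concrete, here is a toy sketch of the per-row warp I have in mind, in Python/NumPy (the `scale` factor and the "larger value = nearer" depth convention are just my assumptions, not part of any real format):

```python
import numpy as np

def reproject(image, depth, head_offset, scale=8.0):
    """Forward-warp one eye's view sideways using its depth map.

    image: (H, W, 3) array; depth: (H, W) array in [0, 1], 1.0 = nearest.
    Near pixels get a larger horizontal disparity.  The gaps that open up
    behind them stay black -- that is exactly the missing information the
    player would have to fill in somehow.
    """
    h, w, _ = image.shape
    out = np.zeros_like(image)
    # Disparity grows with the depth value (nearer = bigger shift).
    disparity = np.round(head_offset * scale * depth).astype(int)
    cols = np.arange(w)
    for y in range(h):
        # Write far pixels first and near pixels last, so near wins overlaps.
        order = np.argsort(depth[y], kind="stable")
        new_x = np.clip(cols + disparity[y], 0, w - 1)
        out[y, new_x[order]] = image[y, order]
    return out
```

The black holes this leaves behind near objects are where the second view (or the "imagined" pixels) would have to step in.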

Has somebody worked on such a system? Is something like that already in the works? Any thoughts on the feasibility?

Thanks already!
Tom

[EDIT: Sounds a bit like light-field stuff as I re-read it. http://www.fxguide.com/featured/light-f ... -vr-ar-mr/]
[EDIT2: I've seen the demo from Nozon called PresenZ and it blew me away. Having pre-rendered 3D with positional tracking is awesome; we just need an efficient way to capture it, store it, and play it back.]

7 Replies

  • Anonymous
    A depth map can help in certain circumstances (people have been experimenting with it), but as soon as you move much, it reveals that you're lacking visual information. If the image is shown mono (no stereo), there is no extra information to reveal when you shift your head to see more of the side of an object. Stereo provides a little more, but you still lack image information, so you either reveal black or simply smear the pixels on the side of everything in the view like a rubber sheet. It's how some poorly converted 2D-to-3D movies look.

    Light fields are a different thing. They don't require depth maps, although they can use them, since they carry more image information. With light fields, images are rendered to cover a certain range of motion, whereas other videos are pre-rendered and locked to a single viewpoint.
  • I'm also interested in this. I have stereo 360 panoramas with depth maps, and I'd like to provide even a little bit of parallax for headsets with positional tracking.

    How do I achieve this? I'm using two cubes (one for each eye) with the six side skyboxes as textures. I only have a Gear VR, so I have no way to test depth maps with positional tracking. Maybe someone with a Rift can help?
  • I have both the Rift and the Gear VR to test with, and I also really want this tech to use in my next project.
    I don't have the skills to write the shader, though, but I might give it a try later. Can anybody help? :-s

    I'm guessing what we need is a shader/material that does the following when added to an inverted cube (using Unity):

    take 6 textures for the left eye (1536x1536 each, to make full use of the S6/Note 4 resolution)
    take another 6 for the left-eye depth map (greyscale images, I guess)
    take 6 for the right eye
    take 6 for the right-eye depth map

    Then add some sliders to control/multiply the depth and stereo effect.
    The shader should be compatible with Unity's VR system, so there is no need for two cubes/two cameras and culling masks.

    I've attached a GIF of my "workaround" for now: I've rendered multiple stereo cubes to get the parallax effect. It adds a great deal of realism, but you're also very aware of the stuff that is not correct. It only works if you have objects with clear space between them...
  • Yes, you can do this. It's called stereo reprojection, and was used (for example) to make Crysis 2 support 3D monitors. Basically it uses a single 2D image and a depth map to "reproject" the pixels into 3D space and adjust their appearance (for example, moving slightly to the side for a second camera view).

    The main issue is with occlusions. For example, in a first-person shooter you would not see anything behind the gun in the original render, so once you make it stereo (even worse with head-tracking parallax) you will have a halo of blank pixels around the gun. There are different ways you can try to mask the issue, by stretching pixels or using nearby textures/colors, but it will be a noticeable artifact.

    That said, you would have to weigh that against the artifacts you get with a static stereo cube map, namely that you can't roll your head, and that stereo becomes distorted the further away you look from the cardinal axes. So the occlusion halos may actually be a better choice in that respect, though it is a trade-off.
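The "stretching pixels" masking can be sketched per scanline: after the warp, each disocclusion hole is filled by smearing in the nearest valid pixel from its left. A toy CPU version (assuming holes are already flagged in a mask; this is exactly what produces the rubber-sheet look mentioned earlier):

```python
import numpy as np

def fill_holes(row, hole_mask):
    """Fill disocclusion holes in one scanline by repeating the nearest
    valid pixel to the left (cheap 'stretching' -- causes edge smearing)."""
    out = row.copy()
    last = out[0]
    for x in range(len(out)):
        if hole_mask[x]:
            out[x] = last      # smear the last known pixel into the hole
        else:
            last = out[x]      # remember the most recent valid pixel
    return out
```

Fancier fillers blend from both sides, or prefer the background (farther) side of the hole, which hides the halo a little better.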
  • Thanks cyberreality. Like most things in real-time CGI, this too will be a trade-off, and it's definitely not something that would work in every scenario.

    I've kept googling and came across this post on Relief Mapping with Correct Silhouettes. I tried the shader in Unity, and with the simple "rocks on the ground" example, the result is stunning in VR! Really cool to have "correct"-looking positional tracking on a bunch of photorealistic rocks using only a single quad.

    http://forum.unity3d.com/threads/fabio- ... tes.32451/
    (I used the reissgrant Unity package in that post and modified line 89 in the shader, changing float4 to half3, to make it work in recent versions.)

    Now going to do more tests and try the same thing with a 360° cubemap! Exciting!
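For anyone curious what that shader is doing under the hood: the core of relief mapping is a ray march through a height field until the view ray dips below the surface. A minimal CPU sketch of that loop (1D height field, linear search only; the names and step count are my own, this is not the shader from the link):

```python
def relief_intersect(heightfield, origin, direction, steps=64):
    """March a ray against a 1D height field and return the x coordinate
    where it first drops below the surface, or None if it misses.

    heightfield: list of heights in [0, 1]
    origin: (x, h) entry point of the ray above the surface
    direction: (dx, dh) per-step offset, with dh <= 0 (ray descends)
    """
    x, h = origin
    dx, dh = direction
    n = len(heightfield)
    for _ in range(steps):
        xi = min(int(x), n - 1)
        if h <= heightfield[xi]:
            return x          # ray went below the surface: a hit
        x += dx
        h += dh
        if not 0 <= x < n:
            return None       # ray left the height field
    return None
```

The real shader refines the hit with a binary search, and as I understand it, the "correct silhouettes" part comes from discarding fragments whose rays exit the volume instead of clamping them to the border.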
  • Alright, I've had some successes, but it really is not usable at this stage. The artifacts become really noticeable really fast. I'm using two equirectangular spheres now (don't try this with a box, it won't work). The effect is there and really cool, but move a bit too far in any direction and the illusion is broken.

    The shader is also way too heavy for use on mobile, which defeats the purpose. (On desktop we could just keep most of it in geometry anyway.)

    I'm not giving up on this, but a system to render a real light field and play it back at reasonable speed would be awesome, and to me that is the holy grail here.
  • Yeah, I remember when parallax mapping first came out everyone thought it was the future but then it was too taxing and hardly any games ever used it. I believe hardware tessellation is viable (at least on desktop) though I don't know what the performance cost would be versus relief mapping.