Forum Discussion
cheerioboy (Explorer) · 11 years ago
Stereoscopic 3D & head tracking
Anyone have thoughts on how we can push believability in stereoscopic 3D with the upcoming six-degrees-of-freedom head tracking?
I think this subtle amount of head shifting would just seal the deal in the believability of a standing static VR view, be it recorded footage or a 3D rendering.
My first thought is to use depth data on top of the left/right image data to distort the view opposite the direction your head moves. For 3D renders we can simply render a depth map; for video you can use software to pull depth data from a stereo pair:
http://www.compression.ru/video/3d_display_video/depth_map_generation_en.html
http://www.yuvsoft.com/the-foundry-nuke-plugins/depth-from-stereo-nuke/
http://www.thepixelfarm.co.uk/product.php?productId=PFDepth
I wonder if any Rift software will incorporate this, or if anyone has other ideas on this topic?
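A minimal sketch of what that depth-based distortion could look like, assuming a single grayscale view plus a per-pixel depth map (the function name, the `focal` scale factor, and the whole model are illustrative, not any existing Rift feature):

```python
import numpy as np

def reproject(image, depth, head_dx, focal=100.0):
    """Warp a single (grayscale) view to simulate a small sideways head
    translation head_dx, using a per-pixel depth map. Forward mapping:
    each pixel shifts opposite the head, scaled by focal/depth, so near
    objects show more parallax than far ones. Disocclusion holes are
    left as zeros -- the background the original camera never saw."""
    h, w = image.shape
    out = np.zeros_like(image)
    ys, xs = np.indices((h, w))
    # parallax in pixels; near pixels (small depth) shift further
    new_x = xs - np.round(head_dx * focal / depth).astype(int)
    valid = (new_x >= 0) & (new_x < w)
    out[ys[valid], new_x[valid]] = image[valid]
    return out
```

Note the zeros left behind: forward-warping by depth exposes background that was never captured, which is exactly the fill-in problem raised later in the thread.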
6 Replies
- mediavr (Protege): There is a technology called "concentric panoramas" where you have an off-axis rotating video camera, and from the views it generates you can produce an interactive experience where you can move freely in a concentric annular zone around the centre of rotation. There used to be a demo of it online once, though the demo had a very narrow vertical view.
"The present invention involves a new approach to computing a 3D reconstruction of a scene and associated depth maps from two or more multiperspective panoramas. These reconstructions can be used in a variety of ways. For example, they can be used to support a 'look around and move a little' viewing scenario or to extrapolate novel views from original panoramas and a recovered depth map."
https://www.google.com/patents/US6639596
Depth map generation from stereo panoramas and stereo panoramic videos is certainly possible, either automatically or manually. With stereo panoramic video you can produce high-quality depth maps at key frames (manually and/or automatically), which act as cues for automatic depth on the intermediate frames; see for instance Dimenco's @depth tools for semi-automatic conversion:
http://www.dimenco.eu/our-products/depth/
Something like this optimized for panoramic content would be good
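For a feel of what the automatic side of this involves, here is a deliberately naive block-matching disparity sketch in Python. It is nowhere near the quality of the commercial tools linked above, but it shows the core idea: search each left-image patch for its best horizontal match in the right image.

```python
import numpy as np

def disparity_map(left, right, max_disp=16, block=5):
    """Naive block-matching stereo: for each pixel in the left image,
    find the horizontal shift into the right image that minimizes the
    sum of absolute differences (SAD) over a small patch. Real tools
    add regularization, sub-pixel refinement, and occlusion handling."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1]
            best, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y-half:y+half+1, x-d-half:x-d+half+1]
                sad = np.abs(patch - cand).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

Disparity converts to depth via the usual depth = focal × baseline / disparity relation, which is why a clean disparity map is effectively a depth map.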
Kolor made a prototype software for depth map generation from vertically separated panoramas
http://www.kolor.com/blog-en/2012/09/07/kolor-labs-measurement-using-panoramas-by-alexandre-jenny/
I was thinking that Panocam would actually be an interesting capture device for depth map capture if you had two of them, one on top of the other.
Many years ago there was an interactive 360 panorama software called Smoothmove which, as the name implied, had a very smooth interactive experience: you could right-click to move forward or back along a path in a rendered architecture scene, and at any point you could stop and look around 360/180. You were on rails, but it felt natural.
So you could in fact, with a video sequence of nice 360 or 180 stereo panoramas, duplicate this rail experience as a sort of one-directional head tracking, and it would be compelling and natural-seeming with some subjects, I think.
- cheerioboy (Explorer): mediavr,
As always you're a wealth of information :)
So this seems to further confirm methods of creating depth maps for video; the question will be how well it works if implemented in any of the VR players. The method could possibly already be tested with the original Oculus Rift, if the player supports a head-neck model to shift from.
I will try to wait patiently and see... I should also work on more content to eventually test this with :)
- j1vvy (Honored Guest): Great information.
I spent several hours last night looking for software options to do exactly this.
I want to change my multi-camera panorama rig from stereo pairs to cameras pointing outward. Use lenses with enough FoV that they overlap and capture the entire sphere 3 times. Use the center of each to generate a pano and the adjacent views to calculate the depth map.
The thing I don't like about depth maps is that when recreating stereo pairs that have objects in the foreground, a little bit of the background needs to be computer-generated. Why not warp or compress the background a bit when creating the image, then expand the background when viewing it?
I see this being quite easy to implement with CG ray-traced graphics. Imagine the depth map having a gravitational pull when tracing the rays to create the image, and doing the reverse when viewing it.
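A 1-D sketch of that warp idea, using a crude nearest-neighbour stretch (the function and the recorded offsets are hypothetical illustrations, not an existing tool): instead of leaving disocclusion holes empty, borrow the adjacent background and record how far each sample was pulled, so a viewer could in principle undo the compression later.

```python
import numpy as np

def fill_by_stretch(row, hole_mask):
    """Fill disocclusion holes in a 1-D scanline by stretching the
    background to the right of each hole across the gap, recording the
    per-pixel displacement so the operation is reversible in a viewer."""
    filled = row.copy()
    warp = np.zeros(len(row))              # per-pixel displacement record
    for i in np.where(hole_mask)[0]:
        # walk right to the nearest non-hole sample (the background side)
        j = i
        while j < len(row) - 1 and hole_mask[j]:
            j += 1
        filled[i] = row[j]
        warp[i] = j - i
    return filled, warp
```

The `warp` array is essentially the "warp map" proposed just below: it encodes how the background was compressed, rather than hallucinating pixels that were never captured.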
Harder to create from multiple images, but it should be doable after the depth map is created. Unless someone has already invented this, maybe call it a "warp map".
- cheerioboy (Explorer): j1vvy,
What a wonderful visual you just gave me with your explanation of warping! Current VR players use a sphere to project the panorama onto; I hadn't thought that the depth map could, in essence, be used to displace the sphere, pushing the background parts further away.
- mediavr (Protege): If you want to try stereo video depth map extraction, I notice there is a demo you can apply for from YUVsoft:
http://www.yuvsoft.com/products/stereo-processing-suite-lite/
One of the tools in the suite is a depth map extraction tool (actually a difference map tool, but the idea is similar).
Gimpel3d is free and highly featured, but I could never get it to work on my Win7 PCs:
http://www.gimpel3d.com/
Also, I think Dimenco will send you a trial of @depth too if you ask them.
Most professional 3D video conversion is done with Nuke (very expensive) or proprietary tools, I think.
- cheerioboy (Explorer): Well, I plan to work with 3D renders and automatically create depth maps for those views, but I wanted to get a feel for a depth map's usefulness in the situation of this new head tracking. It could be an awful solution for all I know. But it's good to know that video folks would also have a way to create these depth maps for use in a VR player, if such a feature were integrated.
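As a closing sketch, the depth-displaced projection sphere idea from earlier in the thread might look like this (a hypothetical helper; it assumes an equirectangular depth map where rows map to latitude and columns to longitude):

```python
import numpy as np

def displaced_sphere(depth_map):
    """Build vertex positions for a panorama projection sphere whose
    per-vertex radius comes from an equirectangular depth map, so small
    head translations produce real parallax instead of a flat shell.
    Returns an (h, w, 3) array of Cartesian vertex positions."""
    h, w = depth_map.shape
    lat = np.linspace(np.pi / 2, -np.pi / 2, h)[:, None]   # +90..-90 deg
    lon = np.linspace(-np.pi, np.pi, w)[None, :]           # -180..+180 deg
    r = depth_map                                          # radius = depth
    x = r * np.cos(lat) * np.cos(lon)
    y = r * np.sin(lat)                                    # up axis
    z = r * np.cos(lat) * np.sin(lon)
    return np.stack([x, y, z], axis=-1)
```

Feeding these vertices to the player's sphere mesh instead of a fixed radius is the "displace the sphere" idea: the background vertices sit further out, so they move less as the head translates.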