Forum Discussion
cheerioboy
12 years ago · Explorer
3d generated imagery with depth map
Anyone know a good way to view these? I've been doing some tests with rendering 3D 360 still images. Since I believe it's not currently possible to create a pair of 360 renders, I'd be happy just being able to shift the views using the depth pass. Does the depth pass need to cover a specific depth range to generate the correct image?
I'm using V-Ray with 3ds Max, in case someone wants to code me a special 3D camera that renders a pair of 360-degree images :)
19 Replies
- PatHightree (Honored Guest): Funny, just today I was thinking along the same lines.
I think the depth pass has enough information to create a stereoscopic image, but for that you need to perform a stereoscopic reprojection.
Here's a page with a white paper, demo and an example shader.
http://www.marries.nl/graphics-research/stereoscopic-3d/
There was also an article in Game Developer magazine about this, but I can't find it right now.
Please be aware that this process is not perfect: some areas of the output image will be missing because they were occluded in the input image.
I'm a Unity developer, so I won't be coding Max plugins, but I'll post here if I get something working.
- cheerioboy (Explorer): Interesting approach. I'm familiar with doing stereo reprojection in Nuke, using a displace node and the z-depth, although I'm not sure whether there's a 'correct' amount to displace for a given value in the depth map. I've always just done it by eye; I'd need to first test it with anaglyph glasses on my monitor before exporting.
But that's a great workflow idea, thanks for sharing some input!
- mediavr (Protege): VR Player supports depth maps in over/under and side-by-side mode, under the Effects menu.
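As an aside on the 'correct' displacement amount: for a standard stereo setup it can be derived from depth via disparity = baseline × focal length (in pixels) / depth. A minimal NumPy sketch of that reprojection (the baseline and focal-length values here are made up for illustration, and nearest-pixel conflicts are resolved with a z-buffer); pixels that nothing maps onto come out as holes, which is exactly the occlusion problem mentioned above:

```python
import numpy as np

def reproject_view(image, depth, baseline, focal_px, sign=1):
    """Synthesize one eye's view by shifting each pixel by its disparity.

    disparity (px) = baseline (m) * focal length (px) / depth (m).
    When two source pixels land on the same target, the nearer one wins.
    Targets never written to are returned as holes (the occluded areas).
    """
    h, w = depth.shape
    out = np.zeros_like(image)
    zbuf = np.full((h, w), np.inf)  # depth of whatever currently occupies a pixel
    disparity = np.round(sign * baseline * focal_px / depth).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + disparity[y, x]
            if 0 <= nx < w and depth[y, x] < zbuf[y, nx]:
                out[y, nx] = image[y, x]
                zbuf[y, nx] = depth[y, x]
    holes = ~np.isfinite(zbuf)  # never written -> occlusion hole
    return out, holes

# Toy example: a 1x4 grayscale row; the first pixel is near (1 m), rest far (10 m).
img = np.array([[10, 20, 30, 40]], dtype=np.uint8)
dep = np.array([[1.0, 10.0, 10.0, 10.0]])
view, holes = reproject_view(img, dep, baseline=0.01, focal_px=100.0)
```

The near pixel shifts by one column while the far ones stay put, leaving a hole where the near pixel used to be; that hole is what has to be painted in (or eyeballed around) in a Nuke-style displace workflow.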
- cheerioboy (Explorer)
"mediavr" wrote:
VR Player supports depth maps in over/under and side by side mode ... under the Effects menu
Yep. This is what I've been using currently - although the most recent update removed an offset slider for the over/under distortion. So now I'm not sure what I need to do. Hopefully it'll be re-implemented.
With VR Player I'm still just eyeballing the offset to find something that is comfortable to look at. But does that translate into the correct depth within the image?
- andrew3d (Honored Guest): Could you just physically set up two cameras and render out two images side by side, or does this still come down to eyeballing it? For example, use the V-Ray physical camera and emulate the human FOV and average eye distance. I guess it has nothing to do with a depth map, but it's possibly a solution to the same problem for a still image.
Edit: I guess this suggestion is irrelevant to the topic, but I'm still curious whether anyone has tried this approach.
- cheerioboy (Explorer)
"andrew3d" wrote:
Could you just physically set up 2 cameras and render out 2 images side by side? or does this still go along the lines of eyeballing it? For example use the vray physical camera and emulate the human FOV and average eye distance. I guess it has nothing to do with a depth map but possible a solution for the same problem for a still image.
edit> guess this suggestion is irrelevant to the topic, but still curious if anyone has tried this approach.
I haven't tried this approach yet because, although setting up two cameras might give you the proper view looking forward, as you turn 90 degrees in either direction the two views align along the camera baseline and the depth perception fades. I believe you need a more complex setup: either a special camera, coded to render only slivers of the space as it rotates on an axis point, or manually rendering multiple angles and projecting/stitching them together.
http://www.stereopanoramas.com/blog/
Peter Murphy has been inspiring my thoughts on taking the process of DSLR stereo photography and using it with 3D rendering.
- mediavr (Protege): Actually, it is possible to render stereo panoramas from CG environments by slice assembly:
http://paulbourke.net/stereographics/stereopanoramic/
http://paulbourke.net/papers/omnistereo/omnistereo.pdf
but you have issues, identical to those confronting viewing of real-world stereo panoramas captured the usual ways, with very tilted or very wide-angle views. Nadir and zenith views, for instance, go from stereo to pseudostereo as you rotate the view 180 degrees while still looking up or down. So depth maps are a good potential answer, especially now that VR Player supports them. Another approach would be to render (or capture from the real world) hundreds of stereo views in all directions and then interpolate views as the user navigates through them.
PeterM
- Anonymous: I heard a talk about this at a conference a while back (see links below). For viewing high-resolution panoramas like gigapans, I think the problem is that you need to change the spacing between eyes for any given part of the picture. Gigapans use tiles, like Google Maps and other open-zoom formats. As you zoom in on a part of the panorama from far away, the overlap of the tiles needs to shift dynamically. The 3dpan guys got around this by having you move your mouse to the part of the image you were looking at, and they'd calculate the separation between the two images on the fly so it was always correct. So I imagine you could do something similar with your pano-viewing software on the Rift, figuring out how far apart you want your two images on the fly. More info on that project here:
http://3dpan.blogspot.com.au/2010/11/3dpanorg-goes-live.html
and
https://code.google.com/p/stereogigapan/
- mediavr (Protege):
For viewing high resolution panoramas like gigapans I think the problem is that you need to change the spacing between eyes for any given part of the picture.
Yes, that is true of high-resolution stereo panoramas and zooming in. But there is another, different, less solvable problem with very wide-angle and very tilted views of stereo 360 panoramas generated by assembling vertical slices from multiple horizontal stereo equirectangular images derived from rotating fisheye camera(s), which is the usual way stereo spherical images or renders are produced.
The gigapan stereo images they have applied their zooming technique to are of limited vertical extent. For the Rift, spherical (or at least vertically wide) images are the immersive ones.
- cheerioboy (Explorer)
"mediavr" wrote:
Actually it is possible to render stereo panoramas from CG environments by slice assembly
http://paulbourke.net/stereographics/stereopanoramic/
http://paulbourke.net/papers/omnistereo/omnistereo.pdf
but you have issues -- identical to those confronting viewing of real world stereo panoramas captured the usual ways -- with very tilted or very wide angle views. So that nadir and zenith views, for instance, have stereo then pseudostereo effects as you rotate the view (still looking up or down) 180 degrees. So depth maps are a good potential answer, especially now we have VR Player supporting them. Another approach would be to render (or capture from the real world) hundreds of stereo views in all directions and then interpolate views as the user navigates through them.
PeterM
Awesome links, thanks for sharing. I'm going to look into some manual approaches, taking renders and doing post work to get them working. Otherwise, I hope one day I can find a programmer to create a custom V-Ray camera or lens shader that does this work for me :!:
I'll post more images if I come up with anything new
thanks!
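For anyone attempting the custom-camera route: the slice-assembly idea from Paul Bourke's pages linked above boils down to giving every column of the panorama its own pair of ray origins, offset half the interocular distance perpendicular to that column's viewing direction. A Python sketch of the geometry only (this is not V-Ray shader code; the function name and the 0.065 m eye separation are illustrative assumptions):

```python
import math

def omnistereo_ray(column, width, eye_sep=0.065, eye=+1):
    """Top-down ray origin and direction for one panorama column (omnistereo).

    Each column looks out at its own azimuth. The eye position is offset by
    half the interocular distance along the tangent (perpendicular to the
    view direction), so every viewing direction keeps correct horizontal
    parallax instead of the parallax fading at 90 degrees as it would with
    two fixed cameras. eye = +1 for the right eye, -1 for the left.
    """
    azimuth = 2.0 * math.pi * column / width             # column -> angle
    dir_x, dir_y = math.sin(azimuth), math.cos(azimuth)  # view direction (unit)
    off = eye * eye_sep / 2.0
    origin = (off * dir_y, -off * dir_x)                 # view dir rotated 90 deg
    return origin, (dir_x, dir_y)

# Column 0 looks along +Y; the right eye sits eye_sep/2 to its right, along +X.
origin, direction = omnistereo_ray(0, 3600, eye_sep=0.065, eye=+1)
```

Rendering one thin vertical slice per column from these moving origins and stitching them side by side is exactly the "slivers of the space as it rotates on an axis point" setup described earlier in the thread; it fixes the horizontal parallax everywhere but still leaves the nadir/zenith problems mediavr mentions.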