Forum Discussion
AGr_
13 years ago · Honored Guest
3D Streetview?
I am interested in trying to capture a static image sphere using two cameras in order to create a Street View-like stereoscopic 3D experience, where a user wearing the Rift can pan around the image just by looking around.
Does anyone know of examples of existing implementations? My initial thoughts are that you would need a lot more data than simply two image spheres. So I was thinking of mounting two cameras on a head/neck model, pivoting them around to take lots of images at different view angles, and then using those reference images to synthesize new images for each eye depending on where exactly the user is currently looking. Can anyone suggest some reading on how best to do that image interpolation and what kind of image reprojections might be needed?
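To make the interpolation question concrete, here is a rough sketch of the naive version of what I have in mind, in Python/NumPy (everything here is hypothetical, not an existing implementation): tag each captured reference view with the yaw it was shot at, then cross-fade between the two nearest views for the user's current yaw. Plain blending like this would ghost on nearby objects, which is exactly why I am asking about proper interpolation/reprojection.

```python
import numpy as np

def interpolate_view(refs, angles, yaw):
    """refs: list of HxWx3 reference images; angles: capture yaw of each
    reference in radians; yaw: the user's current view yaw in radians."""
    # Signed angular difference to each reference, wrapped to [-pi, pi).
    diffs = (np.asarray(angles) - yaw + np.pi) % (2 * np.pi) - np.pi
    order = np.argsort(np.abs(diffs))
    a, b = order[0], order[1]          # the two nearest reference views
    span = abs(diffs[a]) + abs(diffs[b])
    t = abs(diffs[a]) / span if span > 0 else 0.0
    # Naive cross-fade; a real system would warp pixels using correspondences.
    return (1 - t) * refs[a].astype(float) + t * refs[b].astype(float)
```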
I know Google Street View had an anaglyph mode for a while, but from what I have read that was generated by combining a single image with depth data from the laser scanner, so it's not exactly the same approach.
18 Replies
- Colth · Honored Guest
Guess it's not exactly what you are looking for, but I remember a project from Microsoft Research where you could just upload tons of photos from a particular area and it was able to place and connect them in three-dimensional space. Maybe that could help or inspire?
http://photosynth.net/
http://research.microsoft.com/en-us/um/ ... hotoTours/
- edzieba · Honored Guest
"brandonagr" wrote:
You would be correct. I'm not sure if using a spaced-camera rig and taking a whole lot of images at different orientations would work, but one approach would be to use a single 360° capture along with a depth map to build a model of the scene which could then be rendered to provide depth.
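As a rough illustration of that approach (a sketch only; the equirectangular layout, depth in metres, and ~64 mm eye separation are all assumptions): reproject each panorama pixel's 3D point into a camera shifted sideways by half the interpupillary distance, once per eye.

```python
import numpy as np

def synth_eye(pano, depth, eye_offset):
    """pano: HxWx3 equirectangular image; depth: HxW in metres;
    eye_offset: 3-vector from the capture point to the eye, in metres."""
    h, w = depth.shape
    # View direction for every equirectangular pixel.
    lon = (np.arange(w) / w - 0.5) * 2.0 * np.pi
    lat = (0.5 - (np.arange(h) + 0.5) / h) * np.pi
    lon, lat = np.meshgrid(lon, lat)
    dirs = np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    # 3D point behind each pixel, then re-viewed from the offset eye.
    pts = dirs * depth[..., None] - eye_offset
    r = np.linalg.norm(pts, axis=-1)
    new_lon = np.arctan2(pts[..., 0], pts[..., 2])
    new_lat = np.arcsin(np.clip(pts[..., 1] / r, -1.0, 1.0))
    # Forward-splat each source pixel (nearest neighbour); disocclusion
    # holes are simply left black in this sketch.
    u = ((new_lon / (2.0 * np.pi) + 0.5) * w).astype(int) % w
    v = ((0.5 - new_lat / np.pi) * h).astype(int).clip(0, h - 1)
    out = np.zeros_like(pano)
    out[v, u] = pano
    return out

# left  = synth_eye(pano, depth, np.array([-0.032, 0.0, 0.0]))
# right = synth_eye(pano, depth, np.array([+0.032, 0.0, 0.0]))
```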
- AGr_ · Honored Guest
Simply rendering an already-reconstructed 3D model (generated from stereo images, a Kinect, or something like Photosynth) would probably be easier, but then I would be limited by scene complexity and how well it could be captured/reconstructed; I was hoping to get much more detail by doing everything in image space with reference images. My initial motivation was to create stereoscopic videos with a simple camera rig, but a plain video in the Rift would not be as immersive as the user actually being able to look around the environment. Of course, I don't even know if my idea of capturing tons of reference images is feasible; I was expecting to find examples of people who have already done something similar.
This is the closest paper I have found to what I was thinking of for generating new images: "Physically-Valid View Synthesis by Image Interpolation". Most of the papers I have found are about synthesizing new views from wide-baseline cameras for teleconferencing applications.
- Patrickshirley · Honored Guest
So I actually tried this a couple of months back with two iPhones set up side by side, using two slightly offset captured panospheres. The main problem I encountered was that as you turned 360 degrees, the 'eyes' inverted. In the end the workaround involved blending the images in Photoshop to maintain left-eye/right-eye position. It ended up working pretty well.
Perhaps an option is to set up a dual-camera capture rig so that left- and right-eye views stay in place when you turn the tripod, and to capture at a relatively long focal length so there are more steps in the stitch. Then somehow set up visibility options in software, with the two panos on two spheres each visible to only one eye's camera.
It's not really gaming, but it works great in early-concept architectural visualization where you want to control where the client is seeing things from.
- mwilcox · Honored Guest
http://oculusstreetview.eu.pn/
- AGr_ · Honored Guest
"mwilcox" wrote:
http://oculusstreetview.eu.pn/
That looks interesting. Is it just taking GSVPano.js and rendering the WebGL sphere from each eye position? Wouldn't that have no stereoscopic effect/depth information when viewed with an Oculus (other than looking like you are inside a sphere textured with a panorama)? Of course, I don't have mine yet, so I can't test it out.
- QPORIT · Explorer
To make a standard 3D video from stills:
You do not need a stereo rig. Since you are taking a series of pictures, the nth picture Pn and the (n+1)st picture Pn+1 form a stereo pair (if they are the right distance apart).
A series of pictures forms a movie. The same series displaced by one frame forms the pictures for the other eye. The left-eye images and right-eye images must be appropriately combined; some 3D video editors may be able to do this easily, and you should be able to do it manually.
The eye is moving to see the object from different angles.
This is a stereo view. Constructing a virtual 3D model needs additional information and processing. But in any case, a stereo camera may not be necessary.
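A minimal sketch of the frame-offset idea, assuming OpenCV and a sideways-tracking source clip (the file names are placeholders): each frame is paired with the next one and written out side by side.

```python
import cv2

reader = cv2.VideoCapture("pan.mp4")   # camera tracking sideways
writer = None
prev = None
while True:
    ok, frame = reader.read()
    if not ok:
        break
    if prev is not None:
        # prev is picture Pn (one eye), frame is picture Pn+1 (the other);
        # which one is the left eye depends on the direction of motion.
        sbs = cv2.hconcat([prev, frame])
        if writer is None:
            h, w = sbs.shape[:2]
            writer = cv2.VideoWriter("stereo_sbs.mp4",
                                     cv2.VideoWriter_fourcc(*"mp4v"),
                                     reader.get(cv2.CAP_PROP_FPS), (w, h))
        writer.write(sbs)
    prev = frame
reader.release()
if writer is not None:
    writer.release()
```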
- AGr_ · Honored Guest
Finally found some examples, referred to as omnistereoscopic panoramas/images:
http://csc.lsu.edu/~kooima/research.html
http://www.ben-ezra.org/omnistereo/omni.html
http://www.luisgurrieri.net/publications/spcamera/
- Yam · Honored Guest
Sadly, the only real way to do this effectively, in my opinion, is to build a 3D world.
This is what I believe Google Street View stereo actually does. It works well for flat surfaces like buildings, but if you look at trees or anything vaguely organic you can tell it is not a true stereo image.
edit: I now think my initial skepticism is unfounded. Looking at the links above, you would need to be really careful about the position of the pivot point of both cameras, but it does appear very possible (although I have no idea what would happen if you tilt your head, and I'm not sure what that would do to the stitching of the images).
- Bruce · Honored Guest
Shooting and stitching spherical panoramic images captured from a nodal position is a standard method among panoramic photographers. To extend this method to stereoscopic panoramas, I have moved the camera from the nodal position to the left and to the right, as if the camera were on a stereo bar. The bar rotates about the nodal position in each panoramic sequence of shots. (This is equivalent to viewing a scene where you are looking all around and your head is turning about a point located between your two eyes.) For my Oculus Rift tests this provides three sequences of images suitable for stitching into three stereoscopically related spherical panoramas.
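To make the capture pattern concrete, here is a small sketch that enumerates the shot positions as the bar rotates about the nodal position (the 30-degree spacing and 65 mm separation below are assumed values for illustration, not my exact settings):

```python
import math

half_sep = 0.0325      # metres from the nodal position to each eye camera
shots_per_ring = 12    # assumed: one shot every 30 degrees

for i in range(shots_per_ring):
    yaw = 2.0 * math.pi * i / shots_per_ring
    for name, offset in (("left", -half_sep), ("nodal", 0.0), ("right", half_sep)):
        # The stereo bar stays perpendicular to the view direction, so each
        # eye camera sits on a small circle around the nodal position.
        x = offset * math.cos(yaw)
        z = -offset * math.sin(yaw)
        print(f"shot {i:02d} {name:>5}: pos=({x:+.4f}, 0, {z:+.4f}) m, "
              f"yaw={math.degrees(yaw):5.1f} deg")
```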
When stitched together, the three spherical panoramas (left, nodal/center, and right) are each encoded in the standard 2:1 format corresponding to the 360-by-180-degree capture. This 2:1 data must now be fed to the left- and right-eye displays of the Oculus Rift. These three stereoscopically related spherical panoramas can also show the effects of different lens separations, which should be of interest.
HOW TO FEED PHOTOGRAPHICALLY-DERIVED SPHERICAL IMAGES TO THE OCULUS RIFT is what I am now examining.
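At its core, feeding a 2:1 equirectangular panorama to a head-tracked display reduces to one lookup: for each output pixel, rotate its ray by the current head orientation and sample the panorama at the resulting longitude/latitude. A rough CPU sketch follows (in practice this runs on the GPU, e.g. as a texture-mapped sphere; the field of view and output size are arbitrary choices):

```python
import numpy as np

def render_view(pano, yaw, pitch, fov_deg=90.0, out_w=640, out_h=640):
    """Sample a pinhole view from a 2:1 equirectangular panorama.
    yaw/pitch give the head orientation in radians."""
    h, w = pano.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2.0)  # focal length, px
    x = np.arange(out_w) - out_w / 2.0
    y = out_h / 2.0 - np.arange(out_h)
    x, y = np.meshgrid(x, y)
    dirs = np.stack([x, y, np.full_like(x, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    dx, dy, dz = dirs[..., 0], dirs[..., 1], dirs[..., 2]
    # Rotate each ray: pitch about the x axis, then yaw about the y axis.
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    dy, dz = dy * cp - dz * sp, dy * sp + dz * cp
    dx, dz = dx * cy + dz * sy, -dx * sy + dz * cy
    # Ray direction -> (lon, lat) -> pixel in the panorama.
    lon = np.arctan2(dx, dz)
    lat = np.arcsin(np.clip(dy, -1.0, 1.0))
    u = ((lon / (2.0 * np.pi) + 0.5) * w).astype(int) % w
    v = ((0.5 - lat / np.pi) * h).astype(int).clip(0, h - 1)
    return pano[v, u]

# One call per eye, sampling the left and right panoramas respectively,
# yields the stereo pair for the head-mounted display.
```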
(For me, combining a multitude of subject-related images to create a 3D computer-graphics space [which is an extraordinary activity and accomplishment!] is a different task from simply displaying spherical stereoscopic images viewable on head-tracked goggles.)
For game makers my task may seem trivial: feeding photographically obtained left- and right-eye images to the Oculus Rift to present a static stereo image, a 3D stereo pair of spherical panoramas suitable for head-tracked display.
For photographers or movie makers, there may be something else to consider: the differences between a spherical image captured in the standard photographic way described above and stitched together, versus one captured in a non-standard way, using a rotating slit camera (on a stereo bar) with a fisheye lens and a spherical linear sensor array to obtain the same images WITHOUT stitching. Such slit cameras do exist. With such a camera you could make, for comparison, three non-stitched images matching the three stitched images already described, which would allow you to look for artifacts resulting from stitching. This concerns me because I would expect stitching errors to be a problem, though not a show-stopper.
(Concerning Google Street View: those panoramas are shot with a different, curved-mirror capture method. I look forward to including this approach as well, but I am currently using available equipment that can produce high-resolution panoramas suitable for movies and whatever.)
Bruce Lane