Forum Discussion
AGr_
13 years ago · Honored Guest
3D Streetview?
I am interested in trying to capture a static image sphere using 2 cameras in order to create a Street View-like stereoscopic 3D experience, where the user wearing the Rift can pan around the image by ...
Bruce
13 years ago · Honored Guest
Shooting and stitching spherical panoramic images captured from the nodal position is standard practice among panoramic photographers. To extend this method to stereoscopic panoramas, I have moved the camera from the nodal position to the left and to the right, as if the camera were mounted on a stereo bar. The bar rotates about the nodal position during each panoramic sequence of shots. (This is equivalent to viewing a scene where you look all around while your head turns about a point located between your two eyes.) For my Oculus Rift tests this provides three sequences of images suitable for stitching into three stereoscopically related spherical panoramas.
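To make the capture geometry concrete, here is a minimal sketch (my own illustration, not Bruce's actual rig; the shot count and the 65 mm total separation are assumptions) of the left/nodal/right camera positions at each rotation angle about the nodal point:

```python
import math

def camera_positions(num_shots=12, half_sep_m=0.0325):
    """Left / nodal / right camera positions for each shot around the
    nodal point, assuming a 65 mm total separation (half = 32.5 mm).
    Returns (x, y) tuples in metres in the horizontal plane."""
    shots = []
    for i in range(num_shots):
        theta = 2.0 * math.pi * i / num_shots           # viewing direction
        left = (-math.sin(theta), math.cos(theta))      # 90 deg CCW of view
        shots.append({
            "angle_deg": math.degrees(theta),
            "left":  (half_sep_m * left[0],  half_sep_m * left[1]),
            "nodal": (0.0, 0.0),                        # bar pivots here
            "right": (-half_sep_m * left[0], -half_sep_m * left[1]),
        })
    return shots

if __name__ == "__main__":
    for shot in camera_positions(num_shots=4):
        print(shot)
```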
When stitched together, the three spherical panoramas (left, nodal/center, and right) are each encoded in the standard 2:1 equirectangular format corresponding to 360-degree by 180-degree coverage. This 2:1 data must then be fed to the left-eye and right-eye displays of the Oculus Rift. These three stereoscopically related spherical panoramas can also show the effects of the different lens separations, which should be of interest.
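For readers unfamiliar with the 2:1 format: each pixel corresponds directly to a (longitude, latitude) direction on the sphere. A minimal sketch of that mapping and its inverse (the function names and axis conventions are mine):

```python
import math

def equirect_to_direction(u, v, width, height):
    """Map pixel (u, v) in a 2:1 equirectangular image (width == 2 * height)
    to a unit direction vector (x, y, z), +y up, +z forward."""
    lon = (u / width) * 2.0 * math.pi - math.pi     # -pi .. +pi
    lat = math.pi / 2.0 - (v / height) * math.pi    # +pi/2 (top) .. -pi/2
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

def direction_to_equirect(x, y, z, width, height):
    """Inverse mapping: unit direction -> pixel coordinates.
    A real sampler would wrap u at the image seam."""
    lon = math.atan2(x, z)
    lat = math.asin(max(-1.0, min(1.0, y)))
    u = (lon + math.pi) / (2.0 * math.pi) * width
    v = (math.pi / 2.0 - lat) / math.pi * height
    return (u, v)
```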
HOW TO FEED PHOTOGRAPHICALLY-DERIVED SPHERICAL IMAGES TO THE OCULUS RIFT is what I am now examining.
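I'm not claiming this is Bruce's pipeline, but the standard technique is to treat each eye's panorama as a texture on a large sphere centered on that eye: for every output pixel, rotate the pixel's ray by the head orientation and look the resulting direction up in the equirectangular image. A software sketch with NumPy (the resolution, field of view, and nearest-neighbour lookup are placeholder simplifications; a real Rift renderer would also apply the SDK's lens-distortion pass):

```python
import numpy as np

def render_eye(pano, yaw, pitch, out_w=640, out_h=800, fov_deg=90.0):
    """Render one eye's view from an equirectangular panorama.
    pano: H x W x 3 uint8 array with W == 2 * H.
    yaw, pitch: head orientation in radians (roll omitted for brevity)."""
    h, w = pano.shape[:2]
    f = (out_w / 2.0) / np.tan(np.radians(fov_deg) / 2.0)  # focal length, px
    xs = np.arange(out_w) - out_w / 2.0
    ys = np.arange(out_h) - out_h / 2.0
    px, py = np.meshgrid(xs, ys)
    # Per-pixel ray directions in eye space (+z forward, +y up), normalized.
    d = np.stack([px, -py, np.full_like(px, f)], axis=-1)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)
    # Rotate rays by head orientation: pitch about x first, then yaw about y.
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    d = d @ (Ry @ Rx).T
    # Direction -> equirectangular pixel, same convention as above.
    lon = np.arctan2(d[..., 0], d[..., 2])
    lat = np.arcsin(np.clip(d[..., 1], -1.0, 1.0))
    u = ((lon + np.pi) / (2.0 * np.pi) * w).astype(int) % w
    v = ((np.pi / 2.0 - lat) / np.pi * h).astype(int).clip(0, h - 1)
    return pano[v, u]
```

Calling render_eye once with the left panorama and once with the right panorama, using the same head orientation, yields the stereo pair for the two halves of the Rift display.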
(For me, combining a multitude of subject-related images to create a 3D computer-graphics space [an extraordinary activity and accomplishment!] is a different task from simply displaying spherical stereoscopic images on head-tracked goggles.)
To game makers my task may seem trivial: feeding photographically obtained left-eye and right-eye images to the Oculus Rift to present a static stereo image, a 3D pair of spherical panoramas suitable for head-tracked display.
For photographers or movie makers, there may be something else to consider: the differences between a spherical image captured and stitched in the standard photographic way described above, versus one captured in a non-standard way, using a rotating slit camera (on a stereo bar) with a fisheye lens and a spherical, linear sensor array to obtain the same images WITHOUT stitching. Such slit cameras do exist. With such a camera you could make, for comparison, three non-stitched images matching the three stitched images already described, which would let you look for artifacts resulting from stitching. This concerns me because I expect stitching errors to be a problem, though not a show-stopper.
(Concerning Google Street View: those images are shot with a different, curved-mirror capture method. I look forward to including that approach as well, but for now I am using available equipment that can produce high-resolution panoramas suitable for movies and other uses.)
Bruce Lane