Forum Discussion
HD148478
11 years ago · Honored Guest
Dual GoPros and distance between lenses
Hi there,
I'd like to capture 3D videos, but before investing in two GoPros and their dual mount system, I would like to have one question answered:
The distance between the lenses in that system (http://shop.gopro.com/EMEA/accessories/dual-hero-system/AHD3D-301.html) seems to be very short (compared to the average distance between human eyes). Is that really enough space to create a 3D effect?
I was also wondering if there's a way to calculate how far the effect extends before a distant object "becomes 2D". I assume that the greater the distance between the lenses, the farther the 3D effect can reach, but this must have limitations, right?
Thank you.
12 Replies
- HD148478 (Honored Guest): Well, I kind of found the answer:
http://www.shapeways.com/model/1658987/gopro-hero-3-3d-system-wider-lens-separation.html?modelId=1658987&materialId=6
Looks like it's better to separate the lenses more when you shoot from 10 feet away or further. This is especially important for drone footage (the official GoPro Dual Hero doesn't deliver the best results in this context because the lenses are too close for such distant recordings).
- Fredz (Explorer): If you want to shoot for the Rift you should have an orthostereoscopic configuration, i.e. the interaxial distance between the cameras should be equal to the IPD of the viewer. The default interaxial is too small with the GoPro mount, but the Shapeways model's is too big. The average IPD is around 65 mm, so that's what you should aim for.
- Harolddd (Member): The ideal separation between lenses depends on the distance to the subject. Stereo photographers use a rule of thumb of 1 unit of separation for every 30 units to the closest object of interest. If you use a lens separation equal to the average human eye separation, you can't shoot much closer than about 6.25 feet. If you want depth for objects at a distance, you increase the separation following the 1:30 rule.
You might notice that many recent stereoscopic cameras have much closer lens spacing, like the HTC Evo 3D smartphone at about 32 mm. That is deliberate: the makers know most users will want to shoot closer than 6.25 feet (e.g. selfies), so they use lenses at about half the human eye separation. The downside is a reduced depth effect at normal distances or for group photos. I have an HTC Evo 3D and use it just for close-ups where normal lens separation would be too much.
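The 1:30 rule of thumb above can be sketched in a few lines. This is only an illustration: the function names and metric units are my own, and only the 30:1 ratio and the ~65 mm eye separation come from the post.

```python
# Hypothetical helpers illustrating the stereo photographers' 1:30 rule
# described above. Only the 30:1 ratio and ~65 mm eye separation are from
# the thread; names and units are assumptions for illustration.

def stereo_base_for(nearest_subject_m: float, ratio: float = 30.0) -> float:
    """Suggested lens separation (m) for a given nearest-subject distance."""
    return nearest_subject_m / ratio

def nearest_subject_for(base_m: float, ratio: float = 30.0) -> float:
    """Closest comfortable subject distance (m) for a given lens separation."""
    return base_m * ratio

# A human-eye baseline (~65 mm) implies a closest subject of ~1.95 m (~6.4 ft),
# in line with the "about 6.25 feet" figure in the post:
print(nearest_subject_for(0.065))  # ~1.95
# A drone shot with the nearest subject at 60 m suggests a ~2 m baseline:
print(stereo_base_for(60.0))       # 2.0
```

The same functions also reproduce the ~32 mm phone-camera spacing: `nearest_subject_for(0.032)` gives roughly 0.96 m, i.e. arm's length for selfies.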
If you can control the interaxial distance, you can adjust it based on the scene. If you can't make a dual-camera bracket that allows a variable distance, then fix the cameras at about 65 mm (average human eye separation), because that will give the best results in most scenes. Of course, if you're shooting from a drone at 200 feet, that would give very little depth effect.
- Fredz (Explorer):
"Harolddd" wrote:
The ideal separation between lenses depends on the distance to subject.
I also own an HTC Evo 3D and a Sony Bloggie 3D, and I agree that was the case for viewing stereoscopic images on 3D displays, but with VR it's different. You can read about why orthostereoscopy is needed for virtual reality here: http://www.leepvr.com/spie1990.php
- Gusev (Honored Guest): I want to achieve the same thing. In another topic (viewtopic.php?f=33&t=5622) there is a lot of information about S3D 180-degree setups.
@OP: Have you made a decision?
- Fredz (Explorer):
"HD148478" wrote:
I was also wondering if there's a way to calculate how far the effect is going to be present, before a distant object "becomes 2D". I assume the more distance between lenses, the farthest the 3D effect can reach but this must have limitations right?
Forgot to answer that one.
Stereopsis is the dominant cue for depth perception up to ~10m when there is motion parallax and up to ~65m when there is no motion parallax.
From http://mallorea.student.utwente.nl/~jorg/Publications/AIAC13_2008JorgEntzinger.pdf :
"It was found that stereopsis can be used as a depth cue up to several hundred meters. However, in situations where monocular cues are available, its practical limit should be expected to be 20–65m and stereopsis is very unlikely to be of significant importance beyond 100m. These limits imply that for small aircraft, helicopters and ground operations stereopsis will be beneficial. For simulation or human pilot modeling regarding the landing of mid-sized jet aircraft stereopsis can be ignored."
From "Depth perception in computer graphics": [chart of depth-cue effectiveness versus viewing distance; image not preserved in this archive]
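The quoted limits can be sanity-checked with a rough geometric sketch. The assumptions here are mine, not from the thread: a small-angle approximation and a stereoacuity threshold of ~20 arcseconds, a commonly cited laboratory figure.

```python
import math

# Rough geometric sketch of the stereopsis range limit discussed above.
# Assumptions (not from the thread): small-angle approximation and a
# stereoacuity threshold of ~20 arcseconds, a commonly cited lab figure.

def max_stereo_distance_m(baseline_m: float, acuity_arcsec: float = 20.0) -> float:
    """Distance beyond which binocular disparity falls under the acuity threshold."""
    acuity_rad = math.radians(acuity_arcsec / 3600.0)
    # The disparity of a point at distance d, relative to infinity, is roughly
    # baseline / d radians, so the limit is where baseline / d equals the threshold.
    return baseline_m / acuity_rad

# With a 65 mm baseline this gives roughly 670 m -- "several hundred meters",
# matching the quoted paper. The practical ~20-65 m limit is much lower because
# monocular cues dominate long before this geometric ceiling is reached.
print(round(max_stereo_distance_m(0.065)))  # ~670
```

Doubling the baseline doubles the geometric limit, which is the intuition behind hyperstereo rigs for distant subjects.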
- mediavr (Protege): Consider dual parallel, level GoPros with lenses that record 180 degrees. If you convert those fisheye circles to equirectangular 180×180 squares, they look like this: https://www.youtube.com/watch?v=GByN_ru0eT4 (there is a little bit of masking by the camera body causing the rounded corners). Movies in this form have good horizontal alignment for the features in the scene. But if you view them in an interactive viewer, you are going to get vertical disparities (and exaggerated horizontal disparities) in wide-angle views to either side, especially if the views are tilted, i.e. the corners will have a defective stereo impression. This is an intractable problem with regular mappings of panoramic views. With this movie I have reduced the disparity in the corner areas relative to the original equirectangular images, so the depth impression is lessened, but the errors are also smaller, and the net result is greater 3D comfort. I reduced the disparity by using viewpoint optimization as a second optimization step in PTGui when aligning the images; in certain modes it can act as a "rubber sheeting" warping tool to reduce disparities in the corners.
So what I am saying is: it is all very well to maintain that you need a separation that provides an orthoscopic view for virtual reality use, but with parallel fisheye capture that does not apply, and you need less than the orthoscopically prescribed separation. In this case I think the separation was about 6 cm.
If you want to view it in LiveViewRift etc., here it is:
http://www.dropbox.com/s/t9acnrimotqk94m/newtownfestequisbs180.mp4
- Harolddd (Member): Fredz, with all due respect, you're just wrong. If what you say were true, then hyper- and hypo-stereo (macro) VR would be impossible, and it is not, in spite of the chart you took from a dissertation. Your referenced article and chart both assume a human being in a natural environment. That is not a constraint for us. You can take the chart and multiply everything by 10, and the distance to convergence (~6.25 feet times 10) will remain the same proportional distance. Ortho is a myth. You could have cameras miles apart.
This is easy enough to settle. Just render two stereo image pairs, as described in this thread:
viewtopic.php?f=33&t=17444
Render one pair "ortho" (whatever that is depends on your arbitrary choice of scale), then another using twice the "ortho" separation, but not closer than 1:30 to objects. View them both. If you don't violate the 1:30 ratio, they will be just as easy to view. The primary difference is that everything will seem larger when the virtual cameras are closer together and smaller when they are further apart. If using parallel cameras, you will need to shift the images to keep infinity from forcing the eyes to diverge.
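The image shift mentioned for parallel cameras follows from simple pinhole-camera geometry. A minimal sketch, assuming an ideal pinhole model; the focal length and distances below are hypothetical examples, not GoPro specifications.

```python
# Sketch of the screen-parallax math behind the image shift mentioned above,
# assuming an ideal pinhole camera. The focal length and distances are
# hypothetical illustration values, not measured GoPro parameters.

def disparity_px(baseline_m: float, distance_m: float, focal_px: float) -> float:
    """Horizontal disparity (pixels) between parallel cameras for a point at distance_m."""
    return focal_px * baseline_m / distance_m

# With parallel cameras, a point at infinity has zero disparity, so near objects
# all appear in front of the zero-parallax plane. Shifting the images sets which
# distance lands on that plane: the required shift equals the disparity of the
# chosen convergence distance, and it must be kept small enough that the
# resulting uncrossed parallax at infinity stays below the viewer's IPD,
# otherwise the eyes are forced to diverge, as the post warns.
f_px = 1000.0  # hypothetical focal length in pixels
b = 0.065      # 65 mm baseline
print(disparity_px(b, 2.0, f_px))    # near subject at 2 m: 32.5 px
print(disparity_px(b, 100.0, f_px))  # distant subject: 0.65 px
```

Note how disparity falls off as 1/distance: by 100 m it is under a pixel here, which is the "becomes 2D" effect the OP asked about.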
Then again, I could be wrong, but I don't think so.
- Fredz (Explorer):
"Harolddd" wrote:
If what you say is true then hyper and hypo (macro) VR would be impossible, and it is not.
I didn't say hyperstereo/hypostereo was not possible; it clearly is. I said that orthostereoscopy is necessary to feel immersed, so that every depth cue corresponds to what you see in real life. If you want a doll-house or giant effect, that's not a problem and it can be a sound artistic choice, but the immersion will be diminished.
- Harolddd (Member): Okay, I guess we don't really disagree. 8-) You are right that video using a hyperstereo base would seem unnatural and would be less immersive, because it would draw attention to the difference in scale.