Forum Discussion
chapdelv
11 years ago · Honored Guest
360 S3D camera for the OculusRift
Greetings!
Here is a bit of info on the 360 S3D camera setup that I am using to capture movies for the OculusRift. The setup is composed of 3 cameras each with a fisheye lens. Each camera captures 24 fps (12-bit colors) at 2k x 2k pixel resolution. The cameras are synchronized within 0.1 ms.
The big advantage of this design is that it enables a real-time image-stitching method based on epipolar geometry, which minimizes misalignments due to parallax. Points along the epipolar lines align perfectly in the horizontal direction regardless of their depth. Small vertical misalignments remain, but they can be removed for a chosen depth.
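For a sense of scale, the raw data rate implied by those capture specs can be sketched in a few lines. This assumes "2k x 2k" means 2048 x 2048 pixels and ignores any on-camera compression, neither of which is stated in the thread:

```python
# Back-of-the-envelope raw data rate for the 3-camera rig described above.
# Assumption: "2k x 2k" = 2048 x 2048 pixels; no compression is considered.
CAMERAS = 3
WIDTH = HEIGHT = 2048
BITS_PER_PIXEL = 12
FPS = 24

bits_per_frame = WIDTH * HEIGHT * BITS_PER_PIXEL   # per camera
bits_per_second = bits_per_frame * FPS * CAMERAS   # whole rig
print(f"{bits_per_second / 1e9:.2f} Gbit/s raw")   # ~3.62 Gbit/s
```

So the rig would need to move on the order of 3.6 Gbit/s uncompressed, which explains why synchronization and storage bandwidth matter for a setup like this.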
You can see a sequence I shot with the camera here:
http://www.iximage.com/street-view-sequence/?lang=en
You can also find more info on the stitching method here:
http://nasv3d.iro.umontreal.ca/chapdelv/pdf/360-s3d-camera.pdf
or the camera itself here:
http://www.iximage.com/?lang=en#technology
Thanks for your feedback.
Cheers.
-Vincent
36 Replies
- mediavr (Protege): This is really interesting! It must look great in a dome with 3D projection. It works well in VR Player. I made a post about it on the Oculus subreddit:
http://www.reddit.com/r/oculus/comments/296zqx/new_hemispherical_video_capture_technology_iximage/
Interesting to see that you are using the Fujinon C-mount fisheyes
http://www.fujifilmusa.com/products/optical_devices/security/fish-eye/index.html
Did you try any others?
- j1vvy (Honored Guest): Vincent,
It has been several years, but I still remember some of your earlier work on 360° video that you showed in the multi-projector setup.
This looks really good. The seams are not obvious at all.
Does this only work with the cameras pointing parallel to each other, or can they be tilted to get more vertical FoV?
Reading the paper now.
Jim - kclaiHonored GuestVery interesting! the seams were hard to find except the overhead wirings, just the color was a bit washed out. Shall take my gopro+fisheye for another experiment like this in the weekend
http://ge.tt/3GcAnam1/v/0
Cheers,
KC
- chapdelv (Honored Guest): Hi Jim,
I haven't yet figured out a good way to tilt the cameras when using only 3 of them. But I have tested other lenses with a wider FOV. For instance, I tested Opteka Vortex lenses, which have about a 220-degree FOV, but they are not small enough for proper stereo (i.e., the baseline is too large). If only I could find 220 degree C-mount fisheye lenses for 1" sensors...
-Vincent
- mediavr (Protege):
If only I could find 220 degree C-mount fisheye lenses for 1" sensors...
Lensation makes a 240-degree M12 lens -- so a GoPro version of your rig is conceivable:
Sensing Area: 1/3"
• Focal Length: 0.981mm
• Back Focal Length: 5.73mm
• F/NO: 2.8
• Iris: Fixed
• Image Circle: 3.669mm
• Lens Construction:
9 components, 8 groups
• Field Angle(Horizontal): 240º
• Min. Object Distance: 0.05mm
• Weight: 94g
• Day & Night Lens
http://www.lensation.de/downloads/LS_CAT_2013.pdf (page 12)
Also, there are 2x teleconverters available for C-mount, e.g. http://www.bhphotovideo.com/c/product/889274-REG/computar_ex2c_Extender_2X_for_C_Mount.html -- making an image circle with the Lensation lens of 7.3 mm.
Since the height of the 1" sensor is 8.8mm this means you would not be wasting too much resolution
http://photo.stackexchange.com/questions/24952/why-is-a-1-sensor-actually-13-2-%C3%97-8-8mm
- chapdelv (Honored Guest): mediavr,
The 240 degree Lensation lens is pretty impressive! Thanks for the heads-up.
The sensor in my cameras is actually square (11.26 mm x 11.26 mm), so an image circle of 7.3 mm is a bit small but I'll try to see how I can make this work.
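To see how much of that square sensor the extended image circle would actually cover, here is a quick back-of-the-envelope check. It assumes the 2x extender simply doubles the 3.669 mm image circle, which ignores the extender's optical side effects (focal length and f-number also change):

```python
import math

# Rough check of the image-circle math from this thread.
# Assumption: a 2x C-mount extender doubles the Lensation lens's
# 3.669 mm image circle; real-world results will differ somewhat.
image_circle = 3.669 * 2   # mm, after the 2x extender
sensor_side = 11.26        # mm, side of the square sensor mentioned above

circle_area = math.pi * (image_circle / 2) ** 2
used_fraction = circle_area / sensor_side ** 2
print(f"image circle: {image_circle:.2f} mm")
print(f"fraction of the square sensor covered: {used_fraction:.0%}")
```

Under those assumptions, only about a third of the sensor area would be used, which is consistent with the "a bit small" concern above.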
-Vincent
- Nukemarine (Rising Star): Just looked at it through VR Player on Full Dome with 210 coverage. I'm going to be critical, but I like what you're doing. The distortion at eyeline just ruins it. It does no good to have S3D when the result is worse than simply showing a much cleaner mono video with fewer stitching artifacts. The company is on the right track, but three cameras are apparently not enough for this.
Though tilting the cameras from 30 to 60 degrees should help, I wonder how much it could help. As you said, the zenith has no S3D. With tilting, most of the artifacts will be behind the viewer and only noticeable when they look far left or right. But tilting also removes any benefit of S3D, since the baseline is no longer perpendicular to the eyes. Of course, you can keep the cameras' FOV on the same plane, but then you're just making what other companies make.
Hopefully the company keeps at it, but this does not seem like a product I would recommend.
*Edit - as noted in later replies, changing VR Player to display slices and stacks of 64 or higher removed the distortion issue.
- chapdelv (Honored Guest):
"nukemarine" wrote:
The distortion at eyeline just ruins it.
Thanks, Nukemarine, for the tough love. :) Ignoring the warping introduced by using 210 as the FOV instead of 190, could you be a bit more specific about the "distortion at the eyeline" you are seeing? Do you mean perceived depth distortions? If so, you can take a look at the link below, which shows that using only 3 cameras introduces depth distortions, but that these are almost completely removed when using 5 cameras (see fig. 9):
http://www.iro.umontreal.ca/~chapdelv/pdf/couture-iccp2013.pdf
So if one has enough money to use 5 cameras, then depth distortions should be very small.
You also mention that mono videos are cleaner, and you might prefer that, but sometimes you need S3D, whether for viewing purposes or for depth analysis in robotic applications. Do you mean cleaner because of the higher resolution or because of fewer stitching artifacts? If the latter, I would be grateful if you could point out a specific time or location in the video where these were bothering you.
Thanks.
-Vincent - NukemarineRising StarI didn't notice you had a blog post on settings for the VR Player, but the difference between 190 and 210 are minimal and only affect stretching. That's not the distortion I was talking about.
The distortions at eyeline are the stitching distortions that look like waves about 15 degrees apart running over everything passing by on the street. I don't notice them as much with forward-facing movement, but that's only because the parallax between objects is decreased. As you said, nothing can be done about it, since it's a three-camera system.
The paper you showed was good at demonstrating how even five cameras approach the ideal rotating parallel cameras shown in figure 2. I would argue, though, that the ideal presented is not in tune with human vision. Human eyes sweep on a 20 cm radius arc, about 65 mm apart. That should slightly alter the formula used, but the general idea of the paper remains the same. If you look at figure 6, you would add a center circle representing the sweep of the head, which the two parallel cameras would travel across instead of pivoting around a point as they do now.
This may not seem like an important point, but if you use the Rift, that's a big reason they used a neck model for head tilting. The eyes don't normally pivot around the nose bridge. If you wear the Rift and try to pivot your eyes around your nose, it'll seem like everything moves slightly horizontally.
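The head-pivot point above can be made concrete with a toy 2D model. This sketch uses the 20 cm arc radius and 65 mm eye spacing quoted in this reply; the function names and the simplified geometry are my own, not from the paper:

```python
import math

# Toy 2D model of the two pivot assumptions discussed above.
# The 0.20 m radius and 0.065 m eye spacing are the figures quoted
# in this thread, not measured values.
IPD = 0.065         # m, interpupillary distance
NECK_RADIUS = 0.20  # m, distance from the neck pivot to the eye baseline

def eyes_about_midpoint(yaw_rad):
    """Eyes pivot around their own midpoint (midpoint does not move)."""
    dx = (IPD / 2) * math.cos(yaw_rad)
    dz = (IPD / 2) * math.sin(yaw_rad)
    return (-dx, -dz), (dx, dz)

def eyes_about_neck(yaw_rad):
    """The eye midpoint travels on an arc around a pivot behind the head."""
    cx = NECK_RADIUS * math.sin(yaw_rad)
    cz = NECK_RADIUS * (math.cos(yaw_rad) - 1)  # chosen so yaw=0 gives (0, 0)
    (lx, lz), (rx, rz) = eyes_about_midpoint(yaw_rad)
    return (cx + lx, cz + lz), (cx + rx, cz + rz)

yaw = math.radians(30)
left, right = eyes_about_neck(yaw)
mid_x = (left[0] + right[0]) / 2
print(f"midpoint sideways translation at 30 deg of yaw: {mid_x * 1000:.0f} mm")  # ~100 mm
```

Even at a modest 30 degrees of yaw, the neck-pivot model translates the eye midpoint by roughly 100 mm, which a capture model that pivots cameras about a fixed point cannot reproduce.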
Thanks for the paper, by the way. I like how it accesses all viewing angles of each camera. That's very clever. The downside, I assume, is that you're not using the best part of the camera (the center view) to capture the desired content, which is on the horizon. On the other hand, two systems could capture a full 360 sphere, assuming one set can work upside down. Also, with more than 3 cameras, such as the 4- and 5-camera layouts in figure 9, I assume you could angle the cameras further down. This allows more vertical coverage at a slight cost in occlusion (the "bowtie" gets lopsided).
As for mono being cleaner, I just mean that non-S3D footage is easier to stitch. Though really, it's that those rigs have more cameras to reduce stitching artifacts. It's not a fair comparison, since I think this method could be workable.
- chapdelv (Honored Guest):
"mediavr" wrote:
Lensation make a 240 degree M12 lens -- so a Gopro version of your rig is conceivable --
Unfortunately, the guys at Lensation just replied that they no longer sell this 240-degree lens... :cry:
-Vincent