
360 degree panoramic stereoscopic video discussion

cubytes
Protege
Hey Forum,

I am extremely excited about the possibilities of "360 degree panoramic stereoscopic video (360PSV)" for cinematic VR experiences.

However, there are some limitations to consider...

limitations such as:
-ginormous file sizes and expensive video capture gear/rigs
-video stitching works (sort of), but stitching creates noticeable artifacts
-DK2-era positional tracking will easily break the illusion during lean-ins and/or head tilts (ear to shoulder)

Additionally, for a true cinematic VR-type experience...

one would also need to:
-design and build an adequate stage/set
-hire experienced lighting designers/technicians, directors of photography, et al.
-hire actors, makeup artists, costume designers, et al.
-and of course hire a director and a writer

Thus, approaching cinematic VR via 360PSV is kind of like creating a movie/play hybrid (so to speak).

With all these limitations and expenses to consider, the question is: "why bother with capturing and stitching 360PSV at all when you can just build it with CG?"

I suppose for me it's about creating an experience with a very high level of immersion. I am not content with just tricking the low-level lizard brain into thinking "whoa, I am here." I want to also trick the high-level brain into thinking "OK, lizard brain is convinced that we are here right now. This is definitely not computer generated. I don't feel anything yet... OMG, shizz just got real."

Also, a cinematic VR experience is perfect for "seated VR" and would require no input device other than the HMD itself.

This is why I am bound and determined to push cinematic VR experiences forward as much as I possibly can. I also think this medium will appeal to and resonate with a much broader audience than gaming.

While these limitations aren't a deal breaker for me, I can't help but dream of ways to overcome them.

Feel free to discuss anything related to 360PSV here!
9 REPLIES

Wolf7115
Honored Guest
I have a feeling that most VR movies are going to be CG. Take Avatar, for example; that movie might be pretty good if it were made for VR.

cubytes
Protege
This isn't my area of expertise...

My idea is to utilize Project Tango and a custom fork of UE4 to, in essence, cross-fade between 360PSV and CG in real time based on positional tracking and other factors...

basically:
-scan the set with Project Tango prior to capture
-add Project Tango devices to the capture rig itself, rotating slowly during active capture
-have actors wear hidden mocap built into their costumes?
-photoscan the actors

Edit: would it be a good idea to add a motion-detection sensor to the capture rig? If there is NO motion in the scene, have the Tangos rotate; if there IS motion, halt Tango rotation? idk...

then:
-import the meshes into Maya, have VFX artists enhance them, and animate the scene entirely in CG
-stitch the 360PSV together as adequately as possible

Then have a custom fork of UE4 built that allows one to dynamically cross-fade between the two sources in real time based on positional tracking.

So essentially, when the observer looks down, tilts their head, or leans in, it cross-fades from 360PSV to CG, and when the observer sits back up straight it cross-fades back to 360PSV in real time...

So it's like a crazy 360PSV/CG hybrid now.
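In pseudocode, the blend logic might look something like this (a minimal sketch; the types, names, and radii are all made up for illustration, not any real engine API):

```cpp
// Sketch of the crossfade idea: blend from 360PSV to CG as the observer's
// head drifts away from the point where the rig captured the video.
// Vector3 and the radii below are stand-ins, not a real engine API.
#include <algorithm>
#include <cmath>

struct Vector3 { float x, y, z; };

// Distance of the tracked head from the rig's capture position.
float HeadOffsetMeters(const Vector3& head, const Vector3& captureOrigin) {
    float dx = head.x - captureOrigin.x;
    float dy = head.y - captureOrigin.y;
    float dz = head.z - captureOrigin.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Returns 0.0 = pure 360PSV, 1.0 = pure CG.
// Inside innerRadius the video still holds up; past outerRadius we are fully CG.
float CrossfadeAlpha(float offsetMeters,
                     float innerRadius = 0.05f,   // ~5 cm of slack
                     float outerRadius = 0.25f) { // fully CG by ~25 cm
    float t = (offsetMeters - innerRadius) / (outerRadius - innerRadius);
    return std::clamp(t, 0.0f, 1.0f);
}
```

Each frame you'd feed that alpha into whatever compositing pass blends the video sphere with the CG scene; head tilt (roll) could feed into the same alpha just as easily.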

Edit: also, the cross-fade target doesn't have to be an exact CG replica; it could be an artistic tool as well. For instance, I was wanting to capture some exteriors in Butterfly Dream. So in that act, when the observer looks away from the panoramic video billboard that wraps around them, by tilting their head (ear to shoulder), leaning in, or just looking down to see their avatar, it will cross-fade to a CG scene that has the likeness of the landscape but with some trippy 360 music-visualizer VFX instead of an exact CG replica of the whole exterior scene.

But I'm just thinking out loud, so it's whatever...

cubytes
Protege
"Wolf7115" wrote:
I have a feeling that most VR movies are going to be CG. Take Avatar for example, that movie might be pretty good if it were made for VR.


Yeah, I can definitely see a lot of cinematic VR experiences being created entirely in CG. But personally, I kind of want to leverage the photorealism of 360PSV to greatly enhance presence/immersion.

But then again, I'm just a writer/enthusiast...

If it comes down to the choice between doing Butterfly Dream (a screenplay I am writing) entirely in CG or not doing it at all, I think I would choose the former.

cubytes
Protege
Questions...

What exactly is 360 panoramic stereoscopic video, anyway?

How is it different from carefully arranging a bunch of standard cameras in a 360 panoramic capture rig?

What's the difference between a GoPro Hero and a regular standard or HD camera?

What kind of camera setup does Project Tango use? Does it just use a regular HD camera combined with awesome computer vision software and custom hardware?

How does this work in VR? Do you just wrap a video billboard or virtual display around the observer and call it a day? One for each eye? And when the brain fuses the two images into one, voilà, stereoscopic 3D? Voilà, presence?

I don't want the video to feel 3D and pop out of the screen; that would be like watching a 3D movie on a theater screen that wraps around the viewer, like IMAX on steroids. I want the observer to feel truly present, much like being in a CG-rendered scene with virtual cameras...

Could you use negative and positive parallax to give the video billboard a depth or pop-out effect, mimicking what leaning forwards and backwards would look like, but on a virtual screen instead of a fully 3D-rendered scene?

How about taking the video and an exact CG replica scene and cross-fading from one to the other depending on user orientation and positional tracking data? Would it even be worth it? If you use Project Tango, you wouldn't have to model everything from scratch...
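For what it's worth, my understanding is that the "wrap a billboard around the observer" part boils down to a lat/long lookup into an equirectangular frame, done once per eye. A rough sketch of the mapping (names are made up, and conventions vary by engine):

```cpp
// Sketch: map a view direction to UV coordinates in an equirectangular
// 360 video frame. For stereo, do this lookup twice: once into the
// left-eye video and once into the right-eye video.
#include <cmath>

struct Vec3 { float x, y, z; };
struct UV   { float u, v; };

// dir is a normalized view direction in the video sphere's local space.
UV EquirectangularUV(const Vec3& dir) {
    const float PI = 3.14159265358979f;
    float longitude = std::atan2(dir.x, -dir.z); // -PI..PI around the sphere
    float latitude  = std::asin(dir.y);          // -PI/2..PI/2 up/down
    return { 0.5f + longitude / (2.0f * PI),     // wrap horizontally
             0.5f - latitude / PI };             // poles map to top/bottom
}
```

If that's right, the stereo effect is just the brain fusing the two per-eye lookups, which gives depth from disparity but no parallax when you lean, which is the whole problem.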

cubytes
Protege
One of the ideas I have for Butterfly Dream is to use live video as the "real world" and CG as the "dreamworld," some subconscious space where mind-blowing stuff occurs. By fading from live video to fully CG and then back to live video, the experience would kind of ebb and flow.

Then I thought: what if you could take that same ebb-and-flow idea and make it real time, to further increase immersion/perception?

Live video and a CG replica running in parallel, side by side, cross-fading between the two moment to moment via positional tracking in real time...

There are several reasons I want to do this:
-to be able to project an avatar into the experience
-so positional tracking doesn't completely break the illusion
-so actors would be able to interact with the observer's avatar

I suppose one could also use the same idea not as a complete solution but as a supplemental one...

For instance, instead of cross-fading between live video and an exact CG replica, one could just cross-fade from live video to a CG title or "pause" screen that pauses the live-action video until the observer returns to an adequate viewing position.
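The supplemental version seems almost trivial next to the full replica; a sketch of the guard logic (hypothetical names again, nothing engine-specific):

```cpp
// Sketch of the "pause screen" fallback: if the head strays too far from
// the capture point, fade to a CG hold screen and pause the video until
// the observer returns. Thresholds and names are illustrative only.
enum class ViewState { Playing, Paused };

struct PlaybackGuard {
    float maxOffsetMeters = 0.15f; // how far the observer may lean
    ViewState state = ViewState::Playing;

    // Call once per frame with the head's distance from the capture origin.
    ViewState Update(float headOffsetMeters) {
        if (state == ViewState::Playing && headOffsetMeters > maxOffsetMeters) {
            state = ViewState::Paused;   // trigger fade to the pause screen
        } else if (state == ViewState::Paused &&
                   headOffsetMeters <= maxOffsetMeters * 0.8f) {
            state = ViewState::Playing;  // resume the live video
        }
        return state;
    }
};
```

The 0.8 factor is just hysteresis so the experience doesn't flicker between states when the observer hovers right at the boundary.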

MrMonkeybat
Explorer
To do 360 video well, I think we need to record the depth information, either with lidar or by calculating it from the parallax between the multiple cameras.

The distance in 360 video beyond stereoscopic range can be done with a simple 2D cube map, with the midground handled by parallax-mapping the cubemap, but to do the foreground properly it needs to be played back as voxels or point clouds, as I said on the other thread:
viewtopic.php?f=26&t=10220
"mrmonkeybat" wrote:
The hairy ball problem does not seem that bad if you are not worried about straight up and straight down. But I am thinking of lots of near-180-degree lenses spread evenly across a geodesic dome.

Then use the parallax between the lenses (or maybe lidar) to calculate the depth of each pixel and turn it into a point cloud or voxel map, so that the camera view from any position in the DK2's tracking volume can be calculated.

Yes, that would be expensive and require a lot of compute power, but some similar things have been done, so I may not be completely in cuckoo land. Lytro have made a camera with a microlens array creating lots of small, different views of the same scene, from which they compute the 3D light field. There have been some 3D TV prototypes that compute the camera views between two wider camera recordings in real time; a similar technique was also used for the bullet-time sequences in The Matrix. Euclideon have demonstrated a software program called Geoverse which can view large point cloud data in real time. A film would require a new point cloud each frame, but the precision required would drop off with distance, so it could be quite compressed.

You could also do typical MPEG-style compression, writing only the differences between frames.
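The depth-from-parallax part is standard stereo vision: for two rectified cameras, depth = focal length × baseline / disparity. A rough sketch of turning a disparity map into points (made-up names; assumes a rectified pair with known intrinsics):

```cpp
// Sketch: back-project a disparity map from a rectified stereo pair into
// 3D points. fx/fy are focal lengths in pixels, (cx, cy) the principal
// point, and baselineMeters the camera separation. Names are illustrative.
#include <vector>

struct Point3 { float x, y, z; };

std::vector<Point3> DisparityToPoints(const float* disparity, int width, int height,
                                      float fx, float fy, float cx, float cy,
                                      float baselineMeters) {
    std::vector<Point3> points;
    points.reserve(static_cast<size_t>(width) * height);
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            float d = disparity[v * width + u];
            if (d <= 0.0f) continue;           // no stereo match for this pixel
            float z = fx * baselineMeters / d; // depth shrinks as disparity grows
            points.push_back({ (u - cx) * z / fx,  // X: left/right of center
                               (v - cy) * z / fy,  // Y: above/below center
                               z });               // Z: forward from the camera
        }
    }
    return points;
}
```

That 1/disparity relation is also why depth precision drops off with distance, which is what lets the far field compress so well.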

Disdroid
Honored Guest
I think this topic was brought up before.
What it boils down to is basically that to capture real PSV or a VR movie for a DK2+, you'd need six, or maybe with some magic four, 3D 360° cameras. For a DK1 you needed only one. That is a mad amount of data that you have to have available at all times to stitch together a true picture for the Rift screen. You also have the problem that the cameras capture each other: more stitching. And don't forget that every single camera is already stitching its own pictures together.
We need some skillful engineers if we end up doing it like that.

Capturing just the shape and form of things and adding texture to it later seems much easier today, which is exactly what they did in Avatar. I'm not happy with today's shape-capturing technology, though.
If we simply restrict the spectator to holding their head still while enjoying live content, it's all much, much easier.

cubytes
Protege
Personally, my two concerns are:
-avatar
-positional tracking


Avatar:
I would like to get an avatar into the live video experience. I'm wondering if it would be best to do so with CG, or perhaps to build some robotic android prop with a capture rig as its head and have an operator control it from behind the scenes. Although you wouldn't really need to move it around all that much, or worry about turning its head; for the most part it would just be there as a frame of reference and for the actors to interact with.

Positional tracking:
It would be nice to allow for some lean-in (within reason, of course), which is why I was thinking of cross-fading between live video and a CG replica scanned via Project Tango. But I worry that solution may not be possible, and even if it is, it probably wouldn't be very cost effective anyway. If the same effect can be achieved to some extent with a depth/anti-depth parallax illusion, or by some technical solution via point-cloud mapping, that would be great. If not, no big deal...

Disdroid
Honored Guest
Sometimes I wonder if you couldn't map the room with echolocation and have a single camera fill in the texture.