Week 7: Oculus Launch Pad 2018 - Due Aug 20 @ 11am PST

Anonymous
Not applicable
You've made it this far - the team internally is looking through all of the mid check-ins, and they are looking great. A few trends: lots of you thought out your budgets really well, and many of you are exploring multiplayer! We are getting excited 🙂

Anonymous
Not applicable

Last week we shared our camera selection process, as well as what we learned from it. What we want to cover in more detail today is how we dealt with the inevitable compromises we had to accept, because no integrated 360 camera solution is perfect (or even “very solid”) at this point. After careful evaluation, we decided to go with the Z Cam V1 Pro, which we felt delivered on most fronts, but at its “8K” resolution it can only shoot at 29.97 frames per second (fps). As you can guess from the title of this post, that is quite low for the kind of movement arts performance we aspired to create.


Why compromise?

Experienced filmmakers recommend shooting at 50 fps or higher for any “dance-like” movement, and 120 fps is typically desirable for fast movement such as sports. However, high frame rates impose a set of challenges for VR content:

  • Camera limitations are the biggest factor, as some cameras might not even support the frame rate you’re hoping to achieve. And even if you have the best camera, most consumer hardware will struggle to play high-fps footage anyway;

  • A higher number of frames naturally needs more storage, so you’ll have to spend more time backing up cards. Make sure you have a big enough hard drive to keep everything and to work with all that footage afterwards (see the quick arithmetic sketch after this list);

  • Post-production costs also go up, with some operations (like rotoscoping) scaling roughly linearly with the number of frames per second;

  • Higher frame rates also mean shorter exposures, which do not pair well with the low-sensitivity sensors in VR cameras unless you really nail the lighting;

  • With each frame being shorter, it becomes harder to synchronize the cameras with each other to ensure consistent stitching.
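
To put the storage point in perspective, here is a quick back-of-the-envelope sketch in Python. The bitrate and shoot length are illustrative assumptions we picked for this post, not Z Cam V1 Pro specs, but they show how quickly the numbers grow once the frame rate goes up:

```python
# Rough storage estimate for a day of 360 footage at different frame rates.
# The bitrate and minutes below are illustrative assumptions, not camera specs.
BITRATE_MBPS_AT_30FPS = 300   # assumed bitrate for ~8K footage at 30 fps
SHOOT_MINUTES = 90            # assumed footage captured per shooting day

def gigabytes(bitrate_mbps: float, minutes: float) -> float:
    """Convert a constant bitrate and a duration into gigabytes on disk."""
    return bitrate_mbps * 60 * minutes / 8 / 1000

for fps in (30, 60, 120):
    # For comparable quality, bitrate scales roughly linearly with frame rate.
    bitrate = BITRATE_MBPS_AT_30FPS * fps / 30
    print(f"{fps:>3} fps ≈ {gigabytes(bitrate, SHOOT_MINUTES):,.0f} GB per day")
```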

Are higher frame rates that important in VR?

So, with so many challenges, is it even worth shooting VR content at higher frame rates? After reviewing our footage, we know for sure that it is.

  • In VR, 60-120 fps targets are common for both content and devices, so if the cinematography underdelivers, it is certainly easy to spot;

  • Frame switching can be much more noticeable in VR than on a traditional screen. Because your entire field of view is occupied, something that’s moving fast around you can travel quite far in 33 milliseconds (the gap between consecutive frames at 30 fps). Something that would only move 1 or 2 pixels on a traditional screen and still look fluid might jump by a few inches in VR, and you’ll surely notice it (see the sketch after this list).

  • To add insult to injury, traditional filmmaking tricks like framing a shot favorably or cutting to a different camera don’t quite work in VR. So, if your scene has imperfections like this, you don’t have much choice other than to deal with them and get the best out of your footage.
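
To make the 33-millisecond point concrete, here is a tiny Python sketch of the geometry. The speed, distance, and pixels-per-degree values are assumptions chosen for illustration, not measurements from our shoot or from any particular headset:

```python
import math

# Back-of-the-envelope check of how far a fast-moving performer travels
# between frames. All values below are assumptions for illustration.
speed_m_per_s = 2.0      # a quick arm or leg movement
distance_m = 1.5         # performer's distance from the camera
fps = 30.0

frame_gap_s = 1.0 / fps
travel_m = speed_m_per_s * frame_gap_s                 # ~6.7 cm per frame step
angle_deg = math.degrees(math.atan2(travel_m, distance_m))

# Assuming a headset resolving very roughly 12 pixels per degree, that jump
# spans dozens of pixels of your field of view in a single frame step.
pixels_per_degree = 12
print(f"{travel_m * 100:.1f} cm, ~{angle_deg:.1f} deg, "
      f"~{angle_deg * pixels_per_degree:.0f} px between frames at {fps:.0f} fps")
```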

What can be done?

First things first: you need to anticipate such issues and come prepared. We knew these issues were possible, so we designed the choreography accordingly and tried to place faster-moving scenes further away from the camera. We weren’t ready to hurt the artistic value by changing our ideas too much, but given the choice we always preferred the more moderate movement to happen close to the camera. Remember that it’s almost always cheaper to fix something during the shoot, on location, than it is in post-production.




Obviously, we’re writing this post because following those precautions wasn’t enough. We’ve given this a lot of thought and have found a few ways to improve the experience. The first thing you should do is optimize video compression and playback settings for your target platform. This is important even if you don’t have issues with lower-than-desired fps, but note that every frame dropped during playback further increases the discrepancy between adjacent frames, making the frame switch more noticeable. We’re still playing with encoding settings, but just bringing the resolution down along with the frame rate made the playback feel much more fluid.
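
As an illustration, a first pass at this kind of re-encode can be a single ffmpeg call (wrapped in Python here). The target resolution, codec, and quality settings below are assumptions to tune per device, not recommendations from Oculus or Z Cam:

```python
import subprocess

# Scale the 8K master down to a more playback-friendly equirect resolution
# while keeping the captured 30 fps. Exact values are assumptions to tune.
subprocess.run([
    "ffmpeg", "-i", "master_8k_30fps.mp4",
    "-vf", "scale=3840:1920",         # half the source resolution (2:1 equirect)
    "-r", "30",                       # keep the frame rate we shot at
    "-c:v", "libx264", "-crf", "20",  # widely supported H.264 at decent quality
    "-pix_fmt", "yuv420p",
    "-movflags", "+faststart",
    "playback_4k_30fps.mp4",
], check=True)
```

The exact numbers matter less than the principle: the fewer pixels the device has to decode per frame, the fewer frames it drops, and the less the low capture frame rate gets amplified during playback.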


Still, this only carries you so far, and it always caps at the frame rate you originally shot. Wouldn’t it be nice to go beyond what was captured and increase the frame rate even further? It turns out this is possible. Much like Nvidia’s recently hyped super-resolution tool that can hallucinate extra pixels to make images sharper, video production tools have a few ways to make up new frames based on what you already have. Traditional methods like frame sampling and frame blending wouldn’t really help here, but the relatively new optical flow method was certainly worth a shot. After all, this is pretty much the only way to increase the fps of your video without speeding up the playback.


Optical flow frame interpolation is based on the same computer vision approach that’s used in most stitching algorithms: the software detects groups of matching pixels (points of interest) across frames, selects the ones that are moving, and shifts the surrounding areas and objects to place them in an intermediate position between the frames. Ironically, adding fluidity to subjects like running water would really challenge such an algorithm, but we felt it was worth a shot for people moving their limbs around. And it worked pretty well, certainly well enough for us to consider processing some of the scenes this way before release.
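
For the curious, here is a toy Python/OpenCV sketch of the idea: estimate dense optical flow between two consecutive frames, warp each frame halfway toward the other, and blend the results into a new in-between frame. This is only a minimal illustration of the principle under our own assumptions; the retiming tools built into editing software are far more sophisticated, and the function name below is ours, not theirs:

```python
import cv2
import numpy as np

def interpolate_midframe(frame_a, frame_b):
    """Synthesize a rough in-between frame from two consecutive video frames."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # Dense optical flow from A to B: flow[y, x] = (dx, dy) for the pixel at (x, y).
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))

    # Pull frame A forward by half the flow and frame B backward by half,
    # then blend. Occlusions and very fast motion will still leave artifacts.
    half_a = cv2.remap(frame_a, grid_x - 0.5 * flow[..., 0],
                       grid_y - 0.5 * flow[..., 1], cv2.INTER_LINEAR)
    half_b = cv2.remap(frame_b, grid_x + 0.5 * flow[..., 0],
                       grid_y + 0.5 * flow[..., 1], cv2.INTER_LINEAR)
    return cv2.addWeighted(half_a, 0.5, half_b, 0.5, 0)
```

Inserting one synthesized frame between every pair of real frames is what effectively takes 30 fps footage to 60 fps without changing the playback speed.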


One last thing that we found very effective is adding some computer-generated effects to the experience. When you see part of your field of view rendered at a higher fps, you are less likely to feel that the world around you is stuttering, even if the video portion of it is not butter-smooth. We wouldn’t have done this just for that purpose, but since we had already planned to add neural-contingent environmental effects to the experience, we simply made sure that those run at high frame rates, and confirmed that video stuttering is less noticeable even when the other solutions are not applied.


Did it work?

After trying a combination of the tools we described, we were able to bring perceivable stuttering way down, and we’re confident you won’t see it in the final experience, unless you read this post. Our team can’t wait to share the resulting experience with you, and we hope that nothing will compromise your immersion in the world of “Fire Together”.

This week my team and I have modeled out the Forest for our world!
Fresh off the heels of Play-NYC, we want to keep the energy and momentum we've received from our demo.
We have merged most 3D models, so this entire world should only be about 12 draw calls or so. It is incredible to go from simple sketches to a 3D world that we can now walk through on the Gear VR and Oculus Go. We still need to work on some of the lighting issues in the environment.

Started at the:

Images: 21534630_10156568151849046_542863037_o.jpg, concept-forest-02.jpg, sketch.png, Forest-sketch-2.jpg

Now we here:

https://youtu.be/OGZxnFVf1BA

Our next plan is to finish the play mechanics of the world with white boxing prototypes while the rest of the forest is being modeled out.

See you Launch Padders Soon!

PlumCantaloupe
Protege

Image: A sketch I have been making of one of our “astrophysics” worlds. This location is a casual university observatory. I also wrote “Zoom H2N” on this picture as a note about how I found out from our most recent Launch Pad lecture with Eric Cheng that this Zoom is the cheapest way to capture ambisonics audio for VR spatial audio recordings.

Introduction

This week has been about some small styling changes and exploring other worlds. Also lots of thought on how VR can help us better understand all our many different realities and connections to everything else - we are, in the end, all uniquely imperfect probes into what reality is.

 

Past

The team has been hard at work building up the backend and front end with great tools, libraries, and best practices expected of a good web application. We have some great WebVR ideas that are still to be implemented.

 

Present

One of our “stories” that we want to bring into this socialVR platform has fallen a bit behind schedule; so, to allow that story time to breathe, I have been exploring another story I had been thinking on earlier in the project, so that we will have at least one story to showcase the power of socialVR in learning.

I have always been a fan of pondering how the universe works. During the final years of high school I was determined to go to university to become a physicist like those I had been reading about in books on quantum mechanics - Richard Feynman, Leon Lederman, Brian Greene, Wolfgang Pauli, Carl Sagan, etc. My 18th birthday presents from my parents were even advanced calculus and modern physics textbooks! I later found out, after a year of distracted undergraduate math/physics, that a more fitting direction was to explore how technology can better aid my own creative work. My love for better understanding the universe has not abated, though - I now think of myself more as a huge fan and spectator of the “sport”.

This week I have been sketching some ideas for a story about how the astrophysics of neutron stars could be introduced to interested learners (like myself back in high school). I am hoping to also bring in how many different voices in science contribute to their exploration. Will hopefully have something more concrete on this soon!

In the meantime here are some sketches of the three locations I wish to explore: a casual and small university observatory, a graduate lab meeting, and a neutron star.

And now for something completely different ... Ontological Design

In trying to come up with a name that better encompasses the impetus of this VR project, I had been thinking on “Ontological Design” after a colleague reminded me that a word describing the interconnectedness of everything is “ontology” - the study of the nature of being. Ontological design is a concept I ran across a couple of years ago that has fascinated me to no end. Its basic premise is that as we (humans) design tools, environments, and new ways of thinking, these tools, environments, and concepts in turn design us back - for example, think of how carrying smartphones in our pockets has changed how we communicate and digest information. How is the inevitable ubiquity of VR/AR technologies going to change us further? Now think on who is designing these technologies. How do we make sure we design a future way of thinking that encompasses more diverse types of people and perspectives? This is the kind of stuff that keeps me up at night!

Read about Ontological Design here in much greater detail by Anne-Marie Willis: http://www.academia.edu/888457/Ontological_designing

And here is a great introductory video by Jason Silva during his Shots of Awe series:

https://www.youtube.com/watch?v=aigR2UU4R20

Future

I am going to focus on creating greater agency within these virtual worlds through embodied interactions, with a plan to further explore more non-vocal communication methods.

Kaelfel
Explorer

Petal Colouring, Proposal, Part-time



August 3-10



Feel free to skip to the conclusion.



I’ve added a petal colouring mechanic where players drain colour from berries and apply the colour value to a colourless flower. I am still tweaking which inputs feel more natural for draining and filling.



I also asked some friends in Toronto how they’ve written their proposals and budgets. I am still learning this part of the process, since it’s my first time considering the budget of a project as well as planning all the moving parts of the project. But it also feels nice to be learning all these aspects and having control over the overall project. I can quickly communicate to myself which part is working and which part isn’t. Erin, Neilda, and Troy’s discussion of their proposals and budgets also gave me a lot of things to consider when writing the proposal pitch and budget.



This week is the last week I worked part-time on this project; for the following three weeks I can work on it more. I’ve realized a lot of things about working part-time. I’ve been working at a children’s summer camp; it was exhausting, though it may have also given me some inspiration for the concept, and I’m glad it’s done. The hardest part of working part-time is that when I’m in the zone working on something, I have to actively stop myself in order to reserve energy for my other work. I did expect energy and time would be a problem when working part-time, but I did not realize that I would have to tell myself to stop at a certain time even when I’m feeling motivated to do more.



 



CONCLUSION: Implemented a petal colouring mechanic in the game. Read through other people’s proposal and budget planning to get a better estimate and idea of my project. The hardest part of working part-time is that I have to stop myself when I’m in the flow of design/programming/art work so that I can save energy for my other work.

renwang
Explorer

Slow progress this week as I was on a deadline for my day job. Planning to move back to the city in two weeks and cut my commute down from hours to minutes.

francis_chen_58
Explorer
I finally got a chance to place my Quill assets for Havana 2046, and tested them out on Oculus Go!

https://youtu.be/4tEqHDdStEk

Due to the poly-heavy nature of Quill, there's heavy frame rate dropping. 

For the prototype, I'm thinking of shifting the experience into a 360 animated video (i.e. as a teaser trailer), but continuing to make the experience interactive in the long term. I've seen shader programming / Substance Designer prototypes of watercolor-like textures - but due to time, I'd rather commit to those resources after hearing back about financing.

Since we hear back about financing from Oculus Launch Pad in November (still a long way from now!), I'm hesitant to commit development costs until people are drawn to the narrative. The good news is that I've used Unity's Cinemachine in the past for CG content, so the transition would be seamless for me.

------------------------------------------------------

Image credit: Marlon Fuentes

Regarding our Mexico City VR experience with Marlon Fuentes, we've gotten some very awesome 3d models done, including an alebrije, a 3D subway station of the Mexico City, and some initial low poly characters we are planning to use. We're also integrating some Mexican contemporary fashion design (i.e. Carla Fernandez's work), but with a very low-poly, geometric vibe!

This week, in addition to finalizing the 3D models for our prototype, we're setting up the VR experience to include placeholders for dialogue, grab/pick up/action interactions, and music interactions - specifically for the Oculus Go. Since there's limited documentation on 3DOF interactions (our favorite example being Virtual Virtual Reality), it's the wild west for experimentation!

Jarrod84
Protege
Jarrod J Anderson
OLP18: Week 7
Project Title: Ghosted

We shingled my house this week; it took a lot longer than I thought, but it's done.  I got two good days of work in this week.  I focused on modeling the ghost traps for the AI, and started programming the living realm level.  I was able to set up a few basic scare traps that don't have much interaction with the AI.  Pretty much the AI approaches the trap when it's set, then points are added to its scared meter.

  I'm planning to make the AI react differently to different amounts of scare points that are added.  I have to do some work for Fishing Lake this week, but I plan on adding some more traps, as well as the mechanics for scaring the AI by throwing glasses, pictures, and other objects.  We also found a few more people to playtest our game.  We're also looking into local computer arcades that might be able to facilitate a playtest.  The models we're using for the AI are just characters from our Parking Lot Jousting game, which is currently on hold. The house we are using has 3 bedrooms, and we are only planning to have 2 AI people to scare out of the house, so we're turning the 3rd bedroom into a VR room.  We'll have different toys that look like collectables in this room that the player will be able to interact with.
The house is coming together though.  I just need to mess around with the lighting, and I think we'll add in some sounds to add to the scariness/weirdness.  We'll probably also try to do a couple of finished character models with UV maps.  We're still focusing on mechanics and modelling, but I think we can hit the deadline with what I wanted to show.  I also started modelling the character that will be the player's guide/narrator.

NeildaLikeZelda
Protege

Neilda Pacquing
OLP18: Week 7
Project Title: EmpowHER VR
Type: VR Experience
Genre: Social Impact, Training


When I ran the game last week, I noticed that it was running at 43 FPS. AKA... no bueno! We’ll be working on finding ways to optimize and increase this number to at least 90. I already spoke to my 3D artist about helping, since some of the environments he made that are in the app are quite heavy. We also haven’t baked any lights, so I know that would help out. I’m sure there are other things we can do as well, so we’ll be exploring them. The biggest win, though, was turning in the mid-point check-in. Other than that, I’ve been working on another VR app that is also related to social impact, but I had a hiccup when I lost my developer and had to interview a few candidates to replace them. It set me back a few days, but moving on!


Cheers,

Neilda
Oculus ID: NeildaLikeZelda
Instagram/Twitter: @NeildaLikeZelda
Email: neilda.pacquing@gmail.com
Website: neilda.com

Jarrod84
Protege
My post seems to have disappeared.  I posted it, added in a picture, then saved it, and the post went totally white as it saved.  I think it saved a blank version of my post, so I'll redo a quick one just in case.  I had to roof my house most of this week, but I got in a couple of good days of work.  I modeled some ghost traps, and coded the AI to approach and react to them when they are set.  We recycled models from a game that's on hold right now, so these aren't the final characters.  These traps aren't very interactive with the AI yet, but I plan to build some with spiders that the AI will follow with its head, and others where the AI is chased by a monster.
I also started modelling the character that will guide the player through the game.  I'm planning on drawing up a skin for him this week.
The house is coming along nicely though.  We should have a lot more modeled for it this week if everything goes well.  I need to play around with the lighting as well to give it a more uneasy feel.  I'm also trying to think of what sounds would be really important to the experience, to add into the prototype.  We're also trying to come up with creative ways to scare the AI that aren't just heads popping out at you.  We have a few ideas that we hope to add in before we playtest.

thegladscientis
Explorer
Amazing work this week. 

Once Kris was onboarded, he jumped right into the repo and began meeting our next few dev goals (mostly working on the echolocation framework).

We sketched out the levels and how the narrative progresses throughout, and worked on sculpting them in Tilt Brush as prototypes (GIFs to follow next update). The character development continues and things are getting deep in the worldbuilding (very exciting)!