08-21-2018 06:43 PM
08-27-2018 09:54 AM
Last week I focused on content research and performance optimization. I continued researching how various kinds of sources articulate and illustrate sound concepts. I dug into the Unity Profiler to identify strategies for optimizing the experience, and I searched for and integrated additional ambient sound resources. This week I plan to continue that work and integrate more explanatory material.
08-27-2018 09:57 AM
08-27-2018 10:04 AM
Blog post #8
This week our concept artist drew a few layouts for our Chinese kitchen. It isn't finished yet in terms of color tones and final look. We realized it relates to UX quite a lot, so we reached out to Ryan from OLP17 to discuss it further.
This week our developer also worked on our first main interaction and found a few limitations we need to overcome. We may implement a few steps instead of the complete experience, since some parts are tricky. We will continue working on this part this week.
08-27-2018 10:08 AM
#MURDER- Week 8
By: Nir Netzer
This week was intense, but we got a lot done. We are filming this Thursday, so there was plenty to prepare: we had to find a location, complete the cast, gather all the props, and much more.
Producing this prototype is definitely a financial investment for me, but I am all in and grateful for the opportunity. It's not an easy expense, but it's a smart investment. I've been doing VR for about three years now; this is where all my eggs are, and that's my path of creation and career. I will spend about $2k on production alone (and will do the post myself), and that's with a lot of help from friends and my wonderful team. Azra, Cherryl, and I worked our asses off this week, and we got some awesome stuff done.
For the location, it was kind of hard to find a fancy house for less than $500 a day. I already have so many expenses for actors, props, insurance, etc., so I couldn't swallow this one too. Then, in a moment of despair, I got a great idea. Two years ago I produced a 360 marketing video for a luxury real estate company in Beverly Hills. I approached the same company and offered to produce another marketing video for them for free if they would let us film there for one day, and they agreed right away.
So…
We are going to film the prototype in this lovely $5M property in Beverly Hills:
https://www.thestanleybeverlyhills.com
Here are a few photos as well. I'm very excited about that.
I also wanted to use the Aggy for this shoot. It's a remote-controlled moving tripod that a friend from the making360 community developed. It's amazing: he basically connected a remote control to an electric wheelchair base. I rent it for all of my studio work. BUT, he took it to Burning Man, which was a bit of a hitch (P.S. it's like half the city stopped working; it's crazy).
I really want to add motion to the piece. VR comes to life when the camera moves; it's a part of my cinematic style, and I really want it to show in the prototype. So I rethought it, and we decided to get a wheelchair and tie a tripod to it. That's what I have to work with now, and to be honest, I think it will actually work better.
The problem with the Aggy (and all robotic dollies) is that (a) you need to hide but still see where you're driving the camera, and (b) it loses connection over distance. A wheelchair can therefore work better: the cameraman is already hiding below the camera on the chair, and he can navigate the camera more precisely. With today's tracking software, I can mask the bottom out of a moving shot easily and seamlessly; MochaVR is really good for that. Shout out to Nathalie, who helped extend my license. So if I position the camera above the wheelchair high enough to mask out the chair, but low enough that it won't feel too high, I've broken new ground.
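MochaVR does this with tracked planar masks that follow the shot. Just to illustrate the simpler static case, here is a hypothetical Python/numpy sketch of covering the bottom (nadir) band of an equirectangular frame, which is where the wheelchair and the hidden operator end up in the projection:

```python
import numpy as np

def mask_nadir(frame: np.ndarray, cover_fraction: float = 0.12,
               fill=(0, 0, 0)) -> np.ndarray:
    """Cover the bottom band of an equirectangular frame.

    In an equirectangular projection, the bottom rows map to the area
    directly below the camera -- exactly where the rig sits. Replacing
    those rows with a solid color (or a logo patch) hides it.
    `cover_fraction` is the vertical fraction of the frame to cover.
    (Hypothetical helper -- a tracked mask in MochaVR is the real tool.)
    """
    out = frame.copy()
    h = frame.shape[0]
    band = int(h * cover_fraction)
    out[h - band:, :] = fill
    return out

# Tiny demo on a fake 100x200 white RGB frame
frame = np.full((100, 200, 3), 255, dtype=np.uint8)
patched = mask_nadir(frame, cover_fraction=0.1)
print(patched[95, 0])  # bottom rows are now black: [0 0 0]
```

In practice a logo patch or content-aware fill replaces the solid color, but the geometry (bottom rows of the equirectangular image = straight down) is the same.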
Other than that, we broke down the expenses for the shoot, cast almost everyone, booked a behind-the-scenes photographer, found a place to get police uniforms at an affordable price, and broke down the shot list. Here is a link to our shot list:
https://docs.google.com/spreadsheets/d/1-UyfP_3c5Nz0ski3xtv0G5PzgPxZsa05soTiJXtxmbg/edit?usp=sharing
Cherryl is putting together the call sheet, finding the wheelchair, and helping to finalize the insurance. Azra and I met three days in a row for almost the whole day to get things done. Everyone is very excited about the shoot, and so am I.
08-27-2018 10:17 AM
This week I explored different ways to connect the individual monologues and conversations together. The individual monologues are, in essence, a network of 360 videos on demand. The dialogues are the same concept, except a choice interface is required to navigate and select. The choice options include illuminating and darkening people in the ring, or having an interface pop up.
When you select a person, they will illuminate with the following two buttons:
“Hear my Stonewall Moment”
“Start Me in Conversation”
For the people they can have conversations with, the button will say
“Start Me in Conversation Now”
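The selection logic above can be sketched as a small data model. This is a hypothetical Python sketch (names, clip filenames, and structure are all my own illustration, not the actual build): each person is a node in the video network, and the button set depends on whether they have conversation partners.

```python
from dataclasses import dataclass, field

@dataclass
class Person:
    """One node in the network of 360 videos on demand."""
    name: str
    monologue_clip: str                      # 360 video of their "Stonewall Moment"
    conversation_partners: set = field(default_factory=set)

    def buttons(self):
        """Buttons shown when this person is selected and illuminated."""
        opts = ["Hear my Stonewall Moment"]
        if self.conversation_partners:
            # People who support dialogue get the "Now" variant
            opts.append("Start Me in Conversation Now")
        else:
            opts.append("Start Me in Conversation")
        return opts

# Hypothetical people in the ring
alex = Person("Alex", "alex_monologue.mp4", conversation_partners={"Sam"})
sam = Person("Sam", "sam_monologue.mp4")
print(alex.buttons())
print(sam.buttons())
```

The same structure generalizes to the full ring: illuminating a node shows its buttons, and following a conversation edge plays the dialogue clip between the two people.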
I also created the audition form, which will capture key demographics, a few written answers, and an audition video. The video will ask them to tell their "Stonewall Moment" in under one minute. In the written portion, they are asked to contribute questions for other people in the LGBTQ community, as well as information about who they are and their life experience.
08-27-2018 10:24 AM
08-27-2018 10:25 AM
08-27-2018 10:27 AM
08-27-2018 10:28 AM
08-27-2018 10:32 AM
Trying To Laugh At This
Animation Layers and Blend Trees
Trying To Laugh At This has opponents ("The Fears"). The Fears will fight you; but if you submit to The Fears, they will tell you a story about what they're afraid of.
Concept Art of The Fear
I can use shaders to achieve a mysterious, fluid, layered surface for The Fear, but I need an animated mesh to apply it to.
Reference Image for Mesh - Shader FX
The Fears and the characters in their stories are not entirely on rails -- they are part of a simulation that responds to player input. I need AI to drive their behavior, and I also need to animate that behavior, possibly using custom mocap data.
The piece of this puzzle I took on this week was animating a third-person player character myself. Several aspects were new to me. Here, I used a blend tree to fade between idle and walk rather than two separate states in the animator's base layer. With more animation files, a blend tree could be super useful.
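A 1D blend tree is essentially a weighted crossfade between clips, driven by a single parameter such as speed. Here is a minimal Python sketch of that weighting logic (the thresholds are hypothetical, not the values in my animator):

```python
def blend_weights(speed: float, idle_threshold: float = 0.0,
                  walk_threshold: float = 2.0):
    """Return (idle_weight, walk_weight) for a 1D blend tree.

    Below idle_threshold the character is fully idle; at walk_threshold
    and above, fully walking; in between, the two clips are crossfaded
    linearly -- which is what lets a blend tree fade between idle and
    walk instead of snapping between two animator states.
    """
    if speed <= idle_threshold:
        return 1.0, 0.0
    if speed >= walk_threshold:
        return 0.0, 1.0
    t = (speed - idle_threshold) / (walk_threshold - idle_threshold)
    return 1.0 - t, t

print(blend_weights(1.0))  # (0.5, 0.5) -- halfway between idle and walk
```

With more clips (walk, jog, run), the same idea extends to more thresholds along the parameter axis, with each pair of neighboring clips crossfaded the same way.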
You can see how the jump feels more lifelike and interesting than the locomotion. This is because I'm using physics for the jump but not for locomotion.
Here I use physics for player movement.
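The reason the physics jump feels more alive is that gravity integrates the arc instead of a canned animation curve playing out. A hypothetical Python sketch of that physics step (impulse values and timestep are my own illustration):

```python
GRAVITY = -9.81  # m/s^2

def simulate_jump(jump_velocity: float = 4.0, dt: float = 0.02):
    """Integrate a vertical jump with simple semi-implicit Euler steps.

    Instead of animating height directly, we give the character an
    upward velocity and let gravity pull it back down -- the same idea
    as applying an impulse to a rigidbody and letting the physics
    engine produce the arc. Returns the list of heights until landing.
    """
    y, vy = 0.0, jump_velocity
    heights = []
    while True:
        vy += GRAVITY * dt   # gravity changes velocity...
        y += vy * dt         # ...and velocity changes height
        if y <= 0.0:
            break            # landed
        heights.append(y)
    return heights

arc = simulate_jump()
print(f"apex: {max(arc):.2f} m, airtime: {len(arc) * 0.02:.2f} s")
```

The arc's apex and airtime fall out of the initial velocity and gravity rather than being authored, which is why small input differences produce naturally varied jumps.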
My next change was about layers. See how the animation halts when the jump is introduced? I was curious whether it would feel better to layer the jump with locomotion, so here I used additive/override animation layers and layer masks to apply jump and punch on top of locomotion. The wonkiness can be tuned out. I think this feels more lifelike (the disabled double jump is a bug, though… :/).
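The layer-mask idea is that the base layer drives the whole skeleton with locomotion, while an override layer affects only the masked bones (e.g. the arms for a punch). Here is a small Python sketch of per-bone override blending (bone names and the single-float "pose" values are hypothetical simplifications):

```python
def apply_override_layer(base_pose: dict, layer_pose: dict,
                         mask: set, weight: float = 1.0) -> dict:
    """Blend an override layer onto a base pose.

    base_pose / layer_pose map bone name -> rotation (simplified to one
    float here). Only bones in `mask` are affected, which is how a mask
    lets a punch play on the arms while locomotion keeps driving the
    legs -- instead of the whole-body animation halting.
    """
    out = dict(base_pose)
    for bone in mask:
        if bone in layer_pose:
            b, l = base_pose.get(bone, 0.0), layer_pose[bone]
            out[bone] = b + (l - b) * weight  # lerp toward the layer pose
    return out

locomotion = {"hips": 10.0, "left_leg": 25.0, "right_arm": 5.0}
punch = {"right_arm": 80.0}
pose = apply_override_layer(locomotion, punch, mask={"right_arm"})
print(pose)  # legs keep walking; right arm fully overridden to 80.0
```

Tuning the layer weight up and down over the punch's duration is also what smooths the "wonkiness": the arm eases into and out of the override instead of snapping.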