
Week 4: Oculus Launch Pad (Due July 16 Midnight)

Anonymous
Not applicable
Let's see how we finished the month! Share your updates below.
65 REPLIES

chinniechinchil
Protege
Tania Pavlisak - EnlivenVR Week 4:

This week I started the production phase of this project by creating a series of different levels to mimic the flow of the experience, from the main menu up to lobby selection. I spent some time watching a Unity livestream tutorial on how to create, save, and load persistent data, since I will need to implement something similar to record user preferences, like their favorite animal, things that make them uncomfortable, etc.
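The shape of what I have in mind is something like the sketch below: serialize a small preferences object to JSON in Unity's persistent data folder. The class and field names here are placeholders, not my final preference set.

```csharp
using System.IO;
using UnityEngine;

// Sketch only: a small serializable bag of user preferences.
[System.Serializable]
public class UserPreferences
{
    public string favoriteAnimal;
    public string[] discomfortTriggers;   // e.g. heights, spiders
}

public static class PreferenceStore
{
    static string FilePath()
    {
        return Path.Combine(Application.persistentDataPath, "preferences.json");
    }

    public static void Save(UserPreferences prefs)
    {
        File.WriteAllText(FilePath(), JsonUtility.ToJson(prefs));
    }

    public static UserPreferences Load()
    {
        if (!File.Exists(FilePath()))
            return new UserPreferences();   // sensible defaults on first run
        return JsonUtility.FromJson<UserPreferences>(File.ReadAllText(FilePath()));
    }
}
```

`JsonUtility` keeps this dependency-free, and `Application.persistentDataPath` survives app updates on Gear VR, which is why I'm leaning this way rather than scattering values across PlayerPrefs.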

During this time, there were some level design changes as a result of rapid prototyping and user testing. The most constrained setting, the lying-down position, had a very small viewing range. During several tests on the previous room level, I noticed that it was hard to access the menu placed on a wall to the side. I really had to strain my neck to select the menu items, and it was really uncomfortable. This needed to change. I also realized the lobby level can double as a relaxation space. And I started to design around adding DLC content to the project.

This week was super busy in terms of professional meetups in the evenings, so I had to cut some development time.

On Tuesday, Unity 2017 was released. After backing up the project, I decided to upgrade. To my surprise, I had almost no issues. Usually when I upgrade to a newer Unity version, a lot of scripts and sometimes prefabs break. I was expecting tons of errors, but so far there have been only a couple of issues.

The first one was in the OVROverlay.cs script from Oculus Utilities 1.16.0 beta. The method Cubemap.CreateExternalTexture takes four arguments, but somehow the script passed six. Once I fixed that particular line, the error disappeared.
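For reference, the corrected call looks something like this. This is a sketch, not the actual OVROverlay code: `size` and `nativeTexturePtr` stand in for whatever values the Oculus plugin actually passes.

```csharp
// Unity 2017's Cubemap.CreateExternalTexture takes exactly four arguments;
// the fix was dropping the two extras the old script passed.
Cubemap cubemap = Cubemap.CreateExternalTexture(
    size,                   // edge length of each cube face, in pixels
    TextureFormat.RGBA32,   // texture format (illustrative)
    false,                  // no mipmaps
    nativeTexturePtr);      // IntPtr to the existing native texture
```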

The second issue came from a custom outline shader I had used before the upgrade. Somehow, after upgrading, it added a dark blue tint to all the outline materials. Since I don't know much about writing custom shaders from scratch, I had to look for another. I found three different scripts online, tested each of them, and found one that was a lot easier to implement than the old one.

In the evening, I went to an Indie Game Developer social meetup, which is held monthly. I met several other VR content creators there.

On Wednesday, I created the different levels, played around with different UI elements, and changed the design several times to ensure user comfort. Things were good. In the evening, I went to a local HoloLens meetup. I wasn't planning to go, but then I heard they would demonstrate Microsoft's new Acer VR HMD there. I'm glad I went, since the talks were really interesting. The first speaker, Sean Ong, shared his experience creating a virtual apartment tour for a company in Dubai. The second speaker, Thomas Wester, shared his experience capturing dance motion into VR and AR experiences in his team's project, "Heroes - a Duet in Mixed Reality", which was created as a 2D film, a 360 video (available on Gear VR), and a HoloLens experience. I really liked this talk, since it was the first time I had seen the HoloLens used for something other than business purposes. You can view these talks here.

My impression of the Acer HMD... It was very light, even lighter than a Gear VR plus phone. However, the demo used a PC with an older graphics card, so graphics-wise it was similar to PSVR. In terms of room scale, it felt like the Oculus Rift: a seated experience. We could move a little, but there was not much room to walk around, unlike the HTC Vive. They didn't have the VR controllers yet, so we had to use an Xbox controller instead. I personally dislike VR experiences that use a gamepad. For the moment I was unimpressed with the HMD. I will have to try this device again in the future with a better PC and the hand controllers.

On Thursday, I cleaned up some code and assets. I then noticed that the new outline shader was acting strange. It would work fine for a bit, then it wouldn't work at all. I wasn't able to dig into it further, since I had to go to the Seattle Unity user group meetup to learn about a real-time geospatial plugin called Mapbox. Having to drop what I was doing while it was still unresolved actually gave me a lot of anxiety. Those who are friends with me on Facebook probably saw me complaining about it. Thankfully the meetup talk was really interesting, and I also got to meet some old friends I hadn't seen in a while. My anxiety was much reduced afterward.

The next day, instead of jumping straight into the issue, I started the day with some me-time: a long hot shower, a delicious breakfast (usually I forget to eat breakfast), and organizing the house a bit so it didn't look like a typhoon had just passed through. When working solo, it is really easy to get burned out, and I suspect the anxiety might be the first sign. So after I felt more relaxed, I opened up the project and tried to figure out what was causing the shader issue. After several tests, I noticed that during runtime there was an extra camera under OVRCameraRig. In my project, I had modified the CenterEyeAnchor game object by adding children for UI and gaze interaction and their supporting scripts.

During runtime, CenterEyeAnchor is normally placed under LeftEyeAnchor automatically. In my testing, my custom CenterEyeAnchor was placed under LeftEyeAnchor as usual, but then an extra CenterEyeAnchor sub-object was created with a camera attached to it, and that camera prevented the new shader from displaying correctly.
 

After I modified the OVRCameraRig.cs script and disabled this extra camera, everything worked again, hooray! Let's hope this doesn't create new wonky behavior in the future.
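Conceptually, the workaround amounts to something like the sketch below; this is an illustration of the idea, not the actual Oculus Utilities code or my exact change.

```csharp
using UnityEngine;

// Sketch: find any stray camera created below the real CenterEyeAnchor
// at runtime and disable it so it can't override the main rendering.
public class DisableStrayEyeCamera : MonoBehaviour
{
    public Transform centerEyeAnchor;   // the anchor I customized

    void LateUpdate()
    {
        foreach (Camera cam in centerEyeAnchor.GetComponentsInChildren<Camera>())
        {
            // Any camera on a child object (not the anchor itself) is the duplicate.
            if (cam.transform != centerEyeAnchor)
                cam.enabled = false;
        }
    }
}
```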

In the evening, I went to an event hosted by TPCast, makers of a device that transforms the HTC Vive into a wireless HMD. I was skeptical about it beforehand, thinking it would have latency issues and that the battery pack would be uncomfortable. However, I had a good experience with it. I didn't notice any latency, and I forgot all about the battery pack. It felt a lot more comfortable than having the long, heavy cable, that's for sure.

[Now, for a bit of rambling.]
This week I met a couple of people who made me think more about the VR industry and where we are heading. Both encounters happened before and during the TPCast event.

We arrived about 45 minutes early for TPCast. The lobby had some nice sofas, so I let my husband try the latest build of EnlivenVR while killing some time. We also met Ryan Smith, another 2017 OLP member, and chatted for a bit. Then an older gentleman approached us. He looked somewhat disturbed. He had seen my husband using the Gear VR a while back and wanted to share his concern. The gentleman was a senior composer working in the movie industry, and some people had told him to look into VR. This event was his first VR event of any kind, and he noticed all the demos were of 'violent' games. Then, when he talked to one of the organizers, he was told that that was what VR was all about, and that really upset him. I was really surprised, since I have seen many interesting projects, games and non-games alike. I told him there was a lot more than violence in VR, and that I was in the middle of making a relaxation VR experience. He seemed happy to hear that. However, he seemed uninterested in looking further into VR based on this first experience, which made me pretty sad. The gentleman left before the event even started.

As we made our way to the event room, one of the organizers was curious about what had happened. We explained, and he seemed shocked as well. It turned out they were showing Space Pirate Trainer and a bow-and-arrow game, which most gamers would consider non-violent. I had been expecting something like Raw Data, Arizona Sunshine, or other zombie survival shooters instead. The organizers then tried to catch up with the gentleman to talk to him, but he was long gone.

At the event, there were a total of six ladies: four attendees and two organizers. I chatted with them. Like most VR events, the attendees were mostly men. One of the ladies, just like me, has been working in tech for a while, so she was fine. But another lady, who was there with her mom, was very new to VR. Just like the gentleman from before, this was also her first VR event. She was a business school student, curious about this new technology but feeling really intimidated at being a minority. We talked for a while, and I shared my experience: although it is very common to be a minority at these kinds of tech events, the ladies in Seattle and all over the world are trying to make the VR/AR industry more inclusive to women and other minorities. She seemed relieved to hear that, and interested in coming to more local meetups.

When it was my turn to try the device, I asked the organizers if I could try a different app, a non-game one. They had Tilt Brush installed, so I went with that. I had a good time being wireless, able to move around and draw from different angles without needing to teleport or worry about stepping on a tangled cable. When my turn was over, the two female staff members came up to me and shared that they had never tried Tilt Brush before, and that they were now really interested in what else VR was capable of aside from gaming.

At the end of the day, I was left pondering. As a gamer, am I desensitized to violent content? I don't feel disturbed shooting zombies or slashing monsters. To me, they're no different from the fruit we slice in Fruit Ninja: just objects to interact with. But to those who are unfamiliar, do we look like violent people for enjoying these kinds of games? As a content creator, what considerations should I keep in mind, so that people like that gentleman don't stay away from VR?
[End of rambling]

Back to the project talk. To do list for next week:
- Create and test saving/loading custom data.
- Use custom data to drive object generation in room and garden level.
- New menu design that won't hurt my neck too much.

See you next week! And don't forget to take a break and treat yourself once in a while. It really helps.

Week 4: Putting it All Together

This week I finally got to put the player into the first level. My idea for this level is to have the player wake up in a room and find a way out. This serves as an introduction to the controls. There will also be a puzzle to get out of the room, which will demonstrate the player's general understanding of the controls. The player needs to master some basic commands in order to continue within the game.


This warehouse section is the part of the game where the player wakes up and starts the experience. The player moves around pretty comfortably and the scene looks great, but most of the shaders in this scene are Unity's Standard shader, and that is not good for mobile VR. Currently our draw calls are around 40 for this scene, but in some areas they are nearly 70. This needs to be fixed before we move on to the next level. However, I have hope we can fix this soon. There are some prototypes I've been working on that have only 9 draw calls; if I can figure out how that is being done, my hope is that Ernest and I can use that knowledge in the next scenes.
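One low-effort approach we may try (a sketch, not something tested in our project; `Mobile/Diffuse` is one of Unity's built-in mobile shaders) is to batch-swap every Standard-shader material to a cheaper one and see how much the draw-call count and GPU cost improve:

```csharp
using UnityEngine;

// Sketch: swap every Standard-shader material in the scene to Unity's
// built-in Mobile/Diffuse. Run once, then eyeball the scene for breakage.
public static class MobileShaderSwap
{
    public static void SwapAll()
    {
        Shader mobile = Shader.Find("Mobile/Diffuse");
        foreach (Renderer r in Object.FindObjectsOfType<Renderer>())
            foreach (Material m in r.sharedMaterials)
                if (m != null && m.shader.name == "Standard")
                    m.shader = mobile;
    }
}
```

Swapping `sharedMaterials` (rather than `materials`) edits the material assets themselves instead of spawning per-renderer copies, which is also what keeps materials batchable.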

However, for now this is excellent progress, and I cannot wait to continue on Museum Multiverse. What I have to do next is get my controller scripts working with the character; this has been harder than I thought, but I will get it working, and it will be great when I do!

 

 

CRAZY THOUGHT

Earlier today, I started to wonder how Museum Multiverse would play if experienced through a first-person camera. While I know that first-person platformers are not the most praised of game genres, I thought about our focus on art, and how players might better appreciate the art when it is viewed from a first-person perspective.


We decided that over the next couple of weeks we’ll create and experiment inside a small mock scene in Unity, focusing more on utilizing the Gear VR controller and manipulating objects by picking them up and turning them around. What if we could pick up a piece of art, pull it in and out, turn it around, and fully appreciate the detail in each piece? Then we can intersperse sections of fast-paced third person platforming action with quieter times of first person appreciation and exploration of art. We don’t have any of the art assets in this room just yet, so we’ll be using simple geometric shapes and common room items to get the feel and controller first.


I’ll continue to work on my third-person platforming section, but I can’t rest until I thoroughly test this first-person idea.

elisabethdeklee
Explorer
Creator: Elisabeth de Kleer
Project: Communities in Peril

Ideas have been coming fast and furiously, but I need to slow down, focus, and weigh what I can feasibly hope to complete by September.

In the process of researching and putting together a list of communities and community spaces to feature in the project, I was able to get access to a couple unique groups in China. One is a gallery space in an urban village in Shenzhen (similar to the one I posted about in my first blog entry) that--to my knowledge--has never been documented. Urban villages are densely packed, constantly evolving urban spaces inhabited by migrants, factory workers, and university students who hope to rise up in the world. The gallery space, which is run by an anthropologist, is used as an installation where villagers can decorate the walls with their hopes and dreams for the future. The tiny space is routinely painted over to make space for a new artist's expressions of longing. It's an ephemeral exhibit in an inherently ephemeral community. 

The second is a Hakka walled village in southern China. Hakka villages (or tulous) are large, multi-family communal living structures designed to be easily defensible. I've been in touch with a woman whose grandparents lived in one of these villages; she has offered me a place to stay there and access to the community. Most of the community members are in their 80s (the young people have migrated to the big cities) and still speak a local dialect. A nearby schoolteacher who speaks both English and the local dialect, however, has offered to serve as a translator. While it's unlikely the circular structures will be destroyed any time soon (they are considered heritage sites), the communities that inhabit them are dwindling, and they will soon be nothing more than museums. Now is the right time to capture the way of life in the Hakka villages, and 360 seems like the best way to do it.

Here are some of the practical concerns on my mind this week:

Will I be able to document these spaces and do them justice with the resources I have available? Luckily, I have access to a FANTASTIC 3D 360 scanner that captures the layout of a space in such a way that you can move through and explore it in VR. However, I want to couple this with 360 video, and I'm concerned that the Gear 360 camera, while I LOVE playing with it on personal projects, isn't professional enough to stand up against other 360 ethnographic films. If I had access to a better rig, I'd be on a plane right now documenting these places. As it is, I think my time might be better spent gathering the best possible resources so I can pull this off as well as possible. The downside to prioritizing quality is that I'm not sure I can pull it off by September.

I've also been playing with the idea of what a 'lobby' might look like... and by that, I mean what the viewer will see when they enter the experience. The ultimate big-picture goal would be to create a format that could be iterated hundreds of times by communities all across the globe. As a viewer, it would be cool to step into a globe and see nodes for the communities that have been documented. By clicking on a node, the viewer is transported to that particular community's experience. It also got me wondering: is there any way to sort YouTube 360 videos by location? Instead of populating the globe by myself, perhaps there's a way to link to pre-existing videos showcasing different spaces around the world. Hmmm...

Going one step further down this rabbit hole of organization and data visualization, it would be neat to see the videos sorted by keyword, so you can see how the same human experience differs by location. For example, how does the experience of drinking tea differ between Morocco and, say, Japan? Or how does a classroom in the United States compare to one in, say, Uganda? By seeing 360 videos and experiences presented thematically, it's easier to pinpoint the cultural differences, but also to see the ways in which we are fundamentally similar.

Next steps: making these ideas a reality...




glittervelocity
Honored Guest
I did more preproduction work on 'house in the corn' this week.

ART/LEVEL: 

I gathered more reference for the style of house I want to build. I also whiteboxed the space and redrew my map. Ultimately, I plan on having the player move through five rooms: the foyer, a game room (think pool), a gallery, a dining room, and finally a courtyard. I'm still shooting for a mid-century modern house. I found some Unity assets I may use for props.

DESIGN: 

I have identified the gameplay loop for the Launch Pad demo in September. When you walk into the house, you see the courtyard door. I'm hoping I can effectively message to you, the player, that this is going to be your objective: to get into the courtyard. The rooms will be full of glass as you travel around, so you will always be able to see into the courtyard. When you investigate the door to the courtyard, you see three key card slots that can be used to open it. As you walk around the house, you find the three key cards needed: one each in the game room, the gallery, and the dining room. As you explore, you also pick up clues and discover more of the narrative. Once you are in the courtyard, there is a reality-shifting effect and the demo ends.

TECHNICAL: 

I have doors opening and shutting. I also determined that for the final game pass I'm going to need to license 3ds Max to build out the space the way I want (I'm very familiar with Max, as we use it at work). Starting the week after next (end of July), I plan on getting a free trial of Max to flesh out the more final art versus just the whitebox.

NARRATIVE: 

I have identified more of the story; I cut it down a bit and simplified it so it only focuses on two characters and how they interact. It still hits the themes I wanted to hit. The owner of the house was an ethanol baroness (Ms. Windsor) who built an empire around corn-based energy; she personifies a lot of traits of rural Americans and is neither really good nor evil. Her business partner is a 'boy tech whiz' from California who dreams of moving humanity under the earth to protect humans from global warming; he also has a grand plan to create clean energy for everyone with a device that uses gravity to generate electricity. More or less, everything goes wrong, and you explore the house to find out what happened.

nicole_babos
Protege
This week we worked on several gameplay scripts that tie the functionality of our proposed gameplay into the scene. These scripts manage the actors in the scene as well as the lighting and other mechanisms. They also handle game persistence and the procedurally generated elements of player progression, such as the number of challenges and other increments in difficulty between levels.

We tested our scripts from the Start Screen of our game through our Test Screen. Many of these scripts were very neat to see in action, because they create visually identifiable changes in the scene that signify new requirements on the player or behavior changes in the actors. For example, our actors fidget as they grow anxious, and if their anxiety exceeds predetermined thresholds, the lights begin to flicker and the curtains roll down, creating a spooky atmosphere until our actors’ anxiety is decreased or the lights all go out. When the lights come back on and the curtains roll up, something is different and the tension is reduced.


Additionally, I started turning some of our assets into prefabs in Unity so that we can reuse them in different scenes. Now we can build out a train car from a prefab and easily populate it with passengers using scripts. I also finished the faces for our passengers; these will be randomized on start. As the actors' anxiety increases, their faces will reflect their moods as well.

Finally, I created textures for the six food items in our game. These include a steak, a fish, a croissant, a slice of pie, a chicken thigh, and a huge ham. These will be used in the dining car. 


Next week I want to focus on the Main Menu and UI.
Nicole
www.fancybirdstudios.com

prettydarke
Protege
Definitely experiencing some hurdles this week. The plan was to work with an amazing artist on concept art, then use the artwork to help pitch the idea to other creative folks and expand the team. I'm still in the dark as to what's going on, but the artist is MIA. These things happen, of course, but as she's a close friend of mine, I'm mostly worried about her. She's not the flaky, incommunicative type.

I'm pressing on without her at the moment, but keeping my fingers crossed nonetheless. So this week we started prototyping with just primitives. I also picked out a color palette, which is looking kind of nice. Getting the scene mocked up with the palette included was super helpful in fleshing out more of the aesthetics. The story started to unfold for me as I saw this room come together. The programmer still doesn't have the hardware, though, so I'm shipping it out to them next week.

The next couple of weeks are going to be pretty nuts. I'm heading out to LA to teach, so I'll be working two jobs for a bit, with not much time for dev. I'm thinking 1. I can try to visit the artist while I'm in LA. 2. I can direct the programmer to work on gestures and make sure things are running smoothly for VR. 3. I can storyboard the experience from start to finish. That should put us in good shape come August.

skatekenn
Explorer
This week I spent some time going through those Blender tutorials I've mentioned before! I'm getting more comfortable with the software and want to spend more time putting some kind of model together (even if it's extremely rough) for my project in the next week. A general goal for the next couple weeks is to have some actual content to show off here, even if it's something small! This hasn't been a super interesting update, but I'm looking forward to having more to talk about soon!

murascotti
Explorer

The original idea for my Launch Pad project was inspired by using the mobile app Super Better. The goal was to create a room-scale VR experience on Rift that allowed the player to explore their depression by talking to a friendly AI character and sharing what's going on in their life.

As a former video editor at a post production house, my days were filled with long hours in front of a screen on weekends in order to meet tight deadlines.  This meant that in crunch mode, I was sacrificing relationships with friends and family to finish assignments on weekends, and I was forgetting to take care of my physical and mental health.  

My belief that a person can apply the lessons learned in VR to reality to cope with mental health drove me to learn more about the science behind depression, automated speech recognition, and natural language processing.

For starters, I learned that reward pathways in the brain light up when people play games, which means games have the potential to build confidence and resilience. Additionally, IBM offers the Watson SDK for Unity (the same Watson that beat Ken Jennings on Jeopardy!). From it I learned the definitions of intents and entities: an intent (denoted with a hashtag) is anything that defines the user's goal (e.g. #order_food, #turnon_speakers, #play_music), and entities are the types of objects that make up the user's intent, denoted with an @ symbol (e.g. @restaurantNames, @devices, @musicgenre).

While I was able to successfully implement Watson services in a basic Unity project and have it convert my speech to text via my laptop's built-in microphone, the problem of creating an AI character that can respond to open-ended questions, such as how you are feeling, turned out to be pretty complicated. I began by writing out a list of potential intents to describe how a player could interact with the character, where each intent was one of six key questions (Who, What, Where, When, Why, How). The entities were specific things the AI character could identify, like people (e.g. "me", "you", "I"), places, things, and feelings. To test my method, I designed a dialogue flow with Watson by assigning it responses to my questions and/or statements.

When Watson didn't understand what I typed (i.e. it couldn't match any of the content in the sentence with an intent), I'd have it respond, "I must have wax in my ears. Could you say that again in another way?" Once I was satisfied with a basic interaction, I planned to create animations tied to each of Watson's responses and a dialogue WAV file to go along with each intent and entity match; the dialogue flow system proved extremely helpful in visually following how Watson understands the conversation.

Ultimately, since I have more experience with Unreal Engine than with Unity (the Watson SDK is currently only available for Unity), I decided to pivot to a story-driven experience I had in mind using Unreal Engine and Gear VR for my Launch Pad project.

msourada
Explorer
AthenaeumVR Week Four

This week = ideation/creation phase. Visiting ground zero for inspiration, and direct impact on how we design.

The Met.
MOMA.
The Whitney.
NYC (just, in general).

What is our platform (in its first phase) going to look like in our first steps to disrupt the museum paradigm?
Our vision is that we do a 360 capture of the Albright's main courtyard.
I grew up here. Every weekend as a child, then as a pre-teen, and then as a college student (teaching those behind me). I know every vein of those marble pillars, almost every angle the sun hits as it illuminates the sculptures du jour in the courtyard and the structure itself. This is the seed of the vision (this is our client) and the starting point of the design from which we 'make it all work'.
Transposing 'this' into VR, and creating the pathways for artists to funnel new, original, and only-accessible-in-VR works into the portal, is the challenge we are grappling with every day.
In truth, this week was also spent in discussions with advisors. Some are involved in day-to-day implementations of VR/AR at existing museums and galleries, so getting additional input on the challenges we haven't foreseen? Priceless. Others are involved in other facets, and are now on board to help us vet the artistic and programming talent we bring aboard to help realize our vision.

It is interesting to see the approaches of each of the ventures in progress. In some sense, I'm worried that our approach is so different from everyone else's (seeing all this awesome 360 movie capture, character design, and story arcs)... we have all that in our respective 'heads', but we're working from the other direction: getting the business, licensing, who-will-do-what, milestones, and design worked out; in essence, working the other end of the pipeline through.
It's a process.
Looking forward to the challenges that next week brings.

Thudpucker
Honored Guest

What a week! The good news is that after several weeks of research, testing, and gray-boxing different concepts, I think I'm narrowing down on something. I really feel like I've been bouncing around with a lot of good ideas, but no one strong focus.

When we last left off, I was chatting with someone I met about medical applications of VR. I did a lot of research into that, including taking a meeting with a consultant who advises medical device companies on how to apply technology to their therapies. I learned a *ton* from this meeting. The end result, though: don't get into medical therapies unless you're ready to commit 100% to that. It's a long road in terms of finding the right partners, fighting over who owns the technology, fighting over who owns the data, and determining who would ultimately fund and purchase such products. So while there is abundant opportunity there, I'm going to leave that vertical slice alone for right now.

While that didn't pan out, some other discussions I've been having really did. I've been talking with an acquaintance about presence in VR and showed him the King Tower demo that I did earlier this year. He was very interested in getting something mocked up for training purposes. I'm happy to announce that training will be the focus of my Launch Pad project!

I'm looking at a very simple customer service example set in a restaurant. If you were the host or hostess at a popular breakfast place and saw that a guest you had seated 5-7 minutes ago still hadn't been helped, what would you do? I'll be leveraging a lot of the research I've done in voice and conversation to create this scenario.

I think there's a lot of wonderful future applicability of this. The training applications alone are too numerous to mention. I'm also thinking of that somewhat recent TV show called "What would you do?" in which hidden cameras captured people's reactions to discrimination or other poor social behavior in restaurant settings. There's really a lot that can be done with this. What I'm focusing on now is just an initial part of that.

I'm excited to move forward with this. This is going to give me the focus I need to bring something really wonderful to VR!