07-16-2017 04:10 PM
Week 4: Putting it All Together
This week I finally got to put the player into the first level. My idea for this level is to have the player wake up in a room and find a way out, which will serve as an introduction to the controls. There will also be a puzzle to get out of the room, which will demonstrate that the player has a general understanding of the controls. The player needs to master some basic commands in order to continue in the game.
This warehouse section is the part of the game where the player will wake up and start the experience. The player moves around pretty comfortably and the scene looks great, but most of the shaders in this scene are Unity's Standard shader, and that is not good for mobile VR. Currently our draw calls are around 40 for this scene, but in some areas they are nearly 70. This needs to be fixed before we move on to the next level. However, I have hope we can fix this soon. There is a prototype I've been working on that only has 9 draw calls; if I can figure out how that is being achieved, my hope is that Ernest and I can use that knowledge in the next scenes.
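To keep myself honest about those draw calls, here is a minimal sketch of the kind of editor helper I have in mind for hunting down every material in the scene that still uses the Standard shader and swapping it for a mobile one. The menu path, the class name, and the "Mobile/Diffuse" target are all placeholder choices of mine, not anything final.

```csharp
// Sketch only: list (and optionally swap) every Standard-shader material in the
// open scene. Drop this in an Editor folder. "Mobile/Diffuse" is just an example
// replacement; the right mobile shader depends on how each asset should look.
using UnityEngine;
using UnityEditor;

public static class MobileShaderAudit
{
    [MenuItem("Tools/Audit Standard Shader Usage")]
    public static void Audit()
    {
        Shader standard = Shader.Find("Standard");
        Shader mobile = Shader.Find("Mobile/Diffuse");
        int flagged = 0;

        foreach (Renderer r in Object.FindObjectsOfType<Renderer>())
        {
            foreach (Material m in r.sharedMaterials)
            {
                if (m != null && m.shader == standard)
                {
                    Debug.Log("Standard shader on: " + r.gameObject.name, r.gameObject);
                    m.shader = mobile;   // comment this line out to audit without changing anything
                    flagged++;
                }
            }
        }
        Debug.Log("Materials that were on Standard: " + flagged);
    }
}
```

Marking the static warehouse geometry as Static so Unity can batch it should also help bring the count down.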
For now, though, this is excellent progress and I cannot wait to continue on Museum Multiverse. What I have to do next is get my controller scripts working with the character. This has been harder than I thought, but I will get it working, and it will be great when I do!
CRAZY THOUGHT
Earlier today, I started to wonder how Museum Multiverse would play if experienced from a first-person camera. While I know that first-person platformers are not the most praised of game genres, I thought about our focus on art and how players might better appreciate the art when viewed from a first-person perspective.
We decided that over the next couple of weeks we'll create and experiment inside a small mock scene in Unity, focusing on utilizing the Gear VR controller and manipulating objects by picking them up and turning them around. What if we could pick up a piece of art, pull it in and out, turn it around, and fully appreciate the detail in each piece? Then we can intersperse sections of fast-paced third-person platforming action with quieter times of first-person appreciation and exploration of art. We don't have any of the art assets in this room just yet, so we'll be using simple geometric shapes and common room items to get the feel and the controller interaction right first.
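To make that concrete for myself, here is a rough sketch of the grab-and-inspect interaction I have in mind, assuming the Oculus Utilities OVRInput API for the Gear VR controller is imported. The raycast grab, the touchpad-to-rotation mapping, and the grabPoint transform are all placeholder choices for the mock scene, not a final design.

```csharp
// Sketch: pick up whatever the Gear VR controller points at, hold it at a grab
// point, and spin it with the touchpad so the player can inspect it up close.
using UnityEngine;

public class SimpleGrabber : MonoBehaviour
{
    public Transform grabPoint;        // empty child of the controller anchor that holds the object
    public float rotateSpeed = 90f;    // degrees per second of rotation from the touchpad
    Transform held;

    void Update()
    {
        // Trigger pressed: try to grab. Trigger released: let go.
        if (OVRInput.GetDown(OVRInput.Button.PrimaryIndexTrigger))
            TryGrab();
        if (OVRInput.GetUp(OVRInput.Button.PrimaryIndexTrigger) && held != null)
            Release();

        // While holding, map the touchpad X axis to rotation so the player
        // can turn the piece around and appreciate the detail.
        if (held != null)
        {
            Vector2 pad = OVRInput.Get(OVRInput.Axis2D.PrimaryTouchpad);
            held.Rotate(Vector3.up, -pad.x * rotateSpeed * Time.deltaTime, Space.World);
        }
    }

    void TryGrab()
    {
        // Assumes this script lives on the controller anchor, so forward = where it points.
        RaycastHit hit;
        if (Physics.Raycast(transform.position, transform.forward, out hit, 2f))
        {
            held = hit.transform;
            held.SetParent(grabPoint, true);   // keep world pose, then follow the controller
        }
    }

    void Release()
    {
        held.SetParent(null, true);
        held = null;
    }
}
```

If the raycast grab feels too fiddly in the headset, a trigger collider on the controller is probably the next thing to try.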
I’ll continue to work on my third person platforming section, but I can’t rest until I thoroughly test this first person idea.
07-16-2017 06:26 PM
The original idea for my Launchpad project was inspired by the mobile app Super Better. The goal was to create a room-scale VR experience on Rift that allowed the player to explore their depression by talking to a friendly AI character and sharing what’s going on in their life.
As a former video editor at a post-production house, my days were filled with long hours in front of a screen in order to meet tight deadlines. In crunch mode, that meant sacrificing relationships with friends and family to finish assignments on weekends and forgetting to take care of my physical and mental health.
My belief that a person can apply the lessons learned in VR to reality to cope with mental health challenges drove me to learn more about the science behind depression, automated speech recognition, and natural language processing.
For starters, I learned that reward pathways in the brain light up when people play games, which means that games have the potential to build confidence and resilience. Additionally, IBM offers the Watson SDK for Unity (the same Watson that beat Ken Jennings on Jeopardy!). I learned the definitions of intents and entities. An intent (denoted with a hashtag) is anything that defines the user’s goal (e.g., order_food, turnon_speakers, play_music), and entities are the types of objects that make up the user’s intent, denoted with an @ symbol (e.g., restaurantNames, devices, musicgenre).
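Just to keep that vocabulary straight in my head, here is how I think of an intent and its entities, modeled as plain C# types purely for illustration; these stand-in classes and sample values are mine, not the Watson SDK's actual types.

```csharp
// Illustration only: stand-in types showing how an intent (#order_food) relates
// to an entity (@restaurantNames). The real workspace defines these in Watson's
// own tooling; this just mirrors the relationship in code.
using System.Collections.Generic;

public class Intent
{
    public string Name;              // "order_food"  (written as #order_food)
    public List<string> Examples;    // utterances that should map to this goal
}

public class Entity
{
    public string Name;              // "restaurantNames"  (written as @restaurantNames)
    public List<string> Values;      // concrete things the intent can refer to
}

public static class SampleWorkspace
{
    public static readonly Intent OrderFood = new Intent
    {
        Name = "order_food",
        Examples = new List<string> { "I'd like to order a pizza", "can I get some sushi" }
    };

    public static readonly Entity RestaurantNames = new Entity
    {
        Name = "restaurantNames",
        Values = new List<string> { "Luigi's", "Sushi Palace" }
    };
}
```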
While I was able to successfully implement Watson services into a basic Unity project and have it convert my speech to text via the built-in microphone input on my laptop, ultimately, the problem of creating an AI character that can respond to open-ended questions, such as “How are you feeling?”, was pretty complicated. I began by writing out a list of potential intents to describe how a player could interact with the character, where each intent was one of six key questions (Who, What, Where, When, Why, How). The entities were specific things that the AI character could identify, like people (e.g., “Me”, “You”, “I”), places, things, and feelings. To test my method, I designed a dialogue flow with Watson by assigning it responses to my questions and/or statements.
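For the speech-to-text side, the microphone capture itself is the easy part in Unity. A minimal sketch of just that step is below; handing the samples off to the speech-to-text service is omitted here, since that wiring depends entirely on the SDK.

```csharp
// Sketch of the microphone-capture step only. Unity's Microphone API records into
// an AudioClip; streaming those samples to a speech-to-text service is left out
// because that part is SDK-specific.
using UnityEngine;

public class MicCapture : MonoBehaviour
{
    public int sampleRate = 16000;   // a common rate for speech recognition
    AudioClip recording;

    void Start()
    {
        // null device name = default microphone; loop so we keep a rolling 10-second buffer
        recording = Microphone.Start(null, true, 10, sampleRate);
    }

    void OnDisable()
    {
        Microphone.End(null);
    }
}
```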
When Watson didn’t understand what I typed to it (i.e. it couldn’t match any of the content in the sentence with an Intent), I’d have Watson respond to me, “I must have wax in my ears. Could you say that again in another way?” When I was satisfied with a basic interaction, I planned to create animations tied to each of Watson’s responses and a dialogue wav file to go along with each intent and entity match; the Dialogue Flow system proved extremely helpful in visually following how Watson understands the conversation.
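Once the dialogue flow settles, the glue on the Unity side can stay simple. Here is a rough sketch of what I mean by tying an intent match to an animation trigger and a dialogue wav, with the "wax in my ears" line as the fallback. The Response table, the field names, and the OnIntentRecognized entry point are all my own placeholders rather than anything from the Watson SDK.

```csharp
// Sketch: given the intent name returned by the conversation service, fire the
// matching animation trigger and dialogue clip; anything unmatched falls back
// to the "wax in my ears" line.
using System.Collections.Generic;
using UnityEngine;

public class AIResponder : MonoBehaviour
{
    [System.Serializable]
    public class Response
    {
        public string intentName;      // e.g. "how_feeling"
        public string animTrigger;     // Animator trigger to fire for this response
        public AudioClip dialogue;     // pre-recorded wav matching the response
    }

    public Animator animator;
    public AudioSource voice;
    public AudioClip fallbackClip;     // "I must have wax in my ears..."
    public List<Response> responses = new List<Response>();

    // Call this with the top intent from the service, or null/empty when nothing matched.
    public void OnIntentRecognized(string intentName)
    {
        Response match = responses.Find(r => r.intentName == intentName);
        if (match != null)
        {
            animator.SetTrigger(match.animTrigger);
            voice.PlayOneShot(match.dialogue);
        }
        else
        {
            voice.PlayOneShot(fallbackClip);   // ask the player to rephrase
        }
    }
}
```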
Ultimately, since I have more experience using Unreal Engine than Unity (the Watson SDK is currently only available for Unity), I decided to pivot to a story-driven experience I had in mind using Unreal Engine and Gear VR for my Launchpad project.
07-16-2017 07:01 PM
What a week! The good news is that after several weeks of research, testing, and gray-boxing different concepts, I think I'm narrowing down on something. I really feel like I've been bouncing around with a lot of good ideas, but no one strong focus.
When we last left off, I was chatting with someone I met about medical applications of VR. I did a lot of research into that, including taking a meeting with a consultant who advises medical device companies on how to apply technology to their therapies. I learned a *ton* from this meeting. The end result, though: don't get into medical therapies unless you're ready to commit 100% to that. It's a long road in terms of determining the right partners, fighting over who owns the technology, fighting over who owns the data, and determining who would ultimately fund and purchase products like that. So while there is abundant opportunity in this, I'm going to leave that vertical alone for right now.
While that didn't pan out, some other discussions I've been having really did. I've been talking with an acquaintance about presence in VR and showed him the King Tower demo that I did earlier this year. He was very interested in getting something mocked up for training purposes. I'm happy to announce that training will be the focus of my Launch Pad project!
I'm looking at a very simple customer service example based in a restaurant. If you were the host or hostess at a popular breakfast place and saw that a guest you had seated 5-7 minutes ago still hasn't been helped, what would you do? I'll be leveraging a lot of the research I've done in voice and conversation to create this scenario.
I think there's a lot of wonderful future applicability of this. The training applications alone are too numerous to mention. I'm also thinking of that somewhat recent TV show called "What would you do?" in which hidden cameras captured people's reactions to discrimination or other poor social behavior in restaurant settings. There's really a lot that can be done with this. What I'm focusing on now is just an initial part of that.
I'm excited to move forward with this. This is going to give me the focus I need to bring something really wonderful to VR!