Oculus Launchpad Dev Blog- 10th Entry- DevLog 757.010

KingHedleyiii
Explorer

In case you're seeing one of my posts for the first time, I have a bit of a structure to my posts where I typically deal with 6 areas related to VR noob development, and development of my VR experience called Underdog. The 6 areas are Scripting, Audio, Made with Unity, GitHub, Storytelling/Narrative, and Immersive Experience. It's my belief that if I stay focused on these areas, not only will I learn a great deal, but I'll have a well-rounded development process for the experience I'm working on building. In today's post I'm leaving out GitHub and Storytelling and including a little extra on the Immersive Experience aspect. I also realized I mislabeled my 5th blog entry (2 posts, same title), so this is actually my 10th entry.

Scripting

https://www.youtube.com/watch?v=xNpCrd9nhmo  -- Raja teaches me how to destroy a Game Object (so my game object post just previous to this one isn’t so useless after all!)


  • Inside the parentheses of the Destroy() function, you can pass 2 arguments: the object to destroy and an optional time delay


  • What Raja did here was really interesting. The function itself isn’t all that complicated, but there’s actually a lot of information packed into the bit of script he has highlighted

  • The code is written like this:   Destroy(gameObject, 3f);

  • First off, notice that this code is written in the Start function. Meaning, this Destroy call will only run when the game Starts, so basically in the opening frame of this particular scene

  • Second, notice that within the (), gameObject is intentionally left in lowercase. He doesn’t explain why, but it matters: lowercase gameObject is the inherited MonoBehaviour property that refers to the GameObject this script is attached to, while GameObject with a capital G is the class name itself

  • Finally, the 3f at the end of the function call is the time delay. The f suffix just marks the number as a float; the value itself is in seconds. Also notice that to complete this line of code there is a semicolon at the end of the line ;

  • Translation of this particular line of code, Destroy(gameObject, 3f): when Start runs, this game object will be destroyed 3 seconds after the opening scene is called. *Make sure your script is attached to a game object in your Hierarchy view before you test this out

  • I’m using C# For Dummies to help round out some of the lessons here, so here’s what they say:

  • “To call a Function, you type its name and then a set of parentheses. Inside the parentheses, you list the items you want to send to the inputs of the Function. The term we use here is pass, as in ‘You pass values to a Function.’”

  • Here is Unity’s list of all the event Functions you are capable of producing from within MonoBehaviour
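
Putting the pieces above together, here’s what the full script could look like as a minimal, complete example. The class name SelfDestruct is my own invention; the Destroy call itself is straight from Raja’s lesson:

```csharp
using UnityEngine;

// Attach this script to any GameObject in the Hierarchy view.
public class SelfDestruct : MonoBehaviour
{
    void Start()
    {
        // Schedule this script's own GameObject for destruction
        // 3 seconds after the scene starts. "gameObject" (lowercase)
        // is the inherited MonoBehaviour property referring to the
        // object this script is attached to; 3f is a float, in seconds.
        Destroy(gameObject, 3f);
    }
}
```

Press Play, and the object (along with this script) disappears from the scene after 3 seconds.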


Audio

Mobile Audio Processing and Memory Bus Bandwidth and Load, re: performance constraints on mobile. Kind of a killer article.

  • Think of mobile SoC (system on a chip) hardware as a set of restaurants that coordinate their work to cook food (compute certain tasks) from raw ingredients (video and audio data)

  • They break it down like this: The CPU makes the spaghetti, the GPU makes cupcakes, RAM is the storehouse for raw ingredients, the audio chip makes the lemonade, and the memory bus connects them all together. [And I really hope they break down how they came up with this metaphor]

  • Re: the memory bus, the ingredients are transported between CPU, GPU, audio chip and RAM on moving walkways; these walkways never turn off and never waver in speed, which holds significance for 60 FPS

  • 3 fundamental problems with the moving walkways that slow down spaghetti and cupcake production: 1) the moving walkway speed is too slow, 2) the walkways could be wider, 3) inefficient use of slots

  • Few developers consider audio’s CPU load, and most don’t realize that the processing and computing bottlenecks are on the memory bus, not at the CPU

  • Inefficient audio processing reduces performance of 3D graphics engines, reduces framerate, and increases device temp

  • Better audio software can and will provide more efficient algorithms for audio processing. The major result of which would be more performance per watt by mobile phones

  • To avoid dropping frames, devs should remember to optimize for audio too (the red-headed stepchild of VR) to get faster image and video processing


Made With Unity https://madewith.unity.com/

Article: Faking a Sentient AI in Event[0].


Framework for why this is relevant for Underdog: One of the dreams that I have for Underdog, in fact one of the elements I think will be necessary to make the game successful at dealing with the bullying issue, is to make the game individualized and accurate enough that it almost mirrors the user’s real-life experience of bullying. Not in terms of the intimidation, I’m not trying to traumatize anyone, but what’s the point of a simulator if you don’t *simulate* the actual experience as much as possible? Because of this I feel that one of the elements that needs to be really on point in this game is its AI. That’s why this particular article really resonated with me, because the developers really wanted to push the limits of what they felt a game’s AI could be used for.


Here are some of my main takeaways from the article:

  • An experimental student project that made extensive use of a chatbot AI. A narrative-based Siri, if you like.

  • How the core mechanic of Event[0] was born, and how certain design choices that we made shaped it to be what it is now: a reverse Turing Test where empathy is the core skill you need to use as a player. -- [this was the phrase that hooked me as I was reading, and I knew I would do a bit of a write-up on the piece]

  • It turned out that most of them [narrative based games] used more or less the same pattern: pre-written dialog, crafted by narrative designers and writers, both for the player and the NPCs the player talks to.

  • They quickly saw that the more choice you got, the more invested you were in your character. “The Mass Effect system is especially interesting to us because it made it evident that the exact words your character used meant little, and what mattered most for immersion was making choices.” [Woohoo!! Immersion tips!!]

  • [The next element here I’m skipping over but would like to mention is that this team decided AGAINST getting really fancy with lots of bells and whistles and instead chose constraints that emphasized gameplay and immersion]

  • The nature of chatbots is such that they can’t possibly reply to everything you say correctly. Their knowledge base is inherently limited, and the word combinations they understand are predefined. The AI in Event[0], just like any other chatbot AI, will occasionally misunderstand your input. When it screws up occasionally, you don’t perceive it as the game itself being broken, but rather as the AI in the game being somewhat glitchy. [This was helpful in terms of knowing some of the limitations of chatbot AIs]

  • To generate the responses, we use input parameters. There are four of them: [the 2 that were relevant to Underdog were] 1) Player’s input. When you type something into a Kaizen-85 terminal, your input is analyzed for meaning. We have a semantic dictionary of tags, each of which contains some words with the same meaning. When you say “glass,” “plate” or “fork,” the AI understands “tableware.” When you say “father” or “nephew,” the AI knows that you’re talking about “family.” When we find some of these tags in a player’s input, we can deduce the meaning of the whole sentence. [I’m thinking this will be similar in Underdog: if you say something like ‘sub-tweet’ or ‘snapstory’, the bot should understand the user is talking about cyberbullying]

  • And 2) Current event. The game is called Event[0] because everything that happens aboard the Nautilus (the spaceship) is an event registered in the AI system, including things triggered by the player’s actions. The AI is aware of the conversation subject at hand through this event system. When you talk to it about the lobby of the Nautilus, it will have more vocabulary related to the lobby. If you change the subject, it will adjust its dictionary accordingly. [As Underdog will be modular, at this phase anyway, the vocabulary the AI uses will be contextual based on where the bullying scenario is taking place: in the gym, in the cafeteria, on the playground, etc. Vocabulary that is specific to each one of those environments should help trigger the AI’s understanding of where the scenario is taking place]
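
To make the semantic-dictionary idea concrete for Underdog, here’s a rough C# sketch of how a word-to-tag lookup could work. Every word, tag, and name below is my own assumption for illustration, not Event[0]’s actual data or code:

```csharp
using System;
using System.Collections.Generic;

// A minimal sketch of a "semantic dictionary of tags": map individual
// words to broader meaning tags, then collect the tags found in the
// player's input to guess what the sentence is about.
public static class SemanticTagger
{
    static readonly Dictionary<string, string> WordToTag =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
    {
        { "sub-tweet",  "cyberbullying" },
        { "snapstory",  "cyberbullying" },
        { "cafeteria",  "location:cafeteria" },
        { "gym",        "location:gym" },
        { "playground", "location:playground" },
    };

    // Returns the set of meaning tags detected in the player's input.
    public static HashSet<string> Tag(string input)
    {
        var tags = new HashSet<string>();
        foreach (var word in input.Split(' ', ',', '.', '!', '?'))
            if (WordToTag.TryGetValue(word, out var tag))
                tags.Add(tag);
        return tags;
    }
}
```

So an input like “he posted a snapstory about me in the gym” would yield the tags “cyberbullying” and “location:gym”, which is exactly the contextual awareness described above.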


All of this means it will be necessary to collect a fair amount of information from users about what they may be experiencing when they are being bullied. I mention this because one of the dreams I have for the game, which I’m not even sure is possible, is that users will input aspects of their bullying experience and upload it to the game’s AI, which will sift each user’s bully experience for common themes. Actually, what I’m looking for are the solutions that were successful in dealing with a specific bully approach. Right? Like, if one (or 100) kids are bullied by never being picked to play on the basketball team, there have to be some verified ways that kids dealt with that situation successfully, right?

So in my mind, I’m thinking an algorithm could be written that would basically sift through all of the successful ways of dealing with a particular bullying situation (being ignored, being physically assaulted, having rumors spread about you, etc.) and have the AI in the game spit back 2-3 solutions that could be useful to deal with the situation. Have the AI say: hey, if you’re dealing with, for example, being ignored at school, here is a list of 3 things other users have employed and found to be successful in dealing with the situation. The game will tell you that you’ll have a 74% chance of success by dealing with the situation using Approach A, a 65% chance of success using Approach B, etc. Which approach would you like to practice?

That’s the way it plays out in my head anyway. I have no idea if it’s realistic or not. Where’s the goddam Holodeck when you need one?! I mean, you would obviously have to come up with your own definition of success for each scenario, and even that would be specific to the individual. No definition of success would fit all situations, and I think ‘having the bully leave me alone 100%’ is an unreasonable outcome for all users.
So maybe that’s the user input piece that’s participatory: define the outcome that would mean success before you even enter the module.
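
As a thought experiment only, the “sift user reports for successful approaches” idea could look something like this in C#. All the type names, fields, and labels here are hypothetical placeholders, not a real design:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch: aggregate user-reported outcomes per bullying
// scenario and surface the 2-3 approaches with the highest success rate.
public class Outcome
{
    public string Scenario;  // e.g. "being ignored"
    public string Approach;  // e.g. "Approach A"
    public bool Succeeded;   // did the user count it as a success?
}

public static class ApproachRanker
{
    // Returns up to `count` approaches for a scenario, ranked by the
    // fraction of users who reported success with each one.
    public static List<(string Approach, double SuccessRate)> TopApproaches(
        IEnumerable<Outcome> outcomes, string scenario, int count = 3)
    {
        return outcomes
            .Where(o => o.Scenario == scenario)
            .GroupBy(o => o.Approach)
            .Select(g => (g.Key,
                          (double)g.Count(o => o.Succeeded) / g.Count()))
            .OrderByDescending(t => t.Item2)
            .Take(count)
            .ToList();
    }
}
```

The hard parts, of course, are everything the sketch glosses over: collecting honest reports, and letting each user define what “Succeeded” even means for them, which is the participatory piece above.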



Immersive Experience

Framework for why this is relevant for Underdog: Immersive experiences aren’t just an expression of true engagement, they’re also pretty exemplary of a sense of wonder. If the experience you’re having as a user is one of a complete giving over of self, why would you want to be, or do, anything else? In my mind, I’m curious if I’ll even need to try super hard with my core group of intended users, i.e. young people on the autism spectrum, in terms of immersive visuals, simply because I feel like the novelty of the technology will automatically create a sense of wonder for them. The fact that they may actually get something useful out of playing (i.e. how to deal with particular types of bullying) would simply be a byproduct of being in the environment. However, I’ve been thinking lately about what makes particular categories of storytelling more immersive than others, or wondering whether certain categories of storytelling simply appeal to certain personality types or life experiences more than others.

For me, the category of story that was most immersive was mysteries. Maybe it was identifying with the sleuth, trying to figure out the core questions as the main character was doing the same thing. Who could do it better, or faster? Encyclopedia Brown, Sherlock Holmes, and the Native American Tracker were some of my favorite icons growing up. And Batman. He’s a hell of a detective in the comics. In these stories, I identified with the character and participated in the story, which offered a more immersive experience for me to enjoy. And maybe those are the aims of some really great VR storytelling: character identification and participation in the narrative.

Seeing as most people who are being bullied, and potentially buying/using the game, could hopefully identify with the mythos of being an Underdog, I think I have that part somewhat under control. There are ways to engage that more deeply, of course. That a great deal of people cheer for underdogs as a matter of principle, that at some point in their lives *everyone* has been one so there’s the relational aspect that everyone can identify with the experience, that most stories of Underdogs end with the main character winning against great odds (or even if they don’t win they learn a valuable personal lesson), and then there’s the whole Teddy Roosevelt ‘man who is in the arena’ aspect. I think there’s a lot in the premise of being an Underdog that people can relate to that can help the experience be immersive, both inside and outside of the game.

But the flipside of the immersive piece that makes it complete and full immersion is the participation in the narrative. One of the avenues for participation I see is user-created content. One of the things I’m planning on implementing into the game is a video clip collection section, perhaps in the opening Menu, or adding clips into the game via the game website, but the question I want users to answer is, ‘Why are you an Underdog?’.
Here Underdog Tribe members will be able to tell their stories in 1-minute video clips about why they believe they’re underdogs, maybe some skills (or self-acceptance) they feel like they’re learning as a result of playing the game, sharing successes, and hopefully offering some inspiration from their experiences along the way. In one of my earlier blog entries here, I posted an article that said user-created content was like the holy grail of the product experience. And if the content that’s being created is all of these small and big wins that help Underdog Tribe members become their best selves, then I will consider that a hell of a game, a successful endeavor, and time well spent.


