Forum Discussion

KingHedleyiii
9 years ago

Oculus Launchpad Dev Blog- 7th Entry- DevLog 757.007

In case you're seeing one of my posts for the first time, I have a bit of a structure to my posts: I typically deal with 6 areas related to VR noob development and the development of my VR experience, Underdog. The 6 areas are Scripting, Audio, Made with Unity, GitHub, Storytelling/Narrative, and Immersive Experience. It's my belief that if I stay focused on these areas, not only will I learn a great deal, but I'll have a well-rounded development process for the experience I'm working on building. Today's post will be light on Immersive Experience because the machine says that if I include it, I'm 2000+ characters over the limit. So I will add that conversation to the next entry post.

Scripting

  • Why the heck do we need a function?

  • A Function is a set of lines of code that performs a particular job

  • A Function has 3 main parts: Inputs, Processor, and Outputs; Inputs can be numbers, strings, or any other type. Processor is the function itself (actually a set of code lines). Outputs are what the function returns when it has finished doing what it was told to do.


  • When you don’t want the function to return anything, you write ‘void’ before the function’s name. (See line 19, before the ‘Shoot’ function)

  • Parameters are the inputs a function takes: the function receives them, does its task with them, and then may return data
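To make the inputs/processor/outputs idea concrete, here’s a minimal sketch. The post is about Unity C#, but the same shape holds in any C-family language; this example is in Java, and the names (addDamage, shoot) are made up for illustration:

```java
// Minimal sketch of the three parts of a function.
public class FunctionDemo {
    // Inputs: two ints. Processor: the body. Output: the returned int.
    static int addDamage(int baseDamage, int bonus) {
        return baseDamage + bonus;
    }

    // 'void' before the name means this function returns nothing,
    // like the Shoot function mentioned above.
    static void shoot() {
        System.out.println("Bang!");
    }

    public static void main(String[] args) {
        System.out.println(addDamage(10, 5)); // prints 15
        shoot();                              // prints Bang!, returns nothing
    }
}
```

Note how the caller of addDamage gets a value back it can use, while the caller of shoot just triggers a side effect.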


Audio

http://superpowered.com/3d-spatialized-audio-virtual-reality

How 3D Spatialized Audio Bottlenecks Virtual Reality Video: “Virtual Reality will create another computing boom. VR is insatiable in demand for better batteries, more bandwidth, more processing power.”

  • VR is an unrelenting, greedy God, shamelessly demanding the entire capacity (and MOAR!) of your smartphone’s CPU, GPU, RAM and memory bus for its needs.
  • Samsung Gear VR throttles back performance when it reaches thermal limits.
  • VR must use spatialized audio to enable true immersion. 3D spatialized audio allows VR users to localize and perceive sounds in 3D space around them, just as one does in meat-space.
  • Sound occlusion is a very hard problem to solve in terms of computation power. Whereas in global illumination you may consider the movement of light as effectively instantaneous, sound is moving very slowly. Therefore calculating the way sound actually moves around (as waves) in a room is not feasible computationally. For the same reason there are many approaches towards spatialisation tackling different problems to various extents.
  • For VR to fulfill its promise, it must offer users spatialized audio yet spatialized audio increases the demand for computation and power by orders of magnitude.
  • Hence, within every VR app, audio competes with video rendering and physics engines in a resource war for access to computation.
  • The current state of spatialized audio can be categorized into 2 types: cinematic and object-based
  • Cinematic VR audio is used for pre-generated video and film content (easy to stream); object-based VR audio is used mostly for games

  • Cinematic spatial audio in VR is implemented with ambisonics technology. Ambisonics has a fixed number of continuously streamed audio channels with 3D audio information.
  • In cinematic spatial audio, the sound of the chirping bird circling our user is distributed amongst the virtual speakers in the sound field. This is how audio in Google Cardboard works.

Object-based spatial audio looks like this:

  • in object-based VR audio, the virtual speakers aren’t fixed in number or in their position relative to you. Every sound source gets its own virtual speaker. For example, if the chirping bird is flying around you in a virtual space, it is as if a speaker were affixed to the bird.

  • The more virtual speakers, the more immersive the audio; the more immersive the audio, the better the experience, and the more computation is needed for audio processing and, of course, the hotter the device gets. The hotter the device gets, the less efficiently it processes.
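As a toy illustration of the object-based model above, each sound source can carry its own position, and the engine computes per-ear gains from the source’s angle around the listener. This is only a sketch with made-up names; real VR audio engines use HRTF filtering per source (which is exactly why the per-source cost piles up), not simple equal-power panning:

```java
// Toy object-based spatializer: one "virtual speaker" per source,
// panned left/right by its angle around the listener.
public class SpatialSketch {
    static class Source {
        double x, y;                     // position relative to the listener
        Source(double x, double y) { this.x = x; this.y = y; }
    }

    // Equal-power pan: returns {leftGain, rightGain} for one source.
    static double[] panGains(Source s) {
        double angle = Math.atan2(s.x, s.y);      // 0 = straight ahead
        double pan = Math.sin(angle);             // -1 (full left) .. +1 (full right)
        double left = Math.cos((pan + 1) * Math.PI / 4);
        double right = Math.sin((pan + 1) * Math.PI / 4);
        return new double[] { left, right };      // left^2 + right^2 == 1
    }

    public static void main(String[] args) {
        Source bird = new Source(1.0, 0.0);       // chirping bird directly to the right
        double[] g = panGains(bird);
        System.out.printf("L=%.2f R=%.2f%n", g[0], g[1]); // almost all gain in the right ear
    }
}
```

Each extra Source means another panGains call (or, in a real engine, another HRTF convolution) every audio frame, which is the resource war with rendering and physics the article describes.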


Made With Unity: madewith.unity.com

The following article, ‘Building a Pine demo in 3 months’, describes a pretty similar experience to what we’re doing with the Launchpad experience: tight time frame, lofty goals, small teams… Here are some of the major takeaways:

  • The team of 5 got together in the same room only once a week for 2-3 hours just to keep each other up to date on progress with assigned tasks. For this, it seems like use of Trello or HacknPlan (basically Trello for game development) would be useful without having to be in the same room with one another, but I absolutely understand the attempt at cohesive team camaraderie that being together physically can help to facilitate

  • Tight online planning using Google Docs and Sheets: write everything down. Always. Keeping up with all the information that needs to be tracked during game development is insane, so having a touchstone where all issues can be addressed is key. Which brings up the issue of designing optimal workflows for teams, team leadership, and creating cultures within teams that hold people accountable for tasks without being insane taskmasters that no one wants to work with. Could definitely be an issue to take a look at in Harvard Business Review articles and book series regarding team leadership, especially in the online collaboration realm most game dev teams seem to find themselves in.

  • Quick decisions, quick iterations:  too often someone gets stuck on a task because she or he wants to deliver it pitch-perfectly the first time around. That's not gonna happen, so put something in the game and we'll see what we can do to improve it as fast as possible. I see this theme coming up over and over in game development. It’s basically the ‘Fail Fast’ motto. Just don’t get caught up in making something perfect the very first time you make it. It’s a process, and you have to put a baseline out there first before it can ever be improved upon. So just make sure to get *something* out there ASAP.

  • With regard to Underdog, one of the issues that I’ve come across with not getting too hung up on what gets out there first is developing the in-game surveys that users encounter that will lead to a feedback loop of how they engage the game. We’re basically assessing outcomes in 2 areas: 1) improved knowledge about a bullying experience, and 2) improved knowledge about actionable steps you can take in the real world in order to improve a situation where a user may be encountering a bully


GitHub for Noobs

Hanging out with Travis on GitHub for Noobs YouTube channel: https://www.youtube.com/watch?v=BKr8lbx3uFY

  • Adding/removing lines of code in Git and what that looks like via Atom on GitHub

  • Using the Atom terminal window (the terminal GitHub recommends), once you remove a line of code in your source control, that line will be highlighted in pink. The new line of code that you added will show up in green.


  • You may notice in the text box on the lower left hand side that Travis is typing a comment about the change he made in his repo. I like this comment feature because in this summary box you can give a plain-language explanation of the changes you made to your code. Pretty cool.


  • On GitHub, when a pull request has been sent, it can only be accepted or denied from within the GitHub website (meaning, you won’t be able to accept/deny via Atom or the terminal where you sent the request from)

  • This is pretty much where he ends the tutorial with this lesson. In lesson 4 he says he’ll be operating directly from the Command Line in the Terminal window, so that should be interesting.

  • I haven’t been using a lot of GitHub with Underdog, but seeing as how I’m such a new developer I’m sure it will be a valuable resource to have so I can see the coding structures of many of the more advanced coders working on my team. Having the opportunity to make branches on my own without disrupting the Master Branch that has already been created gives me an opportunity to practice skills and not have to worry about ‘breaking’ the whole thing. I definitely think sharing a repo can be a very useful exercise, both in project maintenance, and in skill development
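The branch-without-breaking-master idea above can be practiced safely in a throwaway repo before trying it on a shared one. A rough sketch (the path, names, and messages are all made up):

```shell
# Practice branching in a throwaway repo so a real project's
# main line is never at risk.
mkdir -p /tmp/branch-practice && cd /tmp/branch-practice
git init -q
git config user.email "dev@example.com"   # identity needed so commits work anywhere
git config user.name "Dev"
git commit -q --allow-empty -m "initial commit"

git checkout -q -b experiment             # new branch; the main line is untouched
echo "trying a risky change" > notes.txt
git add notes.txt
git commit -q -m "experiment on a side branch"

git checkout -q -                         # back to the original branch (no notes.txt here yet)
git merge -q experiment                   # bring the change over once it actually works
```

If the experiment goes wrong, you just delete the branch instead of merging, and the main line never knew it existed.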


Storytelling/Narrative

Voices of VR Podcast #415: ‘Pearl’ is an Emotionally Powerful Story about Selfless Service:

  • Produced by Google Spotlight Stories

  • Patrick Osborne, director of Pearl is the interviewee

  • Pearl is conceptually like an automotive version of Shel Silverstein's The Giving Tree

  • There’s a new interactive version of Pearl, which, obviously, isn’t available for Google Cardboard

  • Osborne said that for the first 30 seconds, you’re just given time to orient yourself with the car, and to some extent with the structure of how the story will be delivered. Meaning, if you’re inside a car, you generally always know where the steering wheel will be, so there’s no need to reorient you to a new scene landscape. That way, there aren’t any new rules that need to be discovered by the user for intake of the narrative, as can be pretty standard in VR when a user enters a new environment. I also found his statement about letting the user orient themselves to the scene by not having anything to do for the first 30 seconds or so very intriguing. I forget which other post in this series talks about that technique as a best practice, but I’ve definitely seen it mentioned before: in VR, you really should allow people time just to adjust to the scene and the scene controls before you ask anything of them within the story/game

  • There’s no equivalent in VR of: going to the movies, the lights go out, the trailer plays, the movie comes on after. That’s the process of going to the movies and you know exactly what to expect. VR hasn’t reached that standard of expected behavior yet where people know where to look and what to do once they’re inside an environment

  • Osborne says better authoring tools are needed for VR to help be able to pitch story ideas faster

  • Branching a story is interesting and has possibilities, but it may become very expensive to produce branches that an audience might never even see

  • Something Osborne said hit home for me as well. He was talking about how he loved the Apollo 11 VR experience because he felt, as he was doing it, “hey my dad would love this”. And as he says this he mentions the possibility for the inter-generationality of VR. That you have experiences that ‘play’ better, or are more interesting, for specific age demographics. He believes there are opportunities there for content to be shared across generational lines, which is an opportunity not always available when a new technology is introduced.

  • Like if you have a Vive, you have 6 experiences downloaded on your machine, and you keep several for the different types of people you know will try it out

  • Audience is still learning how to watch VR


