06-29-2017 09:45 AM
I made a lot of progress this week. I finished the Unity Courseware and began work on the game. I bought several models, including models for the “regular” enemy crabs, the Crab King, and the underwater plants and skybox.
I started with the crab’s movement, getting it to move back and forth in a straight line with animation. Then I added hats to differentiate the “regular” crabs from one another (Pawn, Knight, Bishop, Rook). I implemented a spawning system (for now, pressing the spacebar starts the game) and a system to advance to the next crab after defeat.
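For anyone curious, the back-and-forth movement boils down to something like this (a minimal sketch; the class and field names are placeholders, not the project’s actual scripts):

```csharp
using UnityEngine;

// Sketch of the straight-line patrol: walk one way, flip at the ends.
public class CrabPatrol : MonoBehaviour
{
    public float patrolDistance = 5f; // half-width of the patrol path
    public float speed = 2f;

    private Vector3 origin;

    void Start()
    {
        origin = transform.position;
    }

    void Update()
    {
        // PingPong sweeps from 0 to 2*patrolDistance and back; recentering it
        // around the origin gives the back-and-forth path.
        float offset = Mathf.PingPong(Time.time * speed, patrolDistance * 2f) - patrolDistance;
        transform.position = origin + Vector3.right * offset;
    }
}
```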
Next, I implemented the fighting system. I used the BladeSmith Melee Combat add-on from the Asset Store to set up the claws as weapons and the player as the target. Then I did the same the other way around, setting it up so that a mouse click performs an attack and damages the enemy. I also used the same system to add animations for damage and death.
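The actual weapon and target setup happens in BladeSmith’s inspector, but the click-to-attack flow amounts to something like this generic sketch (not BladeSmith’s API; “TakeDamage” and the “Enemy” tag are placeholders of my own):

```csharp
using UnityEngine;

// Generic click-to-attack: raycast forward from the player on left click
// and notify whatever we hit, if it is tagged as an enemy.
public class PlayerAttack : MonoBehaviour
{
    public float range = 2f;
    public int damage = 10;

    void Update()
    {
        if (Input.GetMouseButtonDown(0)) // left click
        {
            RaycastHit hit;
            if (Physics.Raycast(transform.position, transform.forward, out hit, range)
                && hit.collider.CompareTag("Enemy"))
            {
                // The enemy decides how to react (damage animation, death, etc.).
                hit.collider.SendMessage("TakeDamage", damage, SendMessageOptions.DontRequireReceiver);
            }
        }
    }
}
```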
I also updated the movement system so that the enemy crab moves in a semicircle around the player. There are occasional bugs, but for the most part the math works. The crab’s speed depends on its level (Pawn through Rook).
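The semicircle itself is just basic trigonometry; here’s a simplified sketch (placeholder names, and none of the real script’s edge-case handling):

```csharp
using UnityEngine;

// Sweep the crab along a half-circle around the player, faster at higher levels.
public class CrabOrbit : MonoBehaviour
{
    public Transform player;
    public int level = 1;                  // 1 = Pawn ... 4 = Rook
    public float radius = 3f;
    public float baseDegreesPerSecond = 30f;

    void Update()
    {
        // Ping-pong the angle across a 180-degree arc; level scales the speed.
        float degrees = Mathf.PingPong(Time.time * baseDegreesPerSecond * level, 180f);
        float rad = degrees * Mathf.Deg2Rad;

        Vector3 offset = new Vector3(Mathf.Cos(rad), 0f, Mathf.Sin(rad)) * radius;
        transform.position = player.position + offset;
        transform.LookAt(player); // keep the claws pointed at the player
    }
}
```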
I added the ability to block, though for now it is mapped to the right mouse button. A cylindrical shield is raised when the player right-clicks, stays active for two seconds, then lowers. Eventually this will be controlled by the position of the controller in space.
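The blocking logic is a simple timed coroutine, roughly like this (again, a sketch with placeholder names):

```csharp
using System.Collections;
using UnityEngine;

// Raise the shield on right click, keep it up for two seconds, then lower it.
public class ShieldBlock : MonoBehaviour
{
    public GameObject shield;        // the cylindrical shield mesh
    public float blockDuration = 2f;

    private bool blocking;

    void Update()
    {
        if (Input.GetMouseButtonDown(1) && !blocking) // right click
            StartCoroutine(RaiseShield());
    }

    IEnumerator RaiseShield()
    {
        blocking = true;
        shield.SetActive(true);
        yield return new WaitForSeconds(blockDuration);
        shield.SetActive(false);
        blocking = false;
    }
}
```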
Finally, I added some bubbles to make the scene feel more like it is taking place underwater!
I still have a long way to go, including setting up the underwater scene with better and more varied plants, gently moving the plants to simulate underwater currents, and implementing the true control scheme and UI. Right now the camera is controlled with the keyboard and attacks with the mouse, and the next enemy spawns immediately, without any UI or cutscenes. Finally, I need to implement the Crab King using a separate model and stats.
06-29-2017 10:01 AM
Oculus Launchpad Dev Blog 2: Getting Technical Without a Technical Background
This past week, I made a lot of headway reaching out to new contacts in LA, as well as digging into new technical tools and skills for my project. I tested out two cinematic VR platforms. The first, EEVO, helps you build interactive VR experiences simply. It is a cool platform for building teleportation and branching narratives, but the drawback is that it only publishes to their app unless you hire them to build a personalized app for you. It may be a good option for building a simple prototype, but it’s not as customizable as I’d like for my project.
The other platform I tried is called Headjack. It helps you customize and build VR applications that you can export to many different platforms. It also simplifies the design of menus for cinematic content, and it can integrate with Unity for further customization. I spent some time exploring this for my project, and I think it could be a really good fit for what I’d like to build. I just need to learn a bit more about integrating cinematic content in Unity before I can use their templates and API. A lot of this language is also new to me (API, SDK, APK, JDK… maybe someone can help me understand what those mean better, but I am up and running with some practical knowledge of how to use them).
The rest of the week I spent on Unity tutorials. I don’t remember who, but someone posted this very useful playlist of Gear VR tutorial videos: https://www.youtube.com/playlist?list=PLojp5Kr-sMX_nabw9VmqNBc_FoiJC2kR0&app=desktop. I’ve been working my way through it, and I built my first functioning test app on Gear VR! It feels like a huge accomplishment, even though it only consisted of a plane and some cubes… Baby steps. Next I’ll be working on testing 360 video and stereoscopic video integration.
06-30-2017 10:17 AM
Currently, I am exploring https://keen.io/ as a metrics provider, to capture programmatic metrics like gaze inside the application by giving objects in a scene some method to send event data. Ideally, almost every object would detect when it’s being looked at, and I’d then have access to all of the data to decide what counts as an effective gaze. If anyone has suggestions on how to collect data from a VR environment (I’d imagine it needs fast writes, given the unpredictability of the VR view), I’d love to hear them.
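To make the question concrete, here’s roughly what I’m picturing: raycast from the camera each frame, accumulate dwell time per object, and post an event when the gaze moves on. This is only a sketch; the Keen endpoint shape is from their REST docs as I understand them, and the project ID and write key are placeholders:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Attach to the VR camera. Tracks which object is under the gaze and for
// how long, then posts a JSON event to Keen when the gaze moves elsewhere.
public class GazeTracker : MonoBehaviour
{
    public string projectId = "YOUR_PROJECT_ID"; // placeholder
    public string writeKey = "YOUR_WRITE_KEY";   // placeholder
    public float minDwellSeconds = 0.5f;         // ignore glances shorter than this

    private GameObject currentTarget;
    private float dwellTime;

    void Update()
    {
        RaycastHit hit;
        GameObject target =
            Physics.Raycast(transform.position, transform.forward, out hit)
                ? hit.collider.gameObject : null;

        if (target == currentTarget)
        {
            dwellTime += Time.deltaTime;
            return;
        }

        // Gaze moved on: report the finished gaze if it lasted long enough.
        if (currentTarget != null && dwellTime >= minDwellSeconds)
            StartCoroutine(SendGazeEvent(currentTarget.name, dwellTime));

        currentTarget = target;
        dwellTime = 0f;
    }

    IEnumerator SendGazeEvent(string objectName, float seconds)
    {
        string url = string.Format(
            "https://api.keen.io/3.0/projects/{0}/events/gazes?api_key={1}",
            projectId, writeKey);
        string json = "{\"object\":\"" + objectName + "\",\"seconds\":" + seconds + "}";

        var request = new UnityWebRequest(url, "POST");
        request.uploadHandler = new UploadHandlerRaw(System.Text.Encoding.UTF8.GetBytes(json));
        request.downloadHandler = new DownloadHandlerBuffer();
        request.SetRequestHeader("Content-Type", "application/json");
        yield return request.SendWebRequest();
    }
}
```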
06-30-2017 01:39 PM
This was the first week the team really dove into the project and began tackling it head on. After some initial discussions, Tony, our artist and creative director, began cranking out look iterations, painting some rough sketches of what our "base" might look like. Remember from last week that we are building a VR competitive multiplayer game, so in this case the base refers to the part of the level that must be captured by opponents. The world is going to be technological fantasy, with ancient tech so advanced that it lends a magical aesthetic to the world. The art style will likely be colorful and bright, based on a low-poly/voxel feel. This will give us the best flexibility going forward and allow us to create interesting, unique 3D pieces while leveraging the plentiful low-poly assets in the Unity Asset Store.
Larry, our level designer, began working on a 2D sketch of the playing field and started creating a low-fidelity 3D mockup of the final level. We want to make sure our level has many of the features you would find in a triple-A multiplayer game like Overwatch or Call of Duty. I personally believe that level design and game balancing are where we can make the biggest impact in the shortest amount of time, given the size of our team. Our plan is to integrate the level design into the environment scene next week, see it in VR, and continue iterating on it as the project proceeds.
On my side, I began working on the basic multiplayer networking infrastructure required to create a team-based multiplayer game. In our game, one of the core objectives is to pull switches on a monolith at the center of the map in order to summon a badass monster. I began by setting up the multiplayer component of that mechanic. I also worked on a tracking spreadsheet and prioritized features. We had to cut more than half of the features I originally scoped, and we will probably still have to cut more if we want a fun, balanced experience.
Since the scope of our project is so ambitious, one of the things that is so nice about Unity is that there is a free or paid asset for almost anything imaginable to help you realize your vision. Today I'd like to share some of the tools I've incorporated into my workflow over the past few months. There is still a mountain of work to be done before we have a playable version of our product, but thankfully the many free assets available online make it feasible.
Photon is recognized as one of the most user-friendly ways to build a multiplayer experience. It's extremely flexible, and free up to 20 concurrent users! It is very simple to use for what it is, but remember to at least double your estimates for any product with multiplayer interaction, maybe more if it's your first time. If you are planning on scaling your product to thousands of concurrent users, or need server-authoritative logic, then you'd probably want to use something else; for a smaller indie project, however, it's perfect, especially for beginners.
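As a taste of how little code a networked interaction takes, here's a minimal sketch of our monolith-switch mechanic as a Photon RPC (PUN 1.x-era API; the names are placeholders, not our actual code):

```csharp
using UnityEngine;

// One switch on the monolith. Pulling it locally broadcasts the state
// change to every client in the room via an RPC.
// Requires a PhotonView component on the same GameObject.
public class MonolithSwitch : Photon.MonoBehaviour
{
    private bool pulled;

    // Called by whatever local interaction system triggers the switch.
    public void Pull()
    {
        if (!pulled)
            photonView.RPC("PullSwitch", PhotonTargets.All);
    }

    [PunRPC]
    void PullSwitch()
    {
        pulled = true;
        // Every client now agrees this switch is pulled; a manager object
        // would check whether all switches are pulled and summon the monster.
        Debug.Log(name + " pulled!");
    }
}
```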
VRTK is a free, open-source toolkit that is cross-compatible with both the Vive and the Rift. Exactly like it sounds, it provides a set of tools that allow you to quickly design VR experiences. Some of my favorite built-in interactions include grabbing and using objects, as well as teleportation. The people who work on it are also super nice and responsive.
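To give a sense of how quick it is, here's a sketch of making an object grabbable with VRTK 3.x (you'd normally just add the component in the inspector; the script form is for illustration, and assumes the stock interactable component with its default grab settings):

```csharp
using UnityEngine;
using VRTK;

// Makes whatever object this sits on grabbable by a VRTK-driven controller.
public class MakeGrabbable : MonoBehaviour
{
    void Start()
    {
        var interactable = gameObject.AddComponent<VRTK_InteractableObject>();
        interactable.isGrabbable = true;
    }
}
```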
Behavior Designer is like PlayMaker for AI. Basically, it's a node-based visual scripting tool that allows you to build complex AI behaviors using a behavior tree. If you are a non-coder, they have supplementary add-on packages that make it easy to mock out all sorts of behaviors without writing a single line of code. If you are a coder, all of those actions are well documented with source code included, so you can modify them and see how everything works! The UI/UX is intuitive, easy to use, and stable, unlike some other free solutions out there. It's not cheap, but if you are trying to do any sort of complex AI behavior in your game, I can't recommend it highly enough.
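For the coders, a custom task is just a small class. Here's a sketch following Behavior Designer's documented Conditional/TaskStatus pattern (the check itself is made up for illustration):

```csharp
using UnityEngine;
using BehaviorDesigner.Runtime.Tasks;

// A leaf node for a behavior tree: succeeds when the player is close enough
// to "see", fails otherwise. The tree branches on this result.
public class IsPlayerVisible : Conditional
{
    public Transform player;
    public float viewDistance = 10f;

    public override TaskStatus OnUpdate()
    {
        float distance = Vector3.Distance(transform.position, player.position);
        return distance <= viewDistance ? TaskStatus.Success : TaskStatus.Failure;
    }
}
```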
Traditional animation is usually forward kinematic: each individual bone in the rig needs to be carefully positioned and animated. For example, to swing a character's arm out, you'd have to move the shoulder, the upper arm, and the forearm. While these handcrafted animations usually look better, they are time-consuming. Inverse kinematics (or IK) is an animation technique that works by applying motion to a single bone on your character and letting the rest of the character follow suit. A simple example would be moving the hand of your character outwards to swing their arm out. Final IK is an IK solution that also features a VR component for the Rift and Vive. With it, you can create realistic-looking character animations for your player avatar, creating a convincing sense of body presence. Many successful VR titles published in the Oculus store already use a variant of this Unity asset. (You may have seen the developer's work in Arizona Sunshine.)
Of course, you can also use Final IK to animate pretty much any character in your interactive narrative or game.
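If you're curious about the math underneath, here is a toy two-bone solver (not Final IK's API, just the law-of-cosines idea in miniature; attach spheres to the three joint transforms to visualize it):

```csharp
using UnityEngine;

// Given fixed upper/lower arm lengths, place the elbow so the hand reaches
// the target: classic circle-circle intersection via the law of cosines.
public class TwoBoneIKDemo : MonoBehaviour
{
    public Transform shoulder, elbow, hand, target;
    public float upperLength = 0.3f;
    public float lowerLength = 0.3f;

    void LateUpdate()
    {
        Vector3 toTarget = target.position - shoulder.position;
        float reach = Mathf.Clamp(toTarget.magnitude, 0.01f,
                                  upperLength + lowerLength - 0.0001f);
        Vector3 dir = toTarget.normalized;

        // Distance from the shoulder to the elbow's projection onto the
        // shoulder->target line, then the elbow's height off that line.
        float proj = (upperLength * upperLength - lowerLength * lowerLength
                      + reach * reach) / (2f * reach);
        float height = Mathf.Sqrt(Mathf.Max(0f, upperLength * upperLength - proj * proj));

        // Bend "upward" relative to the arm (degenerate if the arm points straight up).
        Vector3 bendDir = Vector3.Cross(Vector3.Cross(dir, Vector3.up), dir).normalized;

        elbow.position = shoulder.position + dir * proj + bendDir * height;
        hand.position = shoulder.position + dir * reach;
    }
}
```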
That's all this week, see you next week!