
Week 2: Oculus Launch Pad (Due July 2 Midnight)

Anonymous
Not applicable
71 REPLIES

virtuallyfifer
Protege
NEUREXPLORER WEEK 2: COMMUNITY

My goal in this piece is to build an experience that is both user friendly and touted in the neuroscience community for accuracy. We don't want to get too far into the building process just to notice a fatal flaw! 

To rally some neuroscientists behind the cause, I made this one sheet to share my vision for the project: 

[image: project one-sheet]

I decided to give our model from Week 1 a test run with Micah Blumberg, a neuroscience and VR enthusiast who works with Tim Mullen of Glassbrain and has a great, albeit slightly hyperbolic, podcast.



He immediately said our model was incomplete. 

"Eyes are actually made of the same glial cells as the brain. I don't think the model would be complete without them". 

Back to the drawing board...
[image]


Isn't it crazy that this funny-looking thing is what makes us human?

With the addition of eyes, I wanted to push further.

How do the eyes transmit data for the brain to translate? Turns out they push this data from the back of the eye to the optic nerve, crossing over at the optic chiasm, to the lateral geniculate nucleus in the thalamus (yeah, that old guy in the monkey brain), and finally on to the visual cortex in the neocortex, where it can be processed right alongside memories.

And don't forget this fun fact: each side of your brain processes half the information from each eye.
[image]


This all happens at about the same speed as fast Ethernet... which basically means we could hijack the visual cortex with a brain-machine interface with the advent of 5G. Woohoo!

So how do we visually display this transfer of data? 
Inside the cords running from the eyes, we have sketched out some glial cells which fire in order! Pretty exciting stuff! 
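
In Unity terms, that sequential firing could be a coroutine stepping down the pathway. Here's a rough sketch with placeholder names (not our actual project code), assuming each glial cell is a Renderer with an emissive material:

```csharp
using System.Collections;
using UnityEngine;

// Rough sketch of the "fire in order" effect. The cells are ordered from
// the back of the eye to the visual cortex; each lights up in sequence.
public class SignalPulse : MonoBehaviour
{
    public Renderer[] cellsAlongPath;      // eye -> chiasm -> LGN -> cortex
    public Color firedColor = Color.cyan;
    public float stepSeconds = 0.05f;

    // Call with StartCoroutine(Fire()) when light hits the eye.
    public IEnumerator Fire()
    {
        foreach (var cell in cellsAlongPath)
        {
            cell.material.SetColor("_EmissionColor", firedColor);
            yield return new WaitForSeconds(stepSeconds);
        }
    }
}
```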

**Perhaps instead of doing our passthrough AR experience, we will allow you to shine light into the eye to watch this process occur...**


[image]

I'm looking forward to exploring the new Tilt Brush tools, seeing how best to apply them to our model, and building our community of neuroscientists around this project!

pkshah
Honored Guest
This week was my first working on the project. After receiving my Oculus Rift and Touch, I spent some time thinking through the data model and how exactly to start building this application. Just to have it down in writing on the forum: I'm exploring different ways to build a VR CAD application for solid modeling (think SolidWorks and AutoCAD, as opposed to Maya or ZBrush). Much of my time on the project this week went into getting my VR setup working and planning for the remaining weeks!

Here's a general outline of how the project will progress from this point:
  • Build simple room and menu interface (heavily based on Tilt Brush's two-handed menu; see the sketch after this list)
  • Add solid modeling features one by one
  • Test with users
  • Refine
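
As a hypothetical starting point for that first bullet, a two-handed menu can begin as a panel that rides on the off hand and toggles with a button. This is a sketch under the assumption of Unity with the Oculus Utilities (an OVRCameraRig and OVRInput); none of these names come from the actual project:

```csharp
using UnityEngine;

// Speculative sketch: a world-space menu panel parented to the off hand,
// toggled with the Y button on the left Touch controller.
public class OffHandMenu : MonoBehaviour
{
    public Transform leftHandAnchor;  // LeftHandAnchor from the OVRCameraRig
    public GameObject menuPanel;      // world-space canvas holding the tools

    void Update()
    {
        // Button.Two on the left Touch controller is the Y button.
        if (OVRInput.GetDown(OVRInput.Button.Two, OVRInput.Controller.LTouch))
            menuPanel.SetActive(!menuPanel.activeSelf);

        // Keep the menu riding on the off hand, Tilt Brush style.
        menuPanel.transform.SetPositionAndRotation(
            leftHandAnchor.position, leftHandAnchor.rotation);
    }
}
```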
I'm looking forward to actually getting started with the build next week!

tyruspeace
Protege
Week 2! Exciting! I'm finally back home in Texas and... I've been playing a lot of catch-up with all of the work and home things you get behind on after being out of town for a few weeks.

I only really got a couple days of work in on my VR project this week, so I tried to make the most of them and wrap up the art test. I'm going for a relatively flat style, and I need to get that done enough to test in VR!

Pretty Rainbow Legs
First off, more farting around with pointless pretties. I gave my little "meep" critters gradient limbs! Since their limbs are line renderers while their bodies are textured, I had to get colors by peeking at their UV coordinates. It... wasn't very difficult. Their body texture's a grid that I randomly shift their body parts around on:
[image: body texture grid]
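
That UV-peeking trick can be quite small in practice. A hypothetical version (not the game's actual code), assuming the body texture has Read/Write enabled in its import settings:

```csharp
using UnityEngine;

// Hypothetical sketch: tint a limb LineRenderer with the color of the
// body texture at the limb's attachment vertex.
public class LimbTinter : MonoBehaviour
{
    public MeshRenderer body;      // the critter's textured body
    public LineRenderer limb;      // the gradient limb to tint
    public int attachVertex = 0;   // vertex where the limb meets the body

    void Start()
    {
        var mesh = body.GetComponent<MeshFilter>().sharedMesh;
        Vector2 uv = mesh.uv[attachVertex];
        var tex = (Texture2D)body.sharedMaterial.mainTexture;
        Color c = tex.GetPixelBilinear(uv.x, uv.y);  // peek at the UV coords

        // Fade from the body color at the shoulder out to white at the tip.
        limb.startColor = c;
        limb.endColor = Color.white;
    }
}
```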

The new limbs are a big improvement, because rainbows:
[image: before] → [image: after]

Background City Generation

The next part of my art style test was to do the actual environment. I proceeded to make a bunch of background towers to fill my level and dumped them all into the same Blender file to prepare them for use in-game.
[image]
Unlike the meeps, these models need to have a smooth gradient where they meet the ground, so their texture looks like this:
[image: tower texture]

Each tower is made of a mesh and a pile of "radial prefab placers" that spit out windows as randomly or not-randomly as necessary. They all use the same material, so the background can be batched into one draw call. While the main play area will be mostly flat, the background is a bumpy mess of hills to make things interesting. I randomly munch through a grid, displace individual towers at random, and rotate them to match the ground normal.
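
For anyone wanting to try something similar, here's a rough sketch of that grid-munching scatter. All names and numbers are my own guesses, not the project's code:

```csharp
using UnityEngine;

// Loose sketch: walk a grid, randomly skip cells, nudge each tower within
// its cell, and rotate it to match the ground normal under it.
public class TowerScatter : MonoBehaviour
{
    public GameObject towerPrefab;
    public int gridSize = 20;
    public float cellSize = 8f;
    [Range(0f, 1f)] public float skipChance = 0.4f;

    void Start()
    {
        for (int x = 0; x < gridSize; x++)
        for (int z = 0; z < gridSize; z++)
        {
            if (Random.value < skipChance) continue;  // munch through the grid

            // Displace within the cell so rows don't read as rows.
            Vector3 pos = new Vector3(
                x * cellSize + Random.Range(-cellSize, cellSize) * 0.4f,
                100f,
                z * cellSize + Random.Range(-cellSize, cellSize) * 0.4f);

            // Drop onto the bumpy ground and align to its normal.
            if (Physics.Raycast(pos, Vector3.down, out RaycastHit hit, 200f))
            {
                var tower = Instantiate(towerPrefab, hit.point,
                    Quaternion.FromToRotation(Vector3.up, hit.normal));
                tower.transform.Rotate(0f, Random.Range(0f, 360f), 0f, Space.Self);
            }
        }
    }
}
```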

That topsy-turvy placement really helps them match the style of the rest of the game.

[image]
It performs well! I'm planning to merge all of the meshes for each individual tower. Batching's working so well already that I don't know how beneficial that will be!

Given the flat style, I went with an aggressively contrasting linear fog to make depth more apparent to the player. So I'm just tinting everything with the depth buffer. It looks nice so far! I'm hoping that will feel good in VR, too.
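
Unity's built-in linear fog settings get you most of the way to that look. A minimal sketch with made-up values (the depth-buffer tinting described above would be a custom pass layered on top of this):

```csharp
using UnityEngine;

// Minimal sketch of an aggressively contrasting linear fog.
public class FlatStyleFog : MonoBehaviour
{
    void Start()
    {
        RenderSettings.fog = true;
        RenderSettings.fogMode = FogMode.Linear;  // linear reads cleanly in a flat style
        RenderSettings.fogColor = new Color(0.9f, 0.5f, 0.7f);  // contrasting tint
        RenderSettings.fogStartDistance = 20f;    // play area stays untinted
        RenderSettings.fogEndDistance = 120f;     // background fades hard
    }
}
```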

Completed Mockup/Art Test
Then I modeled a Gear VR controller so I could have a proper mockup. I'm planning on having this spew out all kinds of exciting stuff in-game, but a static model will do for now.
[animated GIF]

It's reassuring that things actually look cohesive now that I've designed a matching background style. I still have a lot of other level elements to design, but I'm now confident that I can make this style work!
[animated GIF]

Anonymous
Not applicable
[image]
Preliminary spherical sketch test on iPad Pro - Ovary and Fimbria

Oculus Launchpad Dev Blog 2: This week started off with more experimentation and time spent inside the Rift. I have continued to research workflow pipelines and styles. I'm nervous about spending so much time on this aspect, but I feel like getting this part right from the beginning will save me a lot of headache down the road. I also had a very productive meeting with my programmer concerning the feasibility of implementing some of my ideas regarding biological processes occurring over time. We discussed some of the "magic" that I would like to happen and how I would need to create my assets in order to achieve the desired effects.

Researching the pipelines, I experimented more in Oculus Quill and Medium.

In Quill, I have been perfecting my brush-stroke technique and exploring what works in terms of scale and placement of elements for my VR space. I have exported various cubemaps and imported them into After Effects layers using the Mettle Skybox plugin (now Adobe). I am testing out animation techniques on these layers to create more depth and animated interest in the distance without having to rely on realtime animation renders for these effects. I have put the idea of exporting an FBX model of my Quill work and importing it into Unity on hold for the moment. I may utilize this pipeline for small features, but current experiments have been clunky.

In Medium, I have been generating organic models and testing the new export features. Medium now lets you export lower-poly meshes of your sculpt. So far, these tests have been very fruitful. I still believe I will need to use ZBrush or Meshmixer to further reduce the meshes, but it is nice to see what comes directly out of Medium. I am encouraged by this work.

I will ultimately be combining video/2d-animations with 3d static and animated assets. I love this hybrid approach and plan to dive deep this upcoming week into the effective blending of the two. 

My search for a spatial audio specialist has also begun, and I believe I will be able to use a local vendor for that work. One of my goals for this project is to use as much local talent as possible, so I was glad to hear this news.

I will be traveling back to the Bay Area this holiday week on a pre-planned family vacation. I plan to use this time to bounce creatively off the constraints I am discovering in my research and to sketch out a detailed VR storyboard vision and asset sheet. I have cleared my calendar for when I return home the following week, and will be in full asset-production mode. I hope to have some great visuals to post!

Until then, have a safe and happy holiday this week! 

dilanshah
Protege
This week I worked on gray-boxing the vertical garden scene with some really simple geometry. For VR, one of the most important effects is a correct relative spatial layout; other aspects, like sound, are not on my radar yet. I also made progress in the domain of interaction ideation. I didn't reshoot the 360˚ background yet, but I consider that a relatively low priority.

[image]


At this point, I'm asking myself how I can be ready with the core experience by the finish date... I can't hide my technical debt in the category of creating geometry and 3D modeling. So I've set out a plan to make the first scene, the vertical garden. I've populated a development tracker tool with known tasks and will be checking in with my mentors in the coming week. 



JDakotaPowell
Protege

Week 2

Kungfu-mama (J Dakota Powell) loves the production tips that pepper this blog – helpful and much appreciated! Endorse VRTK; that toolkit makes the dev process much easier. If anyone needs to toggle between buttons and pointers (i.e., two pointers on one controller), lmk. Wrote a C# script to enable that functionality in VRTK and can share.
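
For anyone who wants the flavor of it, a speculative re-sketch (not the actual script) could be as simple as flipping between two pre-configured pointer rigs on the controller:

```csharp
using UnityEngine;

// Speculative sketch, not Dakota's actual script: toggle between two
// pre-configured VRTK pointer setups on one controller. Each child object
// holds its own VRTK_Pointer + renderer; we just flip which one is active.
public class PointerToggle : MonoBehaviour
{
    public GameObject straightPointerRig;  // child with VRTK_Pointer + straight renderer
    public GameObject bezierPointerRig;    // child with VRTK_Pointer + bezier renderer

    public void Toggle()
    {
        bool straightOn = straightPointerRig.activeSelf;
        straightPointerRig.SetActive(!straightOn);
        bezierPointerRig.SetActive(straightOn);
    }
}
```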



This week, I dove into modeling, texturing and building the environments of the Red Flute RPG (Rift) and will continue on this path for a few weeks. Need to populate scenes first.



Location 1: The tea-maker’s shack in a swamp.

[image]

Still to model: the witch, cranes, turtles, boat, and shack/inside, etc. – but the scale of the exterior scene, look and tone in Unity feel right.



In the midst of modeling the crane atm; then I'll rig and animate the bird. The red-crowned cranes surrounding the witch's shack will foreshadow the flute story-wise. The witch also needs rigging and animation.

[image]

Finished texturing the main temple and tested it in Unity. Again, the inside room poses the next task. In the meantime, I'm constantly dancing between Maya and Unity to test for scale. See Main Temple:

[image]

Rarely use photogrammetry assets because they're too dense and the edge flow is chaos. But I did find a Buddha from the Yungang Grottoes near Datong. Although the Buddha was scanned for archeology purposes, the model is usable in Unity. I may retopologize, but at least I can open the asset and work with it. See Datong Buddha in Unity:

[image]

Finally, I tested the Gaia/Unity plugin, because displacement maps in Maya are a nightmare for creating complex terrains. As long as I can customize the stamps (rivers, rocks, mountains, etc.), this tool may be worth using for this project. For example, I need to create a wall of cliffs for the Buddhas.



Hopefully next week, the cranes will be flying, cawing and pointing their toes!

Anonymous
Not applicable
This post was also published on Medium

It’s the end of week 2 already — where does time go?

If you are new to this blog and want to find out what the Oculus Launch Pad is and what I do in it, check out Week 1 post.

Last week, I set myself some objectives for the week ahead, and unsurprisingly I’ve met some of those but not all. The one thing I’m really trying to stick to is the weekly video and you can watch Week 2 Vlog below.

https://youtu.be/JEyvrahQ5Gc

Vlog Week #2 (you can subscribe on YouTube to be notified every week)

Progress this week

  1. In order to film my 360 movie on capoeira, I have to identify local capoeira groups, in the hope that they will let me film them. I am considering filming my own group back in London (my fabulous Senzala Capoeira London family), but that would be for a future chapter, if I manage to get funding. I am pleased to report that after a break of almost 3 years, I have restarted my capoeira practice! I attended a class at Stanford University. The group was very friendly and welcoming, which is a great start. I was completely out of shape, and 3 days later I still have sore legs. If you've ever tried to play capoeira, you know what I mean 😉
  2. I was hoping to write a full project plan this week; the progress I made was to establish a to-do list for completing the project, with 2 phases: 1) submit a prototype for the Oculus Launch Pad scholarship competition, and 2) finalize the VR experience beyond the Launch Pad. It's a good starting point, and I aim to turn it into a proper plan next week.
  3. I believe that work is no fun unless you treat yourself to something exciting every week and my treat this week was to play with my new Samsung Gear 360! I published the unboxing video, you can see it below. The camera is super pleasant to use and is very promising so far.
https://youtu.be/y7o6d2-GcN0

Samsung Gear 360 (2017) unboxing

This week, I also attended the Female Founder Conference 2017, organized by Y Combinator. While this is not directly related to VR, I mention it here as it helps me in my journey as a female creator. The conference was an afternoon filled with tales from female leaders and founders about successes, failures, funding tips, all delivered with an energy and humor that I had never seen before. This was truly inspirational and I am grateful that I had the freedom to attend this event. This may resonate with the many women in the Oculus Launch Pad as well as the very dedicated Oculus Diversity team. All the talks are available on YouTube.

I took my new Samsung Gear 360 to the conference and had a little fun with it!

Tiny Planet variations of the Herbst Theater in San Francisco
Equirectangular view of the Herbst Theater in San Francisco. Full house!

What I’m planning for next week

  1. One area that I need to invest time in is audio. I need to become more knowledgeable about optimizing sound for a VR experience and I believe it makes a major difference. I will research the area and identify a few microphone options.
  2. The purchase of a microphone will go straight into the budget sheet that I will establish, to keep track of my costs and assess how much funding I need to request in my scholarship application.
  3. I’ve mentioned this before and it’s very important, I will finalize my project plan!

Well, that is, if Independence Day and life do not get in the way…

Happy 4th of July everyone!

Follow me if you are curious to see how week 3 goes. Thanks for reading!

meredithwilson
Protege
Week 2 Dev Blog 

Game Mechanics
The primary flight mechanic for my Gear VR game is very nearly in the bag! It's definitely all thanks to the Easy Input for GearVR asset that I sang the praises of on the Facebook group. I know it's early yet in the development cycle, but I really can't relax until the core game mechanics are done. The way I see it, art assets can be swapped out easily, but if there are no mechanics then there is no game. It's way worse to have a deadline closing in without functioning game mechanics than without enough art assets; fast art-asset production scales really well (e.g., hiring multiple artists), and the same can't be said about game mechanics and hiring more programmers. Anyway, I'm halfway done with my mechanics, so yay for feeling more relaxed now!
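
Purely for illustration, a bare-bones version of a Gear VR flight mechanic with the stock Oculus Utilities might look like the sketch below; this is not the Easy Input implementation, and all names and numbers are placeholders:

```csharp
using UnityEngine;

// Bare-bones illustrative sketch: the player flies where they look,
// throttled by the Gear VR headset touchpad.
public class GazeFlight : MonoBehaviour
{
    public Transform head;     // CenterEyeAnchor from the OVRCameraRig
    public float maxSpeed = 6f;

    void Update()
    {
        // The y axis of the touchpad acts as a throttle.
        float throttle = OVRInput.Get(OVRInput.Axis2D.PrimaryTouchpad).y;
        transform.position += head.forward * throttle * maxSpeed * Time.deltaTime;
    }
}
```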


The next mechanic I'm tackling is definitely more challenging, since the dialogue system I designed for this game is a novel one. It will borrow heavily from the inventory management systems you usually see in RPG-type games. I plan to use Playmaker's ArrayMaker to get this prototyped out; my goal is to have it done by the end of next week. I'm hoping my dialogue system achieves something that I think is lacking in most games, and that's a window into the emotional history of the character you're playing. I'm not a fan of the "your character is a tabula rasa, create them as you go" model that's more or less taken over many fps/e games aimed at US markets (looking at you, Fallout). It's usually sold as a "more choice and customization is better" kind of thing, but honestly I think it's just another manifestation of how most Americans are uncomfortable with the idea of having no control over whose shoes they are put in (this is definitely a challengeable position, I'm just enjoying my view from the soapbox for a moment). In my game, the character whose shoes you're put in is the character you play. No choice of dialogue here, friends. HOWEVER, there will be another component of the dialogue system that does give the player control over another aspect of how they present themselves. We'll see if it has the impact I intend; only playtesting will tell 0_o
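
To make that concrete, here's a speculative sketch of what dialogue-as-inventory data could look like. This is my own reading of the idea in plain C#, not the actual Playmaker/ArrayMaker prototype:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Speculative sketch: lines the character has lived through accumulate as
// items the player can inspect, like gear in an RPG inventory.
[System.Serializable]
public class MemoryItem
{
    public string speaker;
    public string line;
    public string emotionalNote;  // the window into the character's history
}

public class DialogueInventory : MonoBehaviour
{
    public List<MemoryItem> memories = new List<MemoryItem>();

    public void Acquire(MemoryItem item) => memories.Add(item);

    // Everything this character remembers hearing from one speaker.
    public IEnumerable<MemoryItem> From(string speaker) =>
        memories.FindAll(m => m.speaker == speaker);
}
```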

Art/Environment Assets


My good friend Cameron Bronn very generously offered to model the one building interior that my game requires. We haven't worked on a project together yet, so I'm very excited that Oculus Launch Pad has offered an opportunity to do so! Check out his website: https://cameronbronn.wordpress.com/

I'll be modeling the tunnel terrains for my game. Simple tunnels are actually really easy to make in Blender: all you have to do is string a bunch of metaballs together however you want, flip the normals so the faces point inward, export as .fbx to Unity, and you've got yourself a tunnel. Definitely simple enough for a Blender novice like me to handle 😄

Music Assets

I'm looking forward to playing around with some sounds this week with a friend of mine who's been getting into music production. The score of my game will be more evocative than melodic, so it's ideal for a musician who dabbles in experimental music, which is what my friend is into at the moment. Should be fun; I don't usually get to hang out in music studios that much 😄


That's all I've got for now, until next week!

tifa_ain
Protege

Week 2

 

This week has been about experimenting with a wearable rig for actors. Lots of searching for workable solutions and ordering and sending back parts. I found a rig from a company called sailvideosystem.com, but I don't like that its emphasis is on shooting from behind the actor. It seems like a solid system, and I know someone who uses it for underwater diving and 360 video, but I don't think it's what I am looking for.

I started with a quick sketch of how what I was looking for could be done.

[image: initial sketch]

Basically, I want an articulating arm to support the gimbal/camera. It’s important to me to have options for closeness (hence the adjustable arm) because I want to focus on intimacy with characters. I also want the ability to move through a space with the character. AND I want the actor to feel comfortable enough to wear the rig without it inhibiting the performance.

 

I tried a chest mount, but it couldn’t support the arm.

[image: chest mount]

I tried something called the Peak Design, which has a quick-release plate (that would be cool!) and attaches to a belt or a backpack; it's made for carrying around heavy cameras. Because it seemed to be more naturally integrated into the way people move, I wanted to try it. But even at its most stable, on my belt, the arm was still too heavy to work.

[image: backpack]

So, I started connecting things to the Glide Gear chest mount. The first arm I used kept slipping through the threading, either at the arm or at the point of connection on the chest mount.

[image: view from the back]

But, it seemed like a good path, so I ordered another, sturdier, arm.

When attached firmly, the Glide chest mount was able to support the arm and gimbal.

Some success!

Here are all the parts:

[image: all parts]

Once I got the hang of using the gimbal (weights matter so much lol), I checked the footage in the Samsung VR app. Non-working gimbal = VR sickness big time. Working gimbal = looks pretty nice. Here's a link to me walking around with the gimbal working.

 

https://samsungvr.com/view/gNntLA2HWp3

 

So, this next week will be spent actually moving with it on, trying some conversations, and seeing how it looks with more than one person in the shot. And hopefully getting a "real" actor to wear it and give me feedback. And yeah, I know the lamest thing about it is how big it is... Lots of characters wear vests though, right?

 

Introspective note: what if this is all I work on for 10 weeks? What if no idea pulls on me as strongly as trying to accomplish this rig?

Also - scrolling through these posts above is totally inspiring me - thanks for sharing all!

LynGaza
Explorer
It was really helpful reading last week's posts. I wasn't able to read them all, but even reading some of them is a great way to pick up tips and learn about process. At the end of last week, I was pretty settled on an idea. Then this week, I fell down the rabbit hole of researching how to realize it, and now I'm not 100% sure.

The Idea:
Exploring ancient sites as the original computers. In many native cultures around the world, buildings were created in relation to mathematical principles observed in the sky and in nature. From Stonehenge and other Neolithic sites in the British Isles to the Maya in Central America, evidence of a relationship between astronomy and architecture can be found. The experience would be educational, connect cultures from around the world in a common history, and connect our modern-day computing with early civilizations.

Execution:
Create an environment similar to the original sites in virtual reality and simulate through animation what happens visually during astronomical events like the summer solstice. Allow the user to learn details about the building and the cultural significance of the architecture, and to find objects that would typically be discovered at these sites. Once discovered, these objects can be explored as 3D models.
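
One low-budget way to prototype that solstice animation in Unity is to drive a directional-light "sun" along a sunrise-to-sunset arc and watch where it strikes the model. A hedged sketch with placeholder numbers, not surveyed alignments:

```csharp
using UnityEngine;

// Hedged sketch: sweep a directional light across the sky over one
// simulated day to fake the solstice sunrise alignment.
public class SolsticeSun : MonoBehaviour
{
    public Light sun;                   // a directional light
    public float dayLengthSeconds = 60f;
    public float sunriseAzimuth = 49f;  // roughly NE, as for a midsummer sunrise

    void Update()
    {
        // t runs 0..1 over one simulated day.
        float t = (Time.time % dayLengthSeconds) / dayLengthSeconds;
        float elevation = Mathf.Sin(t * Mathf.PI) * 60f;  // up to 60 deg at noon
        float azimuth = Mathf.Lerp(sunriseAzimuth, 360f - sunriseAzimuth, t);
        sun.transform.rotation = Quaternion.Euler(elevation, azimuth, 0f);
    }
}
```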

The Problem:
How can I create or record a building with interactive animation in the time we have to demo and with no budget?

The Rabbit Hole:
I started researching and talking to people. My friend Zenka suggested that I try to find some existing 360 footage of a site and ask the creator for permission to use it in the project. Option B: get somebody to model it in 3D so users could move through the space. Better yet, find some existing models and use them for proof of concept. What I really want is a photorealistic version that can be explored in 3D.

I talked to my contact Tyler at the Thinkwell Group, and he suggested photogrammetry. Further research uncovered the fact that both Stonehenge and a Mayan temple site have been scanned this way. There was even a site that offered a download of the 3D files of a Mayan temple, but the link was a 404, so that turned out to be a dead end.

Further down the rabbit hole, it turns out there are VR experiences of both Mayan temples (http://realities.io/) and Stonehenge (http://www.voyagervr.com/stonehengevr). Reading more about these projects, it seems it's difficult and expensive to A) process the files and B) get a high enough resolution for mobile. Suddenly, I'm feeling like my idea is less original and harder to achieve. Stepping back for a minute to report on my activities this week.

Activities:
  • Worked on fleshing out the idea and best approach to execute it
  • Researched photogrammetry 
  • Found some software that can turn 2D digital photos into 3D models. Pretty cool, but it needs a PC. I think I'll probably get a PC before the summer is over.
  • Talked with Tyler at Thinkwell Group about how they're using VR/AR for their experiences and attractions, and the general pros and cons of VR. Interesting note: VR can be projected into a cave-like environment and viewed with 3D glasses.
  • Attended the AR Adoption panel at Upload. Notes from the panel: AR can be projected; there are two issues with making wearable AR glasses, optics and user comfort (everyone's eye placement and field of vision are slightly different); and while VR requires the whole world to be created, with AR half of the equation is already in place.
  • Received and installed skybox plugins from Mettle
  • Researched and downloaded more apps: Zero Days VR, Through the Masks of Luzia, Chernobyl VR
  • Reached out to Unity dev friend 
  • Installed Unity on my desktop computer 
  • Contacted friend who works at the British Museum regarding VR projects in museums
  • Watched Alex Kipman's (creator of HoloLens) TED Talk on "bringing holograms into the real world"

Notes & Conclusions:
  • I like the idea of using photogrammetry and would like to go in this direction, if possible. Seems like a great tool to preserve history and culture.
  • I haven’t fully given up on the Ancient Computer project direction, but it still doesn’t feel totally there. I need to figure out a way to realize a prototype that feels “done enough” but is achievable in the time frame. I’m sure everyone is struggling with that one.
  • The number of platforms is frustrating. There were two projects I really wanted to see, and one was on Vive, the other on Rift.
  • Need to start doing more hands on stuff as I feel this will help my idea progress faster
  • Still having trouble with the controller losing contact and the phone overheating. It’s difficult to spend more than 5 minutes in an experience before one or the other fails.
  • Chernobyl VR is pretty close to what I’d like to achieve. They use a combination of photogrammetry, 360 photography and audio. I found it to be effective at telling a story and immersing the viewer in the world.

To do’s for this week:
  • Talk with Dana about Unity and what’s possible
  • Continue to develop Ancient Computer idea
  • Sketch, mind map and storyboard
  • Create project one sheet
  • Continue exploring the possibility of using some existing models
  • Shoot test footage and put it through pipeline using the available tools
  • Finish Unity tutorials I started
  • Explore combining 360 video and UI in Unity
  • Do Mettle Skybox tutorials
  • Watch YouTube Creator Academy videos