As I write, I'm at work for Virtually Live in Eichenried, Germany, giving demos to people (mostly kids). I feel strongly that with VR you must don a headset to understand what it is.
The core idea I submitted for Oculus Launch Pad (henceforth, OLP) centered on PowerPoint for VR: the effectiveness of immersive experiences for communicating future businesses and products*. I will not be pursuing the creation of such a tool yet, for reasons I'll touch on later.
Born out of this ambition to create a simple way of putting a presentation together in VR, my OLP project is called "Project Futures": a series of interactive experiences, each centered on a core idea, business, or product.
The first experience I am creating centers on vertical, climate-controlled farms. For now, I've settled on the name The Future of Farming for Gear VR.
This week I shot 360˚ video, which I will loop as the background setting for the interactive grove containing vertical rows of shiso, chard, kale, basil, etc. Following Sarah Stumbo's awesome tutorial, I placed a shipping-container-shaped grove inside of this 360˚ video. I learned that the 360˚ video I shot was captured far too low to the ground, so it probably won't work and I'll need to reshoot.

On a separate note, one of my obstacles with this project is designing user interactions with the crops (i.e. what will happen when a crop is selected?). One answer is that a user may chop it, bite it, and so forth, but my ability to design and permute models is beginner level. Further work included locking in a couple of mentors for the project: one works heavily with Houdini and can advise on workflow, and the other is a talented 2D artist with a lot of helpful insight (he recently worked on a hackathon project for Microsoft HoloLens based on Yu-Gi-Oh!).
Coming back to the idea of a presentation-creation tool, I thought two things. Chiefly, its scope is too large for a good proof of concept within the time constraints of OLP. Secondarily, it's probably not a great fit for the categories currently hosted on the Oculus store, and the community is really looking for entertaining content right now. Therefore, I thought: well, if I don't make the tool, what would people use such a tool to create, to communicate, to share? Boiling things down, I realized a few tenets of virtual reality worth highlighting: 1) if the settings are configured accordingly (i.e. no mobile phone notifications), the user is cut off from the real world and therefore gives undivided attention; 2) immersion and presence can help us condense fact from the vapor of nuance, the nuance being all of the visual information you automatically gather from looking around that you wouldn't necessarily get from, say, a textbook.
Therefore, what I would make with such a tool is content that requires undivided attention and perhaps communicates dense information. My worldview is that most people find it fun to learn.
* To expound further on the drivers for such a tool: I want to claim that in any endeavor of learning about a subject, we first look for intrinsic motivation. We must first pique our interest, and learning ensues automatically. The common alternative is the PowerPoint presentation, which is linear in nature and allows no exploration other than forward and backward. In an environment, one can walk or teleport around freely.
For this week (week 2), I worked on gray-boxing the vertical garden scene with some really simple geometry. In VR, one of the most important effects is a correct relative spatial layout; other aspects, like sound, are not on my radar yet. I also made progress on interaction ideation. I didn't reshoot the 360˚ background yet, but I consider that a relatively low priority.
At this point, I'm asking myself how I can be ready with the core experience by the finish date... I can't hide my technical debt when it comes to creating geometry and 3D modeling. So I've set out a plan to make the first scene, the vertical garden. I've populated a development tracker with the known tasks and will be checking in with my mentors in the coming week.
This week I added Alex Ness to the team. He's a student at UConn with an excellent practice mindset when it comes to rendering scenes; his craft is creating scenes with lush greenery. I shared documentation with him and have three specific plants for him to stage (all known to thrive in vertical housing). In the engine, I'll integrate the models he provides into the scene. I'm particularly keen to share his toolset with all of you: Forester for Cinema 4D, which makes procedural plant growth a little more accessible. Let me know if you already use it or if the suggestion proves useful; I'm sure I'd enjoy hearing about it.
So I went ahead and built some personas of people who would actually find the thing we're creating useful: a VC, a restaurateur, an entrepreneur, and a doctor, each with a different ethos for supporting farming innovation. It's helpful to hold in mind a few of the people who might see something specifically valuable in an experience like this first one. The components of each persona are "Attitude Summary", "Persona Description", and "Experience with VR".
Let's outline an example of each, respectively:
- Attitude Summary: "Local Produce Everywhere"
- Persona Description: Son of a Wisconsin cherry and apple farmer, Matt's background is in technology and investing.
- Experience with VR: "While I don't have much experience with VR, I'm continually learning about new technology with the aim of communicating the future of farming. VR is a way to show would-be collaborators and partners what my vision is."
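If it's useful to anyone tracking personas for their own project, here's a minimal sketch of how these three components could be kept structured; the field names and the "Matt" example mirror the outline above, but this is just an illustrative way to organize them, not part of the app itself.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """One target viewer of the experience."""
    name: str
    attitude_summary: str
    description: str
    vr_experience: str

# Example persona drawn from the outline above.
matt = Persona(
    name="Matt",
    attitude_summary="Local Produce Everywhere",
    description="Son of a Wisconsin cherry and apple farmer; "
                "background in technology and investing.",
    vr_experience="Little hands-on VR time, but continually learning "
                  "new technology to communicate the future of farming.",
)
```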
Feedback came in from my mentor on the tasks. One suggestion of particular use: ask any would-be character designer for 2D turnarounds.
It's not common to find people with both a modeling background and a character design skillset. Characters might touch the Future of Farming project in the form of embodied plants (to be expanded on later) or a precursor scene with characters walking around the farm, viewed objectively.
In another realm, here are some points to consider from my mentor: who is the audience of your experience, what do you want to tell them, and what do you want them to do after trying your experience? (Audience / Message / Call to action)
So an example is: Audience: Tech investors
Message: "I'm building a new platform for conveying ideas/concepts"
Call to action: fund me, join my team, or sign up for my newsletter to stay in touch
As a general rule of thumb, I believe that everything in the brain is hierarchical. My hypothesis is that, in VR design, this can be really helpful for setting context. For example, at Virtually Live I once proposed that we use the amazing UX of Realities.io: show the user a globe as the highest level, which they can spin around to find a location to load into. Written abstractly, the hierarchy in this example is: Globe is a superset of Countries, Countries of Cities, and Cities of Places. I figured this would be perfect for an electric motorsport championship series that travels to famous cities each month. Alas, we went a different direction in the end, one that is probably much more expeditious than the globe... I mention this old project to explain that a further blog post will come from me about my first user test and the questions I asked regarding my OLP project.

The Future of Farming takes place largely in a metropolitan area, namely San Francisco. So I've decided that, to begin, I'll borrow from the hierarchical plan: I want to show the user an orthographic projection of San Francisco with a handful of locations highlighted as interactable. To do this I've set up WRLD in my project for the city landscape, which was really simple. I'll report back on how this goes.
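The Globe > Countries > Cities > Places hierarchy can be sketched as nested data. Here's a tiny engine-agnostic illustration; the location names are placeholders of my own, not the final set for the app.

```python
# A minimal sketch of the Globe > Country > City > Place hierarchy.
# The place names below are illustrative placeholders.
world = {
    "United States": {
        "San Francisco": ["Warehouse farm", "Studio apartment", "Rooftop greenhouse"],
    },
}

def places(country: str, city: str) -> list:
    """Return the selectable places for a given city, or an empty list."""
    return world.get(country, {}).get(city, [])
```

Each menu level in the experience would then just be a lookup one layer deeper into this structure.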
Upon selection of one of the highlighted locations with the Gear VR controller, a scene will load with a focal piece of farming equipment that has made its way into that type of place (e.g. warehouse, house, or apartment).
A quick aside: last week I had a tough travel and work schedule in New York. Reading back what I wrote, I found a pretty bare blog post, so I decided it was better not to share. Another hurdle was the unfortunate loss of the teammate I announced two weeks prior, simply because he prioritized projects with budgets more appealing to him. I dwelled on this for a while, as I admired his plant modeling work a lot. With the loss of that collaborator, and weighing a few other factors, I've decided to pursue an art style akin to that of Virtual Virtual Reality or Superhot: less geometry, all created in VR, mostly via Google Blocks and a workflow that pushes created environments to Unity. Here's a look at the artwork for a studio apartment in SF for the app, as viewed from above. It's a public bedroom that I'm remixing; you can see I've added a triangular floor space for a kitchen, which is likely where the windowsill variety of hydroponic crop equipment will go. Modeling one such piece is going to be really fun.
In the past weeks, I've dedicated myself to learning about gardening and farming practices via readings, podcasts, and talking to people in business ecosystems involving food product suppliers. I learned about growing shiitake mushrooms and broccoli sprouts in the home and got hands-on with both. I learned about the technological evolution behind rice cookers and about relevant policy for farmers on the west coast over the last dozen years. In the industry, there are a number of effective farming methods that I'm planning to draw on (indoor hydroponic and aeroponic) that I can see working in some capacity in the home, as well as settings I will highlight, such as a full-scale vertical indoor farm facility (https://techcrunch.com/2017/07/19/billionaires-make-it-rain-on-plenty-the-indoor-farming-startup/).
I've asked someone who works at Local Bushel for help from a design-consulting standpoint.
To expound on why Local Bushel is a helpful reference point: Local Bushel is a community of individuals dedicated to increasing our consumption of responsibly raised food. Their values align well with educating me (the creator) about the marketplace whose future I want to project. Those values are:
1. Fostering Community
2. Being Sustainable and Responsible
3. Providing High Quality, Fresh Ingredients
------ For interactions, I can start simple: show info cards and move between scenes based on the orientation of the user's head, using raycasts. I'll work in Gear VR Controller support eventually.
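The gaze idea boils down to "which card is the head's forward vector pointing at?" In Unity this would be a physics raycast from the camera, but here's an engine-agnostic sketch of the same logic, assuming a yaw/pitch convention (yaw around the vertical axis, pitch up/down) and cards given as unit direction vectors from the head; all names here are my own placeholders.

```python
import math

def forward_vector(yaw_deg, pitch_deg):
    """Convert head yaw/pitch (degrees) into a unit forward vector.
    Convention assumed here: yaw 0 looks down +Z, yaw 90 looks down +X."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))

def gaze_target(yaw_deg, pitch_deg, cards, max_angle_deg=10.0):
    """Return the name of the card nearest the gaze direction, or None
    if no card is within max_angle_deg. `cards` maps card names to unit
    direction vectors from the head position."""
    fx, fy, fz = forward_vector(yaw_deg, pitch_deg)
    best, best_cos = None, math.cos(math.radians(max_angle_deg))
    for name, (x, y, z) in cards.items():
        cos = fx * x + fy * y + fz * z  # dot product of two unit vectors
        if cos > best_cos:             # smaller angle than current best
            best, best_cos = name, cos
    return best
```

A scene switch or info card pop-up would then just key off the returned name each frame.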
This week I was in Montreal for work Wednesday through Monday, showing VR demos across six locations to kiddos and adults alike.
Pursuant to the Google Blocks artwork creation, I've started to put together locations for the experience. We have a call set up with Local Bushel for Wednesday next week; I'm hoping to do some user-story creation with them on the urban market for sustainable, local food and which business models work best.
I've had a great time working on the project using Google Blocks; it's allowing someone like me, with no 3D modeling experience, to get the ball rolling and learn. Once you've tried Oculus First Contact, it's pretty tough to imagine the Future of Farming scenes without that level of interaction with every object in the scene. To achieve that, though, I hit an obstacle in my workflow: getting individual pieces to be interactable.
I found several efficiencies after conferring with a more skilled artist about making discrete objects (e.g. CDs on a desk) out of a scene created in Google Blocks. For flow purposes, it's much better to create a scene all in one Google Blocks session. So, for example, if I have a TV remote sitting on a table, it's part of the overarching mesh that I export from Blocks. This more experienced artist showed me how to go into the living room scene in Maya, select the TV remote using either the edge or the face tool, and then follow this sequence of steps (mostly the same for both tools):
1. With the focal object selected, use Mesh > Extract.
2. Go into object mode and select the separated geometry.
3. Use Mesh > Combine.
4. Select the newly created mesh.
5. Apply Modify > Center Pivot.
6. Rename.
With WRLD, I've found that certain longitudes and latitudes don't offer as much in the way of buildings as others; (34.236137, -77.941537) are the coordinates I've tried for this shot. Next week, I've got to narrow in on where the Unity world space will be set and carve out space for my rooms. @Micah brought up a relevant question pertaining to the beginning of our experiences: should we have a static load screen, or should there be a kind of load-in area where the experience starts? For now, I'm trying to make the menu scene feel like the user is looking at Earth from a satellite and can then select a location.
Today you can check out a creation-process video of modeling cabbage sprouts in an A-frame hydroponic water system, as well as a rosemary plant. I did need Maya to construct some assets, because in Google Blocks and Tilt Brush one can't always manipulate vertices with the control you'd have with a laptop and mouse. I learned about the move and duplicate commands, and produced a few other plants there along with a small countertop hydroponic system (it looks like this Blocks rendition but a bit nicer, although I was able to make this one much, much quicker: https://vr.google.com/u/1/objects/7em9BhqPzoX).
Over the past week, I got a bot up and running at work that can consolidate sales and marketing data from a couple of different VR store platforms.
For OLP, I'm very much still learning as I go on the Google Blocks front, so I'm importing models that are already done and trying to reverse-engineer how they were made. With some of the leafy greens, which have lots of tiny parabolic details, I struggled to keep poly counts low. I figured out that to design some plants it was better to work systematically: start by figuring out the basic components of the plant (e.g. stem, soil, petals) and model one of each. I can then make copies of each original and apply superficial edits that add a natural look (e.g. permute some vertices, re-color, mirror, transform the position).
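The "copy the base part, then permute vertices" step can be sketched in a few lines. This is a toy illustration, not my actual Blocks/Maya workflow: it assumes a part is just a list of (x, y, z) vertices and nudges each one slightly so duplicated leaves don't look identical.

```python
import random

def jittered_copy(vertices, amount=0.02, seed=None):
    """Copy a base part's vertices, nudging each coordinate by up to
    +/- `amount` so duplicated leaves/stems don't look identical.
    `vertices` is a list of (x, y, z) tuples."""
    rng = random.Random(seed)  # seeded for reproducible variants
    return [
        (x + rng.uniform(-amount, amount),
         y + rng.uniform(-amount, amount),
         z + rng.uniform(-amount, amount))
        for (x, y, z) in vertices
    ]

# One hand-modeled base leaf (placeholder coordinates), five natural-looking copies.
base_leaf = [(0.0, 0.0, 0.0), (0.1, 0.2, 0.0), (-0.1, 0.2, 0.0)]
variants = [jittered_copy(base_leaf, seed=i) for i in range(5)]
```

The same idea extends to re-coloring or mirroring: keep one canonical part, and derive every visible copy from it with small, cheap edits.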
This week I worked on interaction with the Gear VR Controller. My frame rate looks fine so far, partially due to the geometry being light. As I step into next week, I'm hoping to upload a build for Gabor to review. I'm working from Orange County this week, and I felt impulses to change my project a couple of times because of another cool idea. I quelled the thought, but as soon as I'm done with this project, maybe I'll put that idea in play for Gear VR. The vision of the project has certainly changed a bit: it now highlights the nutrition profiles of the foods on our kitchen tables a bit more.
At the culmination of this week, I have done enough work to submit a functional build, but it's lacking in interaction points. It looks like I'll hold off on store submission (though I've read folks offering their wisdom to get one submitted) and tidy the environment and the Gear VR Controller features this coming week.
As valuable a learning experience as this has been, I probably squandered some of the middle weeks on engineering a project timeline and trying to recruit team members.
Right now, I'd like to offer this concept art I put together for a wall growing system (those are mushrooms of a kind in the bottom-right hexagon).
I'm retouching my project proposal, budget, and app as the last set of items for the program. Lastly, I'd like to point out that I don't know what I would've been making if I hadn't been in OLP, but I consider the program to have been a great catalyst for me. Thanks, all.