
Oculus Medium Suggestions

slipgatecentral
Explorer
TL;DR: Medium is a very nice toy, but it's hard to be productive in it. 

After playing with it for a few hours, I have some ideas that might improve this software for professional artists without ruining accessibility for beginners.

- Almost every tool needs an alt modifier that lets the user quickly switch to an alternate mode. Example - with the clay tool, to cut holes you need to open the menu, select "Erase", and close the menu. This needs to be faster, preferably by holding a modifier button on the left controller (like the one we got for the alternate tool). Every 3d sculpting software has this - the primary function can be switched instantly to its opposite.

- We must be able to select the same tool for both primary and alternate, but they must not share settings. This would be extremely useful in many cases - for example, you could paint with 2 different brushes in different colors.

- Desperately need a move/pull tool. Smudge doesn't do much. If there's no room on the palette, I suggest removing the useless swirl.

- Need an option to customize brushes and my palette. It would be great to have my own selection of saved tools with different settings. Time saver.

- Need a bigger color swatch accessible from the menu. Right now it only holds 3 colors.

- The eyedropper tool takes too long to select. It also needs a button modifier for quick operation, like the Alt key in Photoshop.


I am a professional 3d artist and I'm looking forward to working in VR. These are just first steps, but I see tons of potential in VR sculpting. 

RaptureReaper
Explorer
Once you guys bring in masks and alphas, you'll pave the way to reaching zbrush territory 🙂

hughJ
Protege

Fluorescent light tubes (light sources that are long ovals), useful for things like starship running lights, neon, etc.

Better yet, convert a stamp (or layer) to a light source.  [Not sure if this is possible, but it would essentially make the object invisible and ultra-emissive in the same way the point light is, drawing color and/or dispersal from the stamp or layer.]  Would be cool for all sorts of things, city "window" illumination, etc.

Middle ground: A "draw-able" intensity controllable light source, maybe built off the capsule or sphere. 
I think arbitrarily shaped area light sources tend to make things pretty complicated -- AFAIK it boils down to having to calculate intersection tests between the entire surface of the light source and everything in the scene in order to produce an appropriate penumbra.  Any renderer that supports this these days is probably using a variant of path tracing.  Directional lights, spotlights, and point light sources are simplified approximations (hacks), which is why they can be used in real-time rendering.
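The penumbra cost described above can be illustrated with a toy 2D sketch (my own illustration with made-up scene values, not anything from Medium's renderer): sample many points across an area light and count how many shadow rays from the receiver are blocked by an occluder. A point light is the degenerate single-sample case, which can only answer blocked-or-not.

```python
def segments_cross(p1, p2, p3, p4):
    # Strict 2D segment-segment intersection test (collinear and
    # exactly-touching edge cases are ignored for brevity).
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

# Toy scene: a linear area light along y=2 and a single occluder along y=1.
LIGHT = ((-1.0, 2.0), (1.0, 2.0))
OCCLUDER = ((-0.5, 1.0), (0.5, 1.0))

def visibility(receiver, samples=16):
    # Fraction of the area light visible from `receiver`: one shadow
    # ray per stratified sample point on the light.
    # 0 = umbra, 1 = fully lit, anything in between = penumbra.
    (ax, ay), (bx, by) = LIGHT
    unblocked = 0
    for i in range(samples):
        t = (i + 0.5) / samples  # midpoint of the i-th strata on the light
        s = (ax + t * (bx - ax), ay + t * (by - ay))
        if not segments_cross(receiver, s, *OCCLUDER):
            unblocked += 1
    return unblocked / samples
```

With `samples=1` this degenerates to a hard 0-or-1 shadow, i.e. the cheap point-light approximation; the per-sample intersection testing against the scene is exactly the cost that makes arbitrary area lights expensive.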

martinegail
Honored Guest
Nice to know about Medium's functionality. We will learn something new from it.

Anonymous
Not applicable

hughJ said:

I think arbitrarily shaped area light sources tend to make things pretty complicated -- AFAIK it boils down to having to calculate intersection tests between the entire surface of the light source and everything in the scene in order to produce an appropriate penumbra.  Any renderer that supports this these days is probably using a variant of path tracing.  Directional lights, spotlights, and point light sources are simplified approximations (hacks), which is why they can be used in real-time rendering.


Interesting.  Are there other lighting techniques besides those that have potential to be used in real-time?

hughJ
Protege
Nothing that I can think of.  For the most part real-time boils down to point-based approximations of light emitters, or variations on that.

In the context of Medium I would think the feasible route for them would be to have a fancier "photo" rendering mode (sort of akin to what you see in Forza/GranTurismo and others) where you sacrifice some interactivity and speed in order to give it more time to generate beauty shots.  Granted, in VR your head/POV is always in motion, so you can't really do a long multi-pass/accumulation like you can with a fixed camera, but if the objects in the scene are fixed for a period of time it would at least allow you the time to bake certain things (shadows and diffuse/lambert shading). 

...Or you could simply limit that beauty photo mode to the hand-held camera, which would allow you to make the camera fully fixed and give it however many seconds to generate an image.  There's a bunch of different ways you could do this really, everything from merely compositing a bunch of frames from their existing renderer (similar to what people used to do here with Photoshop to get multiple light source renders in Medium), to utilizing an altogether different pseudo-realtime renderer.
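The fixed-camera accumulation idea can be sketched in a few lines (purely illustrative; `noisy_render` is a hypothetical stand-in for one fast, noisy frame, not an actual Medium or Oculus API):

```python
import random

def noisy_render(true_value, rng, noise=0.2):
    # Hypothetical stand-in for one fast frame from a stochastic
    # real-time renderer (jittered shadows, AA samples, etc.).
    return true_value + rng.uniform(-noise, noise)

def accumulate(true_value, frames, rng):
    # Progressive accumulation with a fixed camera: keep a running
    # average of frames; noise falls off roughly as 1/sqrt(frames).
    acc = 0.0
    for n in range(1, frames + 1):
        acc += (noisy_render(true_value, rng) - acc) / n  # incremental mean
    return acc
```

A single frame can be off by the full noise amplitude, while a few hundred averaged frames land close to the converged value -- which is why locking the hand-held camera and giving it "however many seconds" would pay off.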

I think I've said this before somewhere, but I feel like using Medium can be boiled down to 3 distinct steps:  sculpting -> painting -> rendering.  Each of those things has unique data+algorithmic needs that probably ought to be a distinct mode.  Sculpting benefits a ton from their voxel engine, whereas painting is way more suited to polygonal meshes with proper texture mapping, and rendering ought to be geared more for fidelity at the cost of interactivity.  Personally I'd rather sacrifice some of the ease of seamlessly jumping back and forth between sculpting, painting and rendering if it meant having much more useful implementations of each step, and avoiding having to spend more time in Zbrush/Maya/Blender than you do in Medium in order to get very polished results. 

It's a bit of a bummer that 9 times out of 10, when you see a very good-looking Medium sculpt online, much of what people are impressed by was neither done in Medium, nor even possible to do in Medium.  Moreover, I worry this has a knock-on effect of discouraging new users once they realize it, because the implication is that you need 3rd party professional non-VR tools to get results that don't look like colored mashed potato sculptures.

P3nT4gR4m
Consultant
@hughJ I've felt the same way about paint for quite some time. Once they have the UV unwrapping sorted out, using the mesh for paint operations would definitely be the way forward even if, like you said, you have to lock it down from editing, it'd be useful as hell to be able to full-suite the diffuse map at least. Maybe even throw some normal painting in there.

Anonymous
Not applicable
@hughJ Seems like a still-frame long render might be possible as a mode.  Alternately, maybe do something like Tilt Brush does and save the key data to perform an "out of VR" high resolution render.  I suspect that we'll see polygon meshes at some point in the future, although it's hard to say when that will happen.  Seems like they are building a framework with that approach in mind.

Hopefully an integrated VR pipeline emerges, where each area can be specialized to do what it does best and the data can be easily transferred between stages.  I suppose eventually we'll get an all-in-one package of some kind, but that's going to require more power.

hughJ
Protege

@jessicazeta
Not that you folks owe us this, but I'd be interested in hearing some insight on what your guys' mid-term and long-term internal development roadmaps are like for Medium.  Not necessarily as comprehensive as a Trello board, or Carmack-style .plan log or anything like that, but maybe just a 'state of the union' style overview a couple times a year?  Obviously there's a spectrum of who your users are, how they utilize Medium, and that dictates what sort of feature additions and fixes they desire.  Similarly, I'd imagine that within your own studio you all have your own thoughts that tug in different directions.  

What is Medium to you guys?  Is it popular (active users) relative to other Oculus applications?  Has it been growing?  Do you need more evangelism from your users?  How big is the team working on it, and how committed is Facebook to a PCVR-based art tool?  What expectations should your users have for its future?  Is Medium more likely to streamline features in an attempt to become cross-platform with low-powered mobile devices, or is there desire to continually expand features and bridge the gap between itself and professional CAD tools?  

Presumably at some point Pixologic, Blender, and/or Autodesk are going to integrate their own VR support into their portfolio of tools, so I'm curious what that prospect represents to Medium's development.  Does that prompt you guys to consider things like adding an API/SDK for 3rd party plugins, or perhaps even spinning yourself off to become a plugin for other CAD tools?  

Consumer VR seems to be at a cross-roads right now as it tries to find a business model that satisfies the desire for both growth and sustainable profit, and Medium (imo) seems to exist in a weird position between the direction of affordable mainstream mobile VR (Quest), and the world of enterprise/professional and enthusiast/hobbyist digital artists that spend thousands on workstations, cintiqs, ipad pros, and zbrush licenses.  I guess there's an untapped 3rd market segment in there if a RiftS/Rift2 were to achieve next-gen XBox compatibility -- Medium could be a big deal for a mainstream platform like that.

Anonymous
Not applicable
A direct link to Autodesk Revit, a major architectural modelling application.
https://www.autodesk.ca/en/products/revit/overview

If we could link and update Revit models in Medium, sculpt, and then send simplified versions of these models back to Revit, it would create a revolutionary platform for architectural design. Right now, there are no spatially intuitive ways to model architectural concepts in VR, and this would be a fleshed-out solution for the conceptual phase.

It is possible to do this at the moment (bring a Revit model into VR, design in Medium, and bring it back to Revit), but the process is not automated and requires semi-specialized knowledge. Alignment of the geometry within Revit is also not straightforward.

A direct link to Revit would be extremely beneficial to the architectural design industry.

Octops
Explorer
I recently went back into Medium after not touching it for a long time, which gave me a fresh set of eyes. Here are a few things I ran into.

-Transparent material. I used a reference image, and only having an opaque material made it hard to trace.
-Scrolling with the thumb stick in the file/stamp browsers. It makes sense and is just intuitive.
-The cut tool has no steady stroke, which leads me to the next point.
-The brush constraints should be global switches, not per brush. Perhaps the constraints and snaps could be in the same location, as they are closely related.
-Switch layer by pointing at the layer piece and clicking a button, not having to go into the menu.
-Random color per layer visualization toggle.
-The surface constraint is wonky. It should optionally follow the normal of the surface, and a brush depth setting is needed.

And some tools and features I'd like to see:

-Primitives, as in parametric shapes. Having 50,000 stamps doesn't make sense; having a dozen very flexible primitives does. When you press left/right on the right thumb stick to edit a tool, handles could show up on primitives, allowing you to adjust them with your left hand.
For non-primitive stamps this could be used for non-uniform scaling.

-Another line tool that works "point to point", i.e. click A is registered, and when you place click B, a line is formed from A to B. Between clicks you could change the size and even the stamp for very controlled tapers and complex profiles.

-Multicopy/Array. Just a sample rate setting for the brush would be a quick and dirty way to do this: a low sample rate would result in copies along your stroke rather than a continuous line.
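A sample-rate setting like this boils down to resampling the stroke at a fixed arc-length spacing. A minimal sketch of that resampling (`stamp_positions` is a hypothetical helper, not Medium's actual brush engine):

```python
import math

def stamp_positions(stroke, spacing):
    # Walk a polyline stroke (list of (x, y) points) and emit one
    # stamp every `spacing` units of arc length.  Spacing at or below
    # the brush size reads as a continuous line; a larger spacing
    # gives discrete copies -- the "multicopy/array" behaviour.
    out = [stroke[0]]
    acc = 0.0  # arc length walked since the last emitted stamp
    for (x0, y0), (x1, y1) in zip(stroke, stroke[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if seg == 0.0:
            continue
        pos = 0.0  # distance already consumed on this segment
        while acc + (seg - pos) >= spacing:
            pos += spacing - acc
            t = pos / seg
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            acc = 0.0
        acc += seg - pos
    return out
```

For example, an L-shaped stroke from (0,0) to (4,0) to (4,4) with spacing 2 yields stamps at (0,0), (2,0), (4,0), (4,2), and (4,4); the same routine with a tiny spacing reproduces today's continuous stroke.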

-Some gesture based controls. Have you experimented with something like the hotbox in Maya or similar systems?
https://blenderartists.org/uploads/default/original/4X/f/5/5/f552f610c98b5d3e85a4669ad8ca5d3c33347a7...
Maybe you could delete a model by grabbing it and throwing it away, or shaking it really fast, for example. Use the medium at your disposal. Just having floating screens for your controls seems a bit conservative.

-A lattice tool
-Of course masking and alphas.