Latest screenshot of our DAW plug-in

petergiokaris
Protege
Just wanted to share the latest pic of our Oculus Spatializer plug-in, running in Ableton Live. This will be available in our next SDK release.
Peter Giokaris | Senior Software Engineer
9 REPLIES

cybereality
Grand Champion
Looks awesome!
AMD Ryzen 7 1800X | MSI X370 Titanium | G.Skill 16GB DDR4 3200 | EVGA SuperNOVA 1000 | Corsair Hydro H110i Gigabyte RX Vega 64 x2 | Samsung 960 Evo M.2 500GB | Seagate FireCuda SSHD 2TB | Phanteks ENTHOO EVOLV

SAOTA
Honored Guest
Fellow Ableton user here.

I am so excited for this new wave of audio plugins about to hit this industry.

VSTs and plugins galore.
Bring it on. Looking great.

Anonymous
Not applicable
Looks cool.

PlopZero
Honored Guest
If it's not too late to ask for some early, well-documented features: the plugin should be DAW-independent, with developer-controlled preferences that provide a gradation pan mode supporting at least 96 (or unlimited) simultaneously audible audio sources. For comparison, Unity 5 gives you 1 Audio Listener and 32 Audio Sources audible at once. Sources should move smoothly, so transitions between different pan positions sound more natural, with GUI/visually apparent transitions that report real-time data, artist-driven preferences, and a minimum of 6 gradation curves to choose from. And a sleek, cool look!
Elevation and azimuth data should be equally represented as part of the Oculus Spatializer, along with an ambiance engine that lets the developer graphically control the distance, position, and size of each part within a 360-degree sound field, allowing for an earlier development-to-market transition.
As always, with the end user in mind,
Thank you
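A minimal sketch of what those "gradation curves" might look like in practice, assuming they mean easing functions applied to pan-position transitions (all names here are illustrative, not from any Oculus API):

```python
import numpy as np

# Six illustrative easing ("gradation") curves, each mapping progress
# t in [0, 1] to an eased value in [0, 1]:
CURVES = {
    "linear":      lambda t: t,
    "ease_in":     lambda t: t ** 2,
    "ease_out":    lambda t: 1 - (1 - t) ** 2,
    "smoothstep":  lambda t: t * t * (3 - 2 * t),
    "ease_in_out": lambda t: np.where(t < 0.5, 2 * t * t, 1 - 2 * (1 - t) ** 2),
    "sine":        lambda t: (1 - np.cos(np.pi * t)) / 2,
}

def pan_transition(start_deg, end_deg, n_samples, curve="smoothstep"):
    """Interpolate a pan position along a chosen curve, so the move
    sounds smooth rather than stepped."""
    t = np.linspace(0.0, 1.0, n_samples)
    return start_deg + (end_deg - start_deg) * CURVES[curve](t)
```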

xfghsfgh
Honored Guest
Looks awesome! Some questions.

I assume this would go on the master channel? Could you wear the Rift and test in real time?

DaveDriggers
Adventurer
Hi xfghsfgh,

The plugin should be placed on an audio or MIDI track and used only when a single monophonic voice is playing on the channel. You could place it on the Master, but only if you have just one track in your project playing a mono sound - so that's probably not the way you want to go. If you want to spatialize multiple sounds simultaneously, you'd need separate tracks, each with their own instance of the plugin.
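A minimal sketch of that routing, with a hypothetical spatialize() standing in for one plugin instance (crude constant-power panning as a stand-in for the real HRTF processing):

```python
import numpy as np

def spatialize(mono, azimuth_deg):
    """Stand-in for one plugin instance. The real plugin applies HRTF
    filtering; this uses a simple constant-power pan for illustration."""
    theta = np.deg2rad(azimuth_deg)               # -90 = left, +90 = right
    gain_l = np.cos((theta + np.pi / 2) / 2)
    gain_r = np.sin((theta + np.pi / 2) / 2)
    return np.stack([mono * gain_l, mono * gain_r])  # (2, n) stereo

sr = 48000
t = np.linspace(0, 1.0, sr, endpoint=False)
voice = np.sin(2 * np.pi * 220 * t)       # one mono track
footsteps = np.sin(2 * np.pi * 80 * t)    # another mono track

# One spatializer instance per mono track, each with its own position,
# summed on the master - rather than one instance on the already-mixed
# master, which would collapse both sources to a single position.
master = spatialize(voice, -45) + spatialize(footsteps, 60)
```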

To test in real time, you would need an Oculus-enabled 3D application. If you haven't checked out our Unity integration yet, that might be a good place for you to start. It would be easy to swap out the sound in our demo scene with your own, so you could hear your sound in a 3D, interactive environment.

Cheers!
dave driggers | audio programmer | Oculus VR

xfghsfgh
Honored Guest
OK, thanks, but excuse the n00b questions... why would I use this in Unity when audio assets can be applied to objects within the environment, and those sounds then processed to be spatialised?

Could this be used for video applications - to move the voice of an actor moving across a scene for example? This is the thing I really want to crack!

DaveDriggers
Adventurer
Sorry if I don't quite understand your question - if you're just using Unity out of the box (without our plugin), you won't get real 3D spatialization, you'll only get native L/R panning. Most game engines and middleware (incorrectly) refer to L/R panning as "3D", so don't let that confuse you. 😉
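To illustrate the distinction, here is a sketch only (not the plugin's actual algorithm), assuming hrir_left/hrir_right are per-ear impulse responses for one source direction, e.g. loaded from a public HRTF dataset:

```python
import numpy as np
from scipy.signal import fftconvolve

def pan_lr(mono, pan):
    """What most engines call '3D': only level differences between
    left and right. pan ranges over [-1, 1]."""
    theta = (pan + 1.0) * np.pi / 4.0        # constant-power pan law
    return np.stack([mono * np.cos(theta), mono * np.sin(theta)])

def binaural(mono, hrir_left, hrir_right):
    """True spatialization: per-ear filtering adds the interaural timing
    and spectral cues your brain uses to localize - which is also why it
    needs headphones to come through correctly."""
    return np.stack([fftconvolve(mono, hrir_left),
                     fftconvolve(mono, hrir_right)])
```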

Yes, video tracking could very well be a use case for the VST plugin. You could use it to manually move the position of a particular sound through a scene to sync up with the video. Keep in mind though, the HRTF algorithms used by the plugin are designed for the listener to be wearing headphones, and you won't get correct-sounding spatialization otherwise.
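A minimal sketch of that manual-sync workflow (hypothetical names, not the plugin's automation API): keyframe the source azimuth at known video timestamps, then interpolate a per-sample position track for the spatializer to follow.

```python
import numpy as np

SR = 48000  # audio sample rate

# (time_seconds, azimuth_degrees) keyframes read off the video,
# e.g. the actor crossing from hard left to hard right over 5 seconds:
keyframes = [(0.0, -90.0), (2.5, 0.0), (5.0, 90.0)]

times, azimuths = zip(*keyframes)
n = int(times[-1] * SR)
t = np.arange(n) / SR
azimuth_track = np.interp(t, times, azimuths)  # one azimuth per sample

# Each sample's azimuth would drive the spatializer's position
# parameter, so the voice tracks the actor across the scene.
```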
dave driggers | audio programmer | Oculus VR

xfghsfgh
Honored Guest
Great! This is good for me. I'm an Ableton user and have been playing with the plugin; it's really great, so well done!

I'm imagining the process of sound designing for video VR in Ableton. I guess I would have to use an unwrapped, stitched video in the video channel, but it wouldn't be ideal.

Imagine a scene in which there is a 5m square room with the 360 camera placed in the centre. An actor enters the room through a door and walks around the perimeter of the room, talking all the time. I recorded the actor with a lavalier mic when filming, so I have the mono audio asset of their voice.

The two things I can't get my head around in this situation are:

1) How would I view the video in such a way that I know where the person is in the room? Obviously DAWs don't support 360 video (yet!)

2) What would I export and how would I prepare the file(s) to be used on Gear VR (or DK2)?

Advice appreciated.