Meta Voice SDK
I'm working on a VR application that uses the XR Interaction Toolkit for interactions. In one scene, a button click reveals text and plays AI-generated voice using the Meta Voice SDK's TTS (Wit.ai). Everything works perfectly in Play mode: I can hear the text-to-speech audio without any issues. However, after I build the application and run it on the headset, the audio does not play at all. I need help understanding why this is happening and how I can fix it.
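A hedged suggestion for the editor-vs-build symptom above: Wit.ai TTS synthesizes speech through web requests at runtime, so a build without network permission can stay silent even though Play mode (which uses the editor machine's network access) works fine. Assuming that is the cause here, the fix is to set Internet Access to "Require" in Unity's Android Player Settings, which adds the following to the generated AndroidManifest.xml:

```xml
<!-- Needed for runtime Wit.ai TTS web requests; without it a Quest build
     can fail to play any synthesized audio while the editor works fine. -->
<uses-permission android:name="android.permission.INTERNET" />
```

Checking `adb logcat` for failed HTTP requests while the app runs on the headset is a quick way to confirm or rule this out.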
Meta XR Audio Plugin + FMOD + Unity

I'm trying to get the Meta XR Audio plugin working, following the guide at https://developers.meta.com/horizon/documentation/unity/meta-xr-audio-sdk-fmod-req-setup/. I've been able to get it working in FMOD Studio itself, but I'm having issues with the Unity setup. The Meta docs state: "For any Unity Project you hope to integrate the Meta plugin into, you must also copy the Meta libraries into the Unity project." So I've copied files such as `libMetaXRAudioFMOD.so`, `MetaXRAudioFMOD.dll`, and `MetaXRAudioFMOD.bundle` from the Meta package to the respective folders in my Unity project (e.g., Assets/Plugins/FMOD/platforms/mac/lib).

It isn't working on my Mac, though, because FMOD is not able to read the .bundle file. Specifically, it reports ERR_FILE_NOTFOUND when it tries to load the file, although I've verified that it's at the right location. I posted about it on the FMOD forums, and their support said that the `.bundle` file is not code signed and might need to be in order to load. I tried the Steam Audio library, noticed that its .bundle file is code signed, and it loads fine. Perhaps this is the issue with the Meta package, or perhaps it's something else; either way, FMOD support suggested that the problem lies with the Meta package. I would appreciate it if someone from Meta could take a look at this, and I'd like to know whether anyone else has successfully gotten Meta XR Audio SDK + FMOD + Unity working.
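For anyone wanting to test the code-signing theory locally before an official fix lands: macOS can ad-hoc sign a bundle in place with `codesign`. This is a hedged workaround sketch, not a Meta-endorsed fix, and the path below is just the example location from the post:

```shell
# Ad-hoc sign the unsigned bundle in place ("-" selects the ad-hoc identity)
codesign --force --deep --sign - "Assets/Plugins/FMOD/platforms/mac/lib/MetaXRAudioFMOD.bundle"

# Confirm the bundle now carries a valid signature
codesign --verify --verbose "Assets/Plugins/FMOD/platforms/mac/lib/MetaXRAudioFMOD.bundle"
```

Note that an ad-hoc signature only satisfies the local machine; a distributed build would still need a real signing identity from the package author.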
Universal HRTF

Dear Meta dev team,

Could you please consider releasing an ambisonics-to-binaural VST/AAX plugin with the Universal HRTF used in the Meta Audio SDK, so that we sound designers can monitor in our DAWs through the same HRTF as the SDK? Alternatively, could you please make your Universal HRTF available for download in the SOFA format, so that we could load it in our favorite binaural decoders? The Meta Audio SDK is great overall, but it's really frustrating to sound design in a DAW with a completely different HRTF than the one the SDK uses for rendering.

Thanks a lot for considering this.

Best,
Come
Low-latency Mac audio into Unity on Meta Quest 3

Hi everyone,

I'm trying to get low-latency audio from my Mac into a Unity app running on a Meta Quest 3. The goal is to stream either Logic Pro output or general Mac system audio, wirelessly or via USB-C, synced to VR content.

I've tried:
- UDP streaming (too much latency, 1+ second jitter)
- Unity Native Audio Plugins (too outdated / build issues)
- Oboe (C++ plugin, build fails on Mac/Unity)

I'm looking for a reliable way to receive Mac audio in Unity on Meta Quest 3 with minimal latency. Has anyone successfully done this, even with just system audio rather than Logic Pro specifically? Any advice, plugins, or setups that actually work?

Thanks!
Using Phonemes in TTS with Meta Voice SDK: Wit.ai, Custom Models, or ONNX in Unity?

Hi all,

I'm working on a Unity project where speech technology is central, and I'm facing a hurdle with Meta's Voice SDK. My primary need is to use phonemes directly for text-to-speech (TTS), but I've found that Wit.ai does not support direct IPA (International Phonetic Alphabet) input or offer phoneme-level control for TTS.

Questions and discussion points:
- Is there any way to use Wit.ai for phoneme- or IPA-based TTS, or is this currently unsupported?
- Are there recommended approaches to integrating speech models based on self-supervised learning (like wav2vec 2.0, HuBERT, or WavLM) with Unity, either alongside or instead of Wit.ai?
- For complete control over TTS (especially phoneme-level synthesis), would it make sense to bypass Wit.ai entirely and run a model (converted to ONNX) for inference directly inside Unity?
- Have others run into similar limitations, and if so, what workflows or toolchains have worked best for you?

I'd appreciate any advice or examples for integrating more advanced or flexible TTS pipelines into Unity, especially ones compatible with IPA/phoneme input or built on state-of-the-art self-supervised models.

Thanks!
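Since Wit.ai accepts only plain text, a phoneme-driven pipeline would have to do its own grapheme-to-phoneme (G2P) conversion before feeding a phoneme-aware model (e.g., one exported to ONNX). As a hedged illustration of that pre-processing stage only, here is a minimal dictionary-lookup G2P in Python; the two lexicon entries are made-up examples, and a real pipeline would load a full pronunciation lexicon plus a trained fallback model for out-of-vocabulary words:

```python
# Minimal dictionary-based grapheme-to-phoneme (G2P) lookup.
# The tiny IPA lexicon below is illustrative only; a production pipeline
# would use a complete pronunciation dictionary and a learned G2P model
# as a fallback for words missing from it.
LEXICON = {
    "hello": ["h", "ə", "ˈl", "oʊ"],
    "world": ["ˈw", "ɜː", "r", "l", "d"],
}

def text_to_phonemes(text, oov_marker="<unk>"):
    """Lower-case, split on whitespace, strip punctuation, and look each word up."""
    phonemes = []
    for word in text.lower().split():
        word = word.strip(".,!?")  # crude punctuation stripping for the sketch
        phonemes.extend(LEXICON.get(word, [oov_marker]))
    return phonemes

if __name__ == "__main__":
    print(text_to_phonemes("Hello, world!"))
```

The resulting phoneme sequence would then be mapped to the input IDs of whatever phoneme-conditioned synthesis model the ONNX runtime is hosting; that mapping is model-specific and not shown here.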
Meta Audio SDK 76.0 Unreal 5.5 - Impossible to bake geometry

I tried the new version 76.0 because baking failed with the previous one, but I have exactly the same issue: the plugin doesn't bake the geometry, and I get no data when I click Bake. I've tried with a Static Mesh Actor, with a Blueprint containing a Static Mesh Component, with the mesh set as a child of the XR geometry, and with different meshes, and nothing changes. Acoustic Ray Tracing is enabled in the project settings. Maybe I'm missing something, but I don't understand what. If someone has any idea, thanks for helping!