11-25-2024 12:16 AM
Hi,
Does the new Audio2Expression feature only work with audio from the microphone? I couldn't find a way to feed in any other kind of audio stream. I'm not sure what the use case for mic-only input would be, beyond the existing demo. The more obvious use case would be to use it like OVRLipSync, syncing any character's / NPC's lip movement with arbitrary audio, but that doesn't seem possible right now.
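For comparison, OVRLipSync covers the AudioSource case by tapping the source's output through Unity's OnAudioFilterRead callback. A minimal sketch of that pattern, assuming a hypothetical ForwardSamples consumer (A2E currently exposes no public equivalent, which is exactly the gap I'm asking about):

```csharp
using UnityEngine;

// Sketch of the OVRLipSync-style pattern: tap the PCM stream of whatever
// AudioSource sits on the same GameObject. ForwardSamples is a placeholder;
// nothing in the current A2E integration accepts these frames.
[RequireComponent(typeof(AudioSource))]
public class AudioSourceTap : MonoBehaviour
{
    // Hypothetical callback; wire this to whatever should consume PCM frames.
    public System.Action<float[], int> ForwardSamples;

    // Unity calls this on the audio thread after the AudioSource renders.
    private void OnAudioFilterRead(float[] data, int channels)
    {
        ForwardSamples?.Invoke(data, channels);
        // Leave 'data' untouched so the clip still plays back audibly.
    }
}
```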
12-18-2024 03:32 PM
*bump* I cannot be the only one wanting to use this feature, so I'd like to know whether this is just a showcase for now or whether it's usable at all without direct mic input. What would be the use case for reacting to mic input anyway? Seeing yourself in a mirror as a speaking character? I can't imagine any reasonable use case with the current integration.
2 weeks ago
I am also interested in how to use the Audio To Expression feature for NPC avatars. I just checked out the newest version (v72), but I also could not find a way to use an audio clip as input for A2E instead of the headset microphone. At the moment the only supported use case seems to be your own avatar in multiplayer applications.
It would be helpful to know if and when Audio To Expression will support other inputs, such as an audio source or clip. Is that something that is in the works? Has anyone else tried to do this?
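In case it helps anyone experimenting: if Meta ever exposes a PCM entry point, pulling the samples out of an AudioClip would be the trivial part. A sketch under that assumption; ProcessA2EFrame is entirely made up, the rest is stock Unity:

```csharp
using UnityEngine;

// Pull raw PCM from an AudioClip in fixed-size frames. ProcessA2EFrame is
// a made-up stand-in for an A2E input API that does not exist today.
public class ClipFeeder : MonoBehaviour
{
    public AudioClip clip;          // must use "Decompress On Load" for GetData
    private const int FrameSize = 1024; // samples per channel per frame

    private void Start()
    {
        int channels = clip.channels;
        var all = new float[clip.samples * channels];
        clip.GetData(all, 0); // interleaved PCM in the range -1..1

        var frame = new float[FrameSize * channels];
        for (int offset = 0; offset + frame.Length <= all.Length; offset += frame.Length)
        {
            System.Array.Copy(all, offset, frame, 0, frame.Length);
            ProcessA2EFrame(frame, channels, clip.frequency); // hypothetical
        }
    }

    private void ProcessA2EFrame(float[] pcm, int channels, int sampleRate)
    {
        // Placeholder: there is nothing to call in the current SDK.
    }
}
```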
2 weeks ago
Where did you find the A2E files? Is there a demo scene for it?
2 weeks ago
They call it Face Tracking in the SDK. You can use it without facial sensors as well (i.e. on Quest 2/3); then it is just based on the audio stream from the mic. For that you need the Meta Movement SDK; the Face Tracking samples that you can download for that package let you try out A2E with the mirrored avatars.
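Once a sample scene is running, the A2E output lands on the OVRFaceExpressions component as per-expression weights, the same place sensor-based face tracking writes to, so you can read it like this (the blendshape indices and the exact enum entries I picked are assumptions, check them against your own mesh and SDK version):

```csharp
using UnityEngine;

// Reads a couple of A2E-driven weights from OVRFaceExpressions each frame
// and copies them onto a SkinnedMeshRenderer. Blendshape indices 0 and 1
// are placeholders; map them to your own mesh.
public class FaceWeightReader : MonoBehaviour
{
    public OVRFaceExpressions faceExpressions; // from the Meta XR Core SDK
    public SkinnedMeshRenderer mesh;

    private void Update()
    {
        if (faceExpressions == null || !faceExpressions.ValidExpressions)
            return;

        if (faceExpressions.TryGetFaceExpressionWeight(
                OVRFaceExpressions.FaceExpression.JawDrop, out float jaw))
            mesh.SetBlendShapeWeight(0, jaw * 100f); // Unity weights are 0..100

        if (faceExpressions.TryGetFaceExpressionWeight(
                OVRFaceExpressions.FaceExpression.LipCornerPullerL, out float lip))
            mesh.SetBlendShapeWeight(1, lip * 100f);
    }
}
```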
Saturday
BUMP!
We need this feature to be pluggable into regular AudioSource components! I can't believe it's only usable with microphone input!