The facial expressions API and Meta Avatars are great (Quest Pro), but the avatar's face looks wrong while the user is talking and saying SS, SH, TUH, and other voiceless articulatory phonetics, which come up constantly in speech.
Idea: use the Quest Pro's microphone to listen for the phonemes that require the teeth to come together, and pass medium-confidence blendshapes for the teeth (and maybe the tongue too) through the Avatars API, so developers can map them to a face rig alongside the existing facial expressions blendshapes.
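A rough sketch of what the developer-facing side of this could look like. Everything here is hypothetical: the phoneme labels, blendshape names, and merge strategy are illustrative, not the actual Avatars API surface. The key idea is that audio-derived weights arrive pre-scaled by a medium confidence, so they nudge the rig rather than override the camera-based face tracking:

```python
# Hypothetical mapping from detected voiceless phonemes to viseme-style
# blendshape weights. Blendshape names are made up for illustration; the
# real Meta Avatars blendshape set differs.
PHONEME_VISEMES = {
    "SS":  {"jawClose": 0.9, "mouthStretch": 0.4, "tongueUp": 0.3},
    "SH":  {"jawClose": 0.8, "mouthPucker": 0.5, "tongueUp": 0.2},
    "TUH": {"jawClose": 0.7, "tongueUp": 0.8},
}

MEDIUM_CONFIDENCE = 0.6  # audio alone shouldn't claim full certainty

def viseme_blendshapes(phoneme: str) -> dict[str, float]:
    """Return audio-derived blendshape weights scaled to medium confidence."""
    targets = PHONEME_VISEMES.get(phoneme, {})
    return {name: w * MEDIUM_CONFIDENCE for name, w in targets.items()}

def merge_with_face_tracking(face: dict[str, float],
                             audio: dict[str, float]) -> dict[str, float]:
    """Combine camera-based and audio-based weights, taking the stronger
    signal per blendshape so the audio hints only ever add movement."""
    merged = dict(face)
    for name, weight in audio.items():
        merged[name] = max(merged.get(name, 0.0), weight)
    return merged
```

For example, if face tracking reports a mostly open jaw while the microphone detects "SS", the merged output closes the teeth without touching the other tracked shapes.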
This would make social presence with avatars a lot closer to feeling real!