OVR Lip Sync with multiple audio sources in Unity
I am trying to set up a Ready Player Me avatar in Unity with Oculus lip sync to animate mouth movement. I have multiple audio sources for multiple audio clips, because I want to cycle through the clips one after another using the keyboard. However, 'OVR Lip Sync Context' seems to work on only one audio source, whether I hold the audio sources on multiple game objects or on the same one. I tried adding an 'OVR Lip Sync Context' script for each audio source, but again only the first one works (any audio source after the first plays its audio with no animation). Does anybody know a way around this?
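
A minimal sketch of one possible workaround, assuming OVRLipSyncContext analyzes whatever its attached AudioSource is currently playing: keep a single AudioSource (with one OVRLipSyncContext on the same game object) and swap clips onto it from a list, rather than using one source per clip. The component and its fields (audioSource, clips) are illustrative, not part of the Oculus SDK:

    using UnityEngine;

    // Cycles through a list of clips on a single AudioSource, so the one
    // OVRLipSyncContext on the same game object always has audio to analyze.
    public class LipSyncClipCycler : MonoBehaviour
    {
        public AudioSource audioSource;  // the source the OVRLipSyncContext listens to
        public AudioClip[] clips;        // the clips to cycle through

        private int index = -1;

        void Update()
        {
            // Advance to the next clip on a key press.
            if (Input.GetKeyDown(KeyCode.Space) && clips.Length > 0)
            {
                index = (index + 1) % clips.Length;
                audioSource.Stop();
                audioSource.clip = clips[index];
                audioSource.Play();
            }
        }
    }

With this arrangement there is only ever one source for the context to analyze, so the animation should follow whichever clip is current.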

What could be causing my canned lipsync to fail?
I'm struggling to get canned lipsync to work in Unity and would welcome any thoughts. I have a character model with an animator, audio source, etc. Its audio source has an OVRLipsyncContextMorphTarget, and the scene has an OVRLipSync helper object. If I add an OVRLipSyncContext component, it works fine, so the morph target etc. is set up OK. The problem is that I need to use canned visemes for performance reasons. So, I:
1. create a viseme file from an audio clip
2. replace the OVRLipSyncContext component with OVRLipSyncContextCanned, and assign the viseme file from step 1 to its currentSequence field.
But when I play the audio clip, I get no lip movement at all. A couple of things I've tried:
- I've checked the viseme file: it's the right length and has viseme values large enough to produce visible movement.
- I've added logging to OVRLipSyncContextCanned to verify that it is indeed running and copying the frame info in the Update loop (it is).
This all appears to be set up the same as the similar object in the Oculus LipSync demo scene. Can anyone think of any gotchas I might have missed?
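
For debugging a setup like this, a small probe can help narrow down whether viseme frames are reaching the context at all, or whether the break is between the context and the morph target. This is a sketch assuming the OVRLipSyncContextBase API from the Unity integration (GetCurrentPhonemeFrame() returning an OVRLipSync.Frame with a Visemes array); attach it next to the canned context:

    using UnityEngine;

    // Logs the strongest viseme each frame so you can see whether frame data
    // is reaching the context that the morph target reads from.
    public class VisemeProbe : MonoBehaviour
    {
        private OVRLipSyncContextBase context;

        void Start()
        {
            // Both OVRLipSyncContext and OVRLipSyncContextCanned derive from
            // OVRLipSyncContextBase, so the probe works for either.
            context = GetComponent<OVRLipSyncContextBase>();
        }

        void Update()
        {
            if (context == null)
                return;

            OVRLipSync.Frame frame = context.GetCurrentPhonemeFrame();
            float max = 0f;
            int maxIndex = 0;
            for (int i = 0; i < frame.Visemes.Length; i++)
            {
                if (frame.Visemes[i] > max)
                {
                    max = frame.Visemes[i];
                    maxIndex = i;
                }
            }
            Debug.Log("Strongest viseme: " + maxIndex + " (" + max.ToString("F3") + ")");
        }
    }

If the probe logs non-zero visemes but the mouth stays still, the problem is more likely in the morph-target wiring than in the canned sequence itself.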

Precomputed/Canned Lip Sync: "With Offline Model" option
When generating a precomputed viseme asset in Unity from an audio clip, there are two menu items under "Oculus/Lip Sync":
- "Generate Lip Sync Assets"
- "Generate Lip Sync Assets With Offline Model"
The documentation only mentions the first of these: https://developer.oculus.com/documentation/unity/audio-ovrlipsync-precomputed-unity/
So, what does the "With Offline Model" part mean? When might (or should) one use it?

Lip Sync Sequences stop working until rebuilt
Hey guys. I'm using UE4. Not sure if I should have posted this in Unreal Development or Avatar Development, but this is specific to UE4, so Unreal Dev seems right. Anyway, my issue is that the lip sync stops working when the project is closed. It'll work great; then I close the project, open it again, and there's no lip movement. The issue is the sequence: if I right-click and make a new one, it works. But obviously remaking all my sequences every time I open the project is not going to cut it. I'm using UE4 4.22.3 and the latest lip sync plugin, released a few days ago. Any ideas what I can do here?
Edit: https://streamable.com/vu5b

Need help with integrating Lip-Sync in Unity
Hi, I tried integrating lip-sync using the Oculus Integration from the Unity Asset Store, following the instructions from the lip-sync guide, but it doesn't work on my instantiated character until my app loses focus. It starts working after the app loses focus (if I alt-tab back into the app), but then it interferes with my microphone and Photon Voice stops working. Any help would be appreciated, thanks.
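
One way to narrow this down is to check which capture devices exist and which one is actually recording when focus changes, since symptoms like this can occur when two systems (the lip sync mic input and Photon Voice) try to open the same microphone. A minimal diagnostic sketch using Unity's standard Microphone API:

    using UnityEngine;

    // Logs the available capture devices and whether each is currently
    // recording, to help confirm which system owns the microphone.
    public class MicDeviceProbe : MonoBehaviour
    {
        void LogDevices(string label)
        {
            foreach (string device in Microphone.devices)
            {
                Debug.Log(label + " - mic: " + device +
                          ", recording: " + Microphone.IsRecording(device));
            }
        }

        void Start()
        {
            LogDevices("Start");
        }

        void OnApplicationFocus(bool hasFocus)
        {
            // The reported symptom involves focus changes, so log here too.
            LogDevices(hasFocus ? "Focus gained" : "Focus lost");
        }
    }

Comparing the logs before and after alt-tabbing may show whether mic ownership is flipping between the lip sync input and Photon Voice.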

[UE4] Will there be any updates coming to Oculus Lip Sync plugin?
Hello. First, I just wanted to say great job, Oculus, on the lip sync plugin. It works amazingly, the results are fantastic, and it's easy to use. It's been a game changer for our project. However, there are some limitations. For example, with the canned version, the lip sync sequence must be set in the editor and can't be changed at run time; as in, you can only get one voice line out of it. I'm assuming on your end it'd be relatively easy to create a function to change this at run time, e.g. "Update Lip Sync Sequence", but it's slightly beyond my skills as I work only with UE4 blueprints. Is this something I can expect an update for, or should I instead be digging into the SDK and learning how to set this up in C++? Thanks for any info you can provide. And thanks again to anyone involved with this plugin, it's great.
Edit: Oh, also, does anyone know anything about getting the laughter detection from the live detection into the canned system? It works great, but it'd be really nice to have it in the canned version as well.

Using Lip Sync for Avatars SDK in Unity
Hello, I'm using the Avatar SDK and Photon for a multiplayer VR experience. I already got the Avatar SDK to work, but now I want the avatars to move their lips while talking over VoIP. My current script looks like this:

    void OnAudioFilterRead(float[] data, int channels)
    {
        avatar.UpdateVoiceVisualization(data);
    }

The filter receives the data from the audio source, but the lips just won't move. Sometimes I get an InvalidOperationException:

    InvalidOperationException: Collection was modified; enumeration operation may not execute.
    System.ThrowHelper.ThrowInvalidOperationException (System.ExceptionResource resource) (at <f2e6809acb14476a81f399aeb800f8f2>:0)
    System.Collections.Generic.List`1+Enumerator.MoveNextRare () (at <f2e6809acb14476a81f399aeb800f8f2>:0)
    System.Collections.Generic.List`1+Enumerator.MoveNext () (at <f2e6809acb14476a81f399aeb800f8f2>:0)
    OvrAvatar.Update () (at Assets/Oculus/Avatar/Scripts/OvrAvatar.cs:608)

Anyone got any hints on how to get this working?
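
The stack trace suggests a threading hazard: OnAudioFilterRead runs on Unity's audio thread, while OvrAvatar.Update iterates its internal collections on the main thread, so calling UpdateVoiceVisualization from the filter callback can modify a list mid-enumeration. A sketch of one possible fix, buffering the samples in the filter callback and forwarding them from Update on the main thread (the component name and wiring are illustrative):

    using UnityEngine;

    // Buffers voice samples on the audio thread and forwards them to the
    // avatar from the main thread, so OvrAvatar's internal collections are
    // never touched mid-enumeration.
    public class VoiceVisualizationForwarder : MonoBehaviour
    {
        public OvrAvatar avatar;  // the avatar whose lips should move

        private readonly object bufferLock = new object();
        private float[] pendingSamples;

        void OnAudioFilterRead(float[] data, int channels)
        {
            // Audio thread: copy the samples, but don't touch the avatar here.
            lock (bufferLock)
            {
                if (pendingSamples == null || pendingSamples.Length != data.Length)
                    pendingSamples = new float[data.Length];
                data.CopyTo(pendingSamples, 0);
            }
        }

        void Update()
        {
            // Main thread: hand the most recent buffer to the avatar.
            lock (bufferLock)
            {
                if (pendingSamples != null)
                {
                    avatar.UpdateVoiceVisualization(pendingSamples);
                    pendingSamples = null;
                }
            }
        }
    }

The component still needs to sit on the same game object as the AudioSource carrying the VoIP output, since OnAudioFilterRead only receives data from that source.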