OVRLipSync on HoloLens 2
Hello, I am building an application in Unity and using the OVRLipSync SDK to synchronize audio with my avatar. The SDK synchronizes audio with my avatar's lips when running in the Unity Editor; however, when I deploy to HoloLens 2, OVRLipSync does not work. Can anyone help me troubleshoot this? I use OVRLipSyncMicInput to capture the audio when I speak; the settings are shown in the attached picture. During gameplay, I have also checked that my app can detect the HoloLens 2's microphone and that the mic control is set to constant speak. I am using Unity 2022.3.9f1.

Using Oculus Lipsync in live mode (OVRLipSync Actor Component)
Hey there! I recently started looking at Oculus Lipsync for Unreal Engine. I downloaded the demo project with the plugin and got it working with 4.27. I have had a look around the demo project and tried to understand how it works. However, I have noticed I can only get the "playback" demo to work, not the "live" demo. I have not touched anything in the demo project; the plugin works well with playback audio, and I even managed to hook up my own character with no issues at all. I just can't get the live mode to work, which is ultimately what I would like to be using this for. I wonder whether I need to specify a microphone or something in the project's config, but there's absolutely nothing about it in the documentation, and I assumed it should work out of the box just like the playback demo.

One last bit of information for anyone who may be able to help: the documentation states, "When a prediction is ready, the OVRLipSync Actor component will trigger the On Visemes Ready event." I've hooked up a string and noticed this event never gets called, no matter what I do in live mode. I'd love some insight if anyone can help. Cheers!

OVR Lip Sync with multiple audio sources in Unity
I am trying to set up a Ready Player Me avatar in Unity with Oculus Lipsync to animate mouth movement. I have multiple audio sources for multiple audio clips, because I want to be able to manually cycle through the clips one after another using the keyboard. However, OVRLipSyncContext seems to work with only one audio source, whether I put the audio sources on multiple game objects or all on the same one. I tried adding an OVRLipSyncContext script for each audio source, but again only the first one works (any audio source after the first plays its audio with no animation). Does anybody know a way around this?

Why does the Oculus Lipsync for Unity Plugin not support the iOS App Store?
I built an iOS app with Oculus LipSync through Unity, but it won't be accepted by the iOS App Store. Why does the Oculus Lipsync for Unity Plugin not support the iOS App Store? Is there now a version that supports the iOS App Store?

Can't find OculusPlugin.dll when using OVRLipSync on macOS
I tried to use OVRLipSync in Unity. I'm using macOS, and I got the errors below. I have already confirmed that the scripting backend is set to IL2CPP, and I know OculusPlugin.bundle is included in Assets/Oculus/LipSync/Plugin/MacOS. Does anyone know how to fix this? Please let me know if you can.

DllNotFoundException: OVRLipSync assembly: type: member:(null)
OVRLipSync.Initialize () (at Assets/Oculus/LipSync/Scripts/OVRLipSync.cs:267)
OVRLipSync.Awake () (at Assets/Oculus/LipSync/Scripts/OVRLipSync.cs:217)

DllNotFoundException: OVRLipSync assembly: type: member:(null)
OVRLipSync.Initialize () (at Assets/Oculus/LipSync/Scripts/OVRLipSync.cs:267)
OVRLipSync.CreateContext (System.UInt32& context, OVRLipSync+ContextProviders provider, System.Int32 sampleRate, System.Boolean enableAcceleration) (at Assets/Oculus/LipSync/Scripts/OVRLipSync.cs:309)
OVRLipSyncContextBase.Awake () (at Assets/Oculus/LipSync/Scripts/OVRLipSyncContextBase.cs:113)

OVRLipSyncContextBase.SetSmoothing: An unexpected error occured.
UnityEngine.Debug:LogError (object)
OVRLipSyncContextBase:set_Smoothing (int) (at Assets/Oculus/LipSync/Scripts/OVRLipSyncContextBase.cs:69)
OVRLipSyncContextMorphTarget:Start () (at Assets/Oculus/LipSync/Scripts/OVRLipSyncContextMorphTarget.cs:110)

Unity Oculus LipSync Demo is not available on the iOS App Store
Hi all, I built an iOS app with Oculus LipSync through Unity, but it won't be accepted by the iOS App Store. The submission was rejected with:

ITMS-90426: Invalid Swift Support - The SwiftSupport folder is missing. Rebuild your app using the current public (GM) version of Xcode and resubmit it.

What is the cause of this problem, and how can I fix it? I hope to get some advice and help. Thanks.

Why does the Oculus Lipsync for Unity Plugin not support the iOS App Store?
I built an iOS app with Oculus LipSync through Unity, but it won't be accepted by the iOS App Store. Why does the Oculus Lipsync for Unity Plugin not support the iOS App Store? Is there now a version of Oculus LipSync for Unity that supports the iOS App Store?

What could be causing my canned lipsync to fail?
I'm struggling to get canned lipsync to work in Unity and would welcome any thoughts. I have a character model with an animator, audio source, etc. Its audio source has an OVRLipSyncContextMorphTarget, and the scene has an OVRLipSync helper object. If I add an OVRLipSyncContext component, it works fine, so the morph target etc. is set up OK. The problem is that I need to use canned visemes for performance reasons. So, I:

1. create a viseme file from an audio clip
2. replace the OVRLipSyncContext component with OVRLipSyncContextCanned, and assign the viseme file from step 1 to its currentSequence field.

But when I play the audio clip, I get no lip movement at all. A couple of things I've tried:

- I've checked the viseme file: it's the right length and has viseme values which are large enough to produce visible movement.
- I've added logging to OVRLipSyncContextCanned to verify that it is indeed running and copying the frame info in the Update loop (it is).

This all appears to be set up the same as the similar object in the Oculus LipSync demo scene. Can anyone think of any gotchas that I might have missed?

Precomputed/Canned Lip Sync: "With Offline Model" option
When generating a precomputed viseme asset in Unity from an audio clip, there are two menu items under "Oculus/Lip Sync":

- "Generate Lip Sync Assets"
- "Generate Lip Sync Assets With Offline Model"

The documentation only mentions the first of these: https://developer.oculus.com/documentation/unity/audio-ovrlipsync-precomputed-unity/

So, what does the "With Offline Model" part mean? When might (or should) one use it?
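For readers comparing their own setup against the canned-lipsync posts above, the wiring they describe can be sketched roughly as follows. This is a minimal, hedged sketch, not a verified implementation: it assumes the component and field names mentioned in the posts (OVRLipSyncContextCanned and its currentSequence field, an OVRLipSyncContextMorphTarget on the same object) from the standard Oculus Lipsync Unity package; the class name, the sequence asset type name, and the Play method here are illustrative only.

```csharp
using UnityEngine;

// Sketch of the canned (precomputed) lip sync setup described in the posts above.
// Assumes the Oculus Lipsync Unity package is imported into the project.
[RequireComponent(typeof(AudioSource))]
public class CannedLipSyncExample : MonoBehaviour
{
    // Viseme sequence asset generated from the audio clip via the
    // "Oculus/Lip Sync/Generate Lip Sync Assets" menu item (assign in the Inspector).
    public OVRLipSyncSequence sequence;

    private AudioSource audioSource;
    private OVRLipSyncContextCanned cannedContext;

    void Awake()
    {
        audioSource = GetComponent<AudioSource>();

        // OVRLipSyncContextCanned stands in for OVRLipSyncContext when using
        // precomputed visemes: instead of analyzing the audio, it reads frames
        // from currentSequence in step with the playing AudioSource.
        cannedContext = gameObject.AddComponent<OVRLipSyncContextCanned>();
        cannedContext.currentSequence = sequence;

        // An OVRLipSyncContextMorphTarget on the same GameObject (with its
        // blend shapes assigned) then consumes the context's viseme frames,
        // exactly as in the audio-driven setup the first poster had working.
    }

    public void Play()
    {
        // Playing the clip drives the canned sequence; the scene still needs
        // the OVRLipSync helper object mentioned in the post.
        audioSource.Play();
    }
}
```

Note that this mirrors the two steps listed in the "canned lipsync" post: the asset generated in step 1 is the object assigned to currentSequence in step 2.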