Using Oculus Lipsync in live mode (OVRLipSync Actor Component)

Honored Guest

Hey there!

I recently started looking at Oculus Lipsync for Unreal Engine. I downloaded the demo project with the plugin and got it working in 4.27.
I've looked around the demo project and tried to understand how it works, but I've noticed I can only get the "playback" demo to work, not the "live" demo. I haven't touched anything in the demo project, and the plugin works well with playback audio. I even hooked up my own character with no issues at all. I just can't get the live mode to work, which is ultimately what I'd like to be using this for.

I'm wondering if perhaps I need to specify a microphone somewhere in the project's config, but there's absolutely nothing about it in the documentation, and I assumed it would work out of the box just like the playback demo.

One last bit of information for anyone who may be able to help: the documentation states, "When a prediction is ready, the OVRLipSync Actor component will trigger the On Visemes Ready event." I've hooked up a print string and noticed this event never gets called in live mode, no matter what I do.

I'd love some insight if anyone can help.



Honored Guest

4.26 seems to work with the existing demo

4.27 errors with:

LogVoiceCapture: Warning: No voice capture device Default Device found.
LogOvrLipSync: Error: Can't create voice capture.


So I am running into the same problem. I am thinking of modifying the code so one can select the voice capture device, since it is currently hard-coded.



Hey Steve!

After posting this I kept playing around for a couple of days and *think* I have a solution. I don't know how efficient, safe, or long-lasting this solution is, because this is the first time I've needed to alter any C++ for my project. Your solution of allowing you to select a voice capture device sounds much better than what I have, but here's what I did so far.

In this directory of my project's OVR plugin: "projectname\Plugins\OVRLipSync\Source\OVRLipSync\Private"
I edited the "OVRLipSyncLiveActorComponent.cpp" file.
You were right about it not detecting a default device and then failing.

After the includes at the top of this file, it used to read:
#define DEFAULT_DEVICE_NAME TEXT("Default Device")

I then removed the #ifndef and #endif and changed it to this:
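(The code block from the original post didn't survive. Based on the surrounding description, the change was presumably along these lines; the empty device name is an assumption, the idea being that an empty string tells the voice module to fall back to the OS default input device rather than look for a device literally named "Default Device":)

```cpp
// Hypothetical reconstruction of the edit described above:
// pass an empty name so the platform's default capture device is used.
#define DEFAULT_DEVICE_NAME TEXT("")
```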

I'm not sure whether I needed to remove the #ifndef and #endif; I think the TEXT("Default Device") string is what was messing it up. After recompiling my project's source code, it now seems to just detect my Windows default input device. Like I said, a solution like yours that lets you set which device you want would be better, but I don't know how I would go about doing that. Best of luck!!

Very cool!!! Thanks for taking the time to fix this. I just started recompiling the OVR examples and making changes, so I may take a stab at adding a property to the component that lets one enter the device name as text.
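(For anyone attempting this, such a property might look roughly like the sketch below. This is only an illustration, not the plugin's actual code: the CaptureDeviceName property is hypothetical, and the creation call is based on UE4's FVoiceModule::CreateVoiceCapture, which accepts a device-name string; check the plugin source for the real call site and arguments.)

```cpp
// In OVRLipSyncLiveActorComponent.h - expose a device name on the component
// so it can be set in the editor instead of being hard-coded:
UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "LipSync")
FString CaptureDeviceName; // empty = use the platform default device

// In OVRLipSyncLiveActorComponent.cpp - where the voice capture is created,
// pass the property through instead of the hard-coded macro:
VoiceCapture = FVoiceModule::Get().CreateVoiceCapture(CaptureDeviceName);
```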

Honored Guest

Did you guys ever manage to get it working in real time?

Besides the change suggested by am-Rory, you need to edit DefaultEngine.ini and change the bHasVoiceEnabled property to True:
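(The screenshot from the original post is missing. In a stock UE4 project this setting typically lives under the [OnlineSubsystem] section of Config/DefaultEngine.ini; the section name here is an assumption based on standard engine config, so match it to wherever the property already appears in your file:)

```ini
[OnlineSubsystem]
bHasVoiceEnabled=true
```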


That enables real-time voice capture and face animation.