Forum Discussion

🚨 This forum is archived and read-only. To submit a forum post, please visit our new Developer Forum. 🚨
Luandry
Honored Guest
6 years ago

Lipsync From Other Means

Hello there, I am new to the forum, and working on a project currently for my Final Year Project in college.
I am trying to build a virtual assistant which maps the output from a dialog system to the mouth movements of a model I have designed.
Currently, I have the model performing lipsync via my microphone input to UE4.

My question is whether it is possible to feed the lip sync from a different input source.
For example, can I stream a .wav file into Unreal Engine and use it as the voice input to the lip sync in real time?
Or is it possible to read an audio file as it is written to disk and use that as the lip-sync input?
I am trying to figure out some way of passing in the audio data, potentially from a network source, but definitely from a source other than UE4's microphone capture.

Is what I am trying to do possible? Please let me know what you think, as this is the final piece of the puzzle that is this project.
Thank You in advance!

3 Replies

Replies have been turned off for this discussion
  • Hi,

    My project is on Oculus Quest and while we have realtime multiplayer VOIP working, I found that the CPU cost for doing LipSync from VOIP data was simply too expensive on Quest.

    I switched to "canned" LipSync using precomputed viseme data (essentially a list of float values). I found the Unity LipSync sample application more useful for me than the UE4 version - as it has a tool to generate viseme data from an audio asset as well as a sample precomputed viseme data file that I ended up adding to my UE4 code.

    So what you're trying to do should be possible on Rift hardware, provided that you can get your audio data as raw PCM bytes that you can feed to the LipSync system. See UOVRLipSyncActorComponent::FeedAudio for how this is done.

  • Luandry
    Honored Guest
    Thank you for your reply! I have successfully accessed my audio data as a ByteBuffer containing the raw PCM bytes, but the FeedAudio function doesn't seem to do anything. When I call Start before/after feeding it, I just get lip sync from my microphone rather than from the byte buffer. Do you have any idea how to use the FeedAudio function in UE4 yourself?

    I have been trying to contact the Oculus Support Team for weeks now, but still do not have an answer.
  • The mic input processing is done in OnVoiceCaptureTimer(), which is invoked continuously as a UE4 timer function initialized as part of Start(). OnVoiceCaptureTimer() reads the mic input and transforms it into the byte array format expected by FeedAudio().
    What I did for my project was remove OnVoiceCaptureTimer() entirely, along with its initialization in Start(). Then I populated the Visemes member data directly with my canned viseme float data, and finally called OnVisemesReady.Broadcast() to send this data to my Pawn driving the OVR Avatar.
    It sounds like you might be close to getting your application working as you already have the audio data as raw PCM bytes. You might have a look at the final two lines of OnVoiceCaptureTimer() and replicate something similar without accessing the microphone:
    TArray<uint8> AudioBuffer(VoiceData.GetData(), VoiceDataCaptured);
    FeedAudio(AudioBuffer);