Forum Discussion
Woody3D
7 years ago · Expert Protege
Hyper-optimization technique for Quest facial/head/eye performance animation?
I have just one question, really: if I have a character with perfect facial rigging (over 70 face bones) and I animate the face (in 3ds Max) so that:
Frame 0 = Base Pose
Frame 1 = Viseme AA
Frame 2 = Viseme EE
etc.
Frame 25 = Smile
Frame 26 = Blink
etc.
How do I turn that into anything in Unreal? Is there any way to 'capture' the relative positions of the face bones at frame 1 or 2, etc., call that a 'preset', and put it on a slider? In 3ds Max I invented a technique where I animate faces in real time with my Wacom pen, using the Motion Capture utility with 'pose-o-matic', which does allow me to 'capture' the relative positions of a collection of bones on a slider (it looks just like Morpher). I'm able to turn my keyframed visemes/expressions into usable tools (sliders), then animate the sliders in real time against an audio playback, using motion capture and my pen, to build up layers of complex facial animation (and head and eye movement). BUT I can't find a way to emulate this powerful process in Unreal.
I know I could go to each keyframe and output a morph target, but my whole idea was to do all the animation with bones only (for Oculus Quest) and thereby save the 20-50+ morph targets usually required. Maybe keep just a couple for forehead wrinkles here and there. BUT with my visemes on successive keyframes of the Max timeline, I am uncertain how to use any of that in Unreal!
I don't even know what to google!! 'Converting visemes in keyframes on a timeline to sliders for real-time motion capture using motion controllers'?

THANK YOU!
Robert
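
[For reference, a minimal sketch of what such a 'preset on a slider' could look like on the Unreal side, assuming the face bones' local transforms at the relevant frame have already been captured; the FFacePosePreset struct and ApplyPreset function are hypothetical names, not engine API — only the core types (FTransform, FQuat, FMath, TMap) are real.]

```cpp
// Hypothetical sketch: store one captured frame's face-bone local transforms
// as a named preset, then blend it over the base pose with a 0..1 slider.
#include "CoreMinimal.h"

struct FFacePosePreset
{
    FName PresetName;                      // e.g. "Viseme_AA" (captured at frame 1)
    TMap<FName, FTransform> BoneLocalPose; // bone name -> local transform at that frame
};

// Blend each captured bone transform against the base pose by Slider (0..1).
// OutPose receives the blended local transform for every bone in the preset.
static void ApplyPreset(const FFacePosePreset& Preset,
                        const TMap<FName, FTransform>& BasePose,
                        float Slider,
                        TMap<FName, FTransform>& OutPose)
{
    const float Alpha = FMath::Clamp(Slider, 0.f, 1.f);
    for (const TPair<FName, FTransform>& Pair : Preset.BoneLocalPose)
    {
        const FTransform* Base = BasePose.Find(Pair.Key);
        if (!Base)
        {
            continue; // bone not present in the base pose; skip it
        }
        FTransform Blended;
        Blended.SetLocation(FMath::Lerp(Base->GetLocation(), Pair.Value.GetLocation(), Alpha));
        Blended.SetRotation(FQuat::Slerp(Base->GetRotation(), Pair.Value.GetRotation(), Alpha));
        Blended.SetScale3D(FMath::Lerp(Base->GetScale3D(), Pair.Value.GetScale3D(), Alpha));
        OutPose.Add(Pair.Key, Blended);
    }
}
```

[In practice, each weighted preset would be blended per bone and the result pushed to the skeleton every tick — which is essentially what Unreal's built-in Pose Asset feature does; see the reply below.]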
1 Reply
Replies have been turned off for this discussion
- Neontop (Heroic Explorer): I think you have to look at the Oculus Hand template, which is included in the Oculus distribution. The hands are animated with bones and translated into gesture poses.
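
[Expanding on that reply: Unreal can turn each frame of an imported Animation Sequence into a named pose via a Pose Asset (right-click the sequence in the Content Browser → Create → Pose Asset), and each pose is then driven by an animation curve that behaves like the sliders described in the question. Below is a minimal sketch of the runtime side, assuming a custom UAnimInstance whose AnimGraph feeds these floats into the Pose Asset through Modify Curve nodes; the class and property names are hypothetical.]

```cpp
// Hypothetical sketch: an AnimInstance exposing slider-style floats that the
// AnimGraph reads each frame (via Modify Curve nodes feeding a Pose Asset
// node). Real-time input (pen, motion controller) writes these floats.
#pragma once

#include "CoreMinimal.h"
#include "Animation/AnimInstance.h"
#include "FaceSliderAnimInstance.generated.h"

UCLASS()
class UFaceSliderAnimInstance : public UAnimInstance
{
    GENERATED_BODY()

public:
    // 0..1 weights matching the Pose Asset's pose curves
    // (e.g. frame 1 of the source animation = Viseme AA).
    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Face")
    float VisemeAA = 0.f;

    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Face")
    float VisemeEE = 0.f;

    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Face")
    float Smile = 0.f;

    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Face")
    float Blink = 0.f;

    // Keep incoming slider values in range before the AnimGraph reads them.
    virtual void NativeUpdateAnimation(float DeltaSeconds) override
    {
        Super::NativeUpdateAnimation(DeltaSeconds);
        VisemeAA = FMath::Clamp(VisemeAA, 0.f, 1.f);
        VisemeEE = FMath::Clamp(VisemeEE, 0.f, 1.f);
        Smile    = FMath::Clamp(Smile, 0.f, 1.f);
        Blink    = FMath::Clamp(Blink, 0.f, 1.f);
    }
};
```

[Because the poses are bone-based, this keeps the Quest build free of the 20-50+ morph targets the question mentions; layering several weighted poses against an audio track is then just a matter of writing the floats in real time.]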