I tend to do a heap of different voices - maybe 3 or 4 - for characters in one scene. So I want to record all the audio in Logic, then export only one character's voice at a time and use Character Animator to animate the puppet.
For example, X, Y and Z are having a conversation: I record all 3 parts, edit the audio so it's tight and usable, and then export an individual character's take.
I'd take this audio track into Character Animator somehow, get Ch to animate the mouth based on the audio, and then go through the scene adding puppet movement (triggers, blinking, etc.) from the webcam.
Can I do this? I can't get my tiny mind around it. Help me, Adobe. You're my only hope
I'd like to know how to do this too. The only thing I can think of right now is to play the audio track through the speakers and hope that Ch doesn't pick up any other noise along with it (difficult in my office, with construction going on nearby).
EDIT: I figured this out; it's pretty simple:
Once Ch is done computing the lip sync, you can turn on facial capture and record some motion while listening to the audio. It won't animate the lip sync while you're recording, but it will on playback afterward.
Yep. At this point, you'll want to use "Compute Lip Sync from Scene Audio". For the time being, this works on all of the audio in the entire scene, so if you have 3 characters, you'll need to run the process 3 times, once per puppet, with only that character's audio in the scene each time.
In the future, I hope it will be a little easier, but for now, the workaround isn't so bad.