Unfortunately I can't answer your second question, only the first one.
Character Animator animates the mouth based on what you're saying: it analyzes the sound. Only three default mouth shapes are driven by the camera: Surprised, Neutral, and Smile.
What you can do is assign keystrokes to the phonemes (this is what the Simpsons team did, if I understood correctly) and then make your puppet talk by pressing the keystrokes.
You can also record audio (or have someone record the talking for you) and then load it into Character Animator and use it as the sound source.
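To picture the keystroke approach Monika describes, here's a minimal sketch in Python. The key bindings and viseme names are made up for illustration; Character Animator lets you assign your own triggers, and this is not Adobe's code.

```python
# Illustrative sketch only -- not Character Animator's actual implementation.
# The idea of keystroke-driven mouths: each key is bound to one viseme
# (a mouth image), and holding the key swaps in that artwork.
KEY_TO_VISEME = {
    "a": "Aa",  # open mouth, as in "father"
    "o": "Oh",  # rounded mouth, as in "go"
    "m": "M",   # closed lips, as in "map"
    "f": "F",   # teeth on lower lip, as in "fun"
    "e": "Ee",  # wide mouth, as in "see"
}

def mouth_for_key(key: str, current: str = "Neutral") -> str:
    """Return the viseme to display while `key` is held down."""
    return KEY_TO_VISEME.get(key.lower(), current)

print(mouth_for_key("a"))  # -> Aa
print(mouth_for_key("z"))  # -> Neutral (unbound keys keep the current mouth)
```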
Hello Monika and thank you for your prompt reply!
I understand the method of changing the mouth images by using triggers, but it was the use of the tracking that I was more interested in. A lot of mouth shapes are made without words; I know you could add these as extra images, but that adds another step to the process... assigning triggers!?
When you zoomed into the Camera & Microphone panel... you can clearly see all the tracking dots around the mouth (made up of two lines) and how well they track the movement of the mouth, but it seems that only 3 variables are used: are the lines together, are they apart, or are the end tracking dots wide. It looks as though a lot more could be done with these... hopefully in the future!?
I used the term "visemes" as this refers to the mouth shapes made; one viseme/shape can serve for several phonemes, which is how Character Animator uses them.
Thank you again
1. The louder and clearer the mic signal is without clipping, the better. Note that in Lip Sync there is an option for "Keyboard Input" - if this is on, then pressing the first letter of each of those mouth sounds will trigger it. For extra stuff like frowns, yells, etc., we do recommend making these their own custom key-triggered layers. And Monika is correct about the current 3 webcam-controlled behaviors. We may add more in the future, but we have found that many users prefer the reliability of key triggers (press F to frown, as opposed to frowning on camera and not seeing it trigger if the lighting/angle/etc. is off).
2. You should be able to hear previous recordings, so it's possible the input/output settings may not be set up correctly. Check Character Animator > Preferences to edit the audio preferences. When I record audio I listen via headphones so the output won't get picked up by the microphone and add extra noise.
You can also adjust the "Mouth Strength" parameter in the Face behavior to use the shape of your mouth on camera to change the puppet's mouth size. By default, it's set to 0 (off).
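One way to think about a Mouth Strength-style parameter is as a multiplier that blends tracked mouth openness into the puppet's mouth scale. The function name and formula below are assumptions for illustration only, not how Adobe actually computes it:

```python
# Hypothetical sketch: how a "Mouth Strength" style slider could blend
# webcam-tracked mouth openness into a puppet's mouth scale.
# The names and formula are illustrative assumptions, not Adobe's code.
def puppet_mouth_scale(tracked_openness: float, strength_pct: float) -> float:
    """
    tracked_openness: 0.0 (closed) .. 1.0 (wide open) from the tracking dots
    strength_pct: the slider value; 0 = off, 100 = full, 500 = exaggerated
    Returns a scale factor applied to the puppet's mouth artwork.
    """
    strength = strength_pct / 100.0
    # At strength 0 the camera has no effect (scale stays 1.0);
    # higher strengths exaggerate the tracked openness around a midpoint.
    return 1.0 + strength * (tracked_openness - 0.5)

print(puppet_mouth_scale(0.5, 0))    # -> 1.0 (slider off, no change)
print(puppet_mouth_scale(1.0, 500))  # -> 3.5 (wide open at 500% goes big)
```

This also makes the later "500%" experiments in the thread easier to picture: a large slider value just amplifies whatever the tracking dots report.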
Hi Dave and thanks for the full reply!
1. Live audio is a bit of a problem, but the suggestions you and others have given have helped.
I was fascinated that the imported audio file worked so well, and I was left wondering how the lips are synced from an audio signal... but I suppose that's down to the algorithm: in simple terms, a particular sound triggers a particular mouth shape!? Is the same method used for imported audio as for live audio?
I suppose I was thinking/hoping that there was an algorithm using the tracking dots around the mouth that recognized when a particular shape was formed, so a corresponding viseme/mouth shape would be triggered!? Maybe an idea for the future?
2. You were spot on! I have to plug myself in to hear the sound and use an external Focusrite audio interface, but the settings had defaulted to Built-in Output. I changed the settings and it's working great now!
Hello Victoria... and thank you for taking the time to reply!
I opened up Seth... one of Dave's characters... to try out your suggestion, but although I tried this via the Puppet and Scene panels, I didn't notice any difference... even when I went to 500%. I must be missing something!?
Yes, CH listens for 60+ sounds and translates them into the 11 different visemes, or mouth shapes. In our tests this got significantly better results than trying to track the mouth shape. Yes, live and imported audio are treated the same.
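The many-to-one mapping Dave describes (60+ detected sounds collapsing into 11 visemes) can be sketched as a simple lookup. The phoneme labels and groupings below are illustrative guesses, not CH's actual table:

```python
# Rough sketch of the many-to-one idea: dozens of recognized speech
# sounds collapse into a small set of visemes. The phoneme labels and
# groupings here are illustrative, not Character Animator's real 60+ -> 11 map.
PHONEME_TO_VISEME = {
    # several bilabial sounds share one closed-lips mouth
    "p": "M", "b": "M", "m": "M",
    # labiodentals share the teeth-on-lip mouth
    "f": "F", "v": "F",
    # rounded vowels share one mouth shape
    "ow": "Oh", "uw": "Oh", "oy": "Oh",
    # open vowels share another
    "aa": "Aa", "ae": "Aa", "ah": "Aa",
}

def visemes_for(phonemes):
    """Map a recognized phoneme sequence to the viseme sequence to draw."""
    return [PHONEME_TO_VISEME.get(p, "Neutral") for p in phonemes]

print(visemes_for(["b", "aa", "m"]))  # -> ['M', 'Aa', 'M']
```

Because the mapping works on recognized sounds rather than camera pixels, the same lookup applies whether the audio is live or imported, which matches Dave's point above.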
Seth definitely goes nuts for me if you crank up Mouth Strength to 500%. And the default Photoshop/Illustrator blue guy, Stannie, should do it as well (you'll see different results depending on whether the mouth is independent or not). Maybe you have an older version of the character? The latest is here: Adobe Creative Cloud
I've only recently joined CC, so I'm hoping I've got the latest version... Beta 5 (x141)?
Are the Neutral, Smile and Surprised Tags worked off the Camera?
Are there any plans to improve the tracking dots system? They do work well... it's amazing how they pick up the face. I found your idea of putting a light in front helps a lot. I got a cheap LED desk lamp and it works really well.
I don't know if I thanked you for sending the link to your great files... I'm just playing with them at the moment... stripping them down and building them back up... I'll get there in the end!
I guess you're just about starting work! Amazing that around 4 o'clock here, the system starts slowing down... must be you lot coming online!?
I'm just about to put my feet up! So have a good one!
Yes, the 3 you listed are the only camera-controlled ones. Those only appear if there's silence.
Currently I think the tracking dots work pretty well most of the time, but if there's something that feels off let us know.