Hi, my mouth isn't working properly - it only lip syncs the neutral layer when I increase the mouth value, and it's not cycling through the other mouth sounds. Help!
Could you expand a bit please? (The screenshot is great though.) What does “increase the mouth value” mean - are you talking about a property of the Face behavior? Are the other mouth positions appearing, but not in sync with the voice track? Is this during an exported video, or while editing? Could you share the generated visemes in the timeline which are not causing mouth positions to appear? When you mention “not cycling through other mouth sounds”, are you talking about cycle layers, or just that the other mouth positions are never displayed?
The basic puppet structure seems okay, although I normally hide all the mouth positions except Neutral. Have you checked the tags for each of the mouth layers? Did it get them all right?
Also, you only have a subset of the visemes included, so many of the mouth positions will fall back to Neutral. There are more you have to define to make all the mouth positions visible.
In the timeline view, after you have created a take, you can right-click on the take to manually add your own visemes. This can be a useful way to create each viseme in turn and see if it works. Almost like saying the alphabet! 😉
My *guess* is you have not created image layers for all the visemes, so only some are working. You need to create all of them (even if some are direct copies of other ones). Neutral is the default it falls back to if all else fails.
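Conceptually, the fallback works something like this - a rough Python sketch, not how Character Animator is actually implemented, and the layer names below are made up for illustration:

```python
# Hypothetical set of mouth layers the artist actually drew:
drawn_layers = {"Neutral", "Aa", "Oh", "M"}

def mouth_layer_for(viseme: str) -> str:
    """Return the drawn layer for a viseme, falling back to Neutral."""
    return viseme if viseme in drawn_layers else "Neutral"

# A few visemes generated from an audio take (hypothetical):
take = ["Aa", "Ee", "M", "W-Oo", "Oh"]
print([mouth_layer_for(v) for v in take])
# "Ee" and "W-Oo" have no drawn layer, so they display as Neutral
```

The point is just that any viseme without a matching mouth layer silently becomes Neutral, which looks exactly like "only the neutral layer is lip syncing".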
Hi Alan, none of the mouth visemes are working - the only way I can get any mouth movement is by turning up the Face behavior properties, and then it only affects the neutral layer.
I created the same layer pattern and sounds (visemes) with another character and it works fine.
HELP!!!!!
Also, I want to drag the entire character (puppet); however, the arms also have a dragging behavior. My question: can I drag the entire puppet if other body parts have their own dragging behavior? Can there be both specific dragging (a particular arm, leg, etc.) and overall dragging (the entire puppet)?
One more thing. I'm having an issue with the head moving naturally in the puppet I attached a photo of above. Neither the head nor the body has the crown activated, yet there are still issues.
I am pretty sure all draggers are relative to the scene. So if you drag the body, do that first and then drag the hands. It is a bit painful. (I would like relative draggers too - it would make recording easier - maybe we should create a feature request: add a new Dragger behavior with a property to make it relative to the parent layer...)
I use position X and Y recordings instead. Less convenient, but it works. Hey, maybe we should get a Twitch channel or something... or YouTube... to do little demos.
I will try positioning and different takes for dragging - good idea. Also, I'm down with doing YouTube videos as demos. I'm currently working on multiple projects - they are moving along, but I can only devote a few hours a day to them, so they are taking a substantial amount of time.
Hi Alan. New issue - I want to add a song to an animation, but I want the puppet to lip sync the lyrics. Is this an option in Character Animator? Would the other audio, i.e. guitar, drums, etc., make lip sync unfeasible?
Thank you for the guidance - I've come a long way.
Glad you are making progress!
A few ideas. You can add visemes manually, but that sounds like a drag. (It can be useful for making minor adjustments, however.)
If you don’t have the words and music separate, you can try it, but my guess is it will get confused. (You can, however, just delete all the visemes where only music is playing, so maybe it won’t be too bad...?)
So how about this option: record yourself singing (or even just saying) the words in tempo with the music (put some headphones on to listen to the real music while singing). Generate the visemes in Character Animator from this audio track. Then mute/delete that audio track and unmute the real music track for the final video generation. Character Animator won’t care that your voice is not the same as the final soundtrack.
And if you cannot hold a note as long as the real singer, just drag and edit those visemes in the timeline to line up correctly with the final track.
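If it helps to picture what that drag-and-edit step is doing, here is a rough Python sketch of the idea - all numbers are hypothetical, and in Character Animator you do this visually in the timeline, not with code:

```python
def stretch(visemes, factor):
    """Scale each (start, duration, name) viseme by a time factor,
    e.g. to make a short sung note match a longer note in the real track."""
    return [(round(start * factor, 2), round(dur * factor, 2), name)
            for start, dur, name in visemes]

# Hypothetical visemes from your own quick recording, in seconds:
recorded = [(0.0, 0.4, "Aa"), (0.4, 0.2, "M"), (0.6, 0.4, "Oh")]
# The real singer holds the phrase 1.5x longer:
print(stretch(recorded, 1.5))
# [(0.0, 0.6, 'Aa'), (0.6, 0.3, 'M'), (0.9, 0.6, 'Oh')]
```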
Oh, and feel free to raise new issues in the forums - it helps others find them and the solution. (And you can “mark as resolved” each topic individually.)