I'll take a shot at your question; I'm not 100% sure, but what I've gathered so far is this:
"Ah" is "A/E", as in "Acid" or "Erroll".
"Uh" is "A", as in "Albany" or "Under".
"Ee" is "ee", as in "England" or "Eagle".
"W-Oo" is "W/(j)U", as in "Woody" or "United".
"Oh" is just that.
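Just to restate the list above in one place, here is a rough lookup-table sketch of those vowel visemes (this is my own hypothetical summary in code, not an official Adobe mapping; the viseme names are Character Animator's and the example words come from this thread):

```python
# Hypothetical lookup: Character Animator vowel visemes -> rough sound hints
# and example words, as described in the thread (not an official Adobe mapping).
VOWEL_VISEMES = {
    "Ah":   {"sound": "A/E",    "examples": ["Acid", "Erroll"]},
    "Uh":   {"sound": "A",      "examples": ["Albany", "Under"]},
    "Ee":   {"sound": "ee",     "examples": ["England", "Eagle"]},
    "W-Oo": {"sound": "W/(j)U", "examples": ["Woody", "United"]},
    "Oh":   {"sound": "Oh",     "examples": []},  # "just that"
}

for viseme, info in VOWEL_VISEMES.items():
    print(f'{viseme}: sounds like "{info["sound"]}"')
```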
The thumbnail images give a good indication of what's meant by each viseme.
It also helps to talk to your puppet in the slow and extra-clear way "That. You. Would. Normally. Use. For. Deaf. People. Or. Tourists".
The consonants are pretty much themselves; you just want to keep in mind that "K" is handled by "S" and "Th" by "L". See here:
Improved lip sync and new/revised visemes
Lip sync accuracy has been improved again in this build. Also, the number of visemes has increased from 10 to 11, but with several changes:
- “R” added
- “Th” merged with the existing “L”
- “Oh–Uh” split into “Oh” and “Uh”
- “K” renamed as “S”
Existing “Th” and “K” artwork layers get the new tags, and “Oh–Uh” layers get the “Oh” tag.
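The tag migration described in that release note could be sketched as a small lookup (hypothetical code of my own, just to restate the renames; the tag names themselves come from the note above):

```python
# Hypothetical sketch of the viseme tag migration described in the release note:
# "Th" merges into the existing "L", "K" is renamed "S", existing "Oh-Uh"
# layers get the "Oh" tag, and "R" is brand new (no old tag maps to it).
OLD_TO_NEW_TAG = {
    "Th": "L",      # "Th" merged with the existing "L"
    "K": "S",       # "K" renamed as "S"
    "Oh-Uh": "Oh",  # "Oh-Uh" split; existing layers get "Oh"
}

def migrate_tag(old_tag: str) -> str:
    """Return the new viseme tag for an existing artwork layer."""
    return OLD_TO_NEW_TAG.get(old_tag, old_tag)

print(migrate_tag("K"))      # -> S
print(migrate_tag("Oh-Uh"))  # -> Oh
print(migrate_tag("Ee"))     # unchanged -> Ee
```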
First... you made me laugh when you mentioned talking slowly to tourists and the deaf... I thought it was only the English that did that! Believe me... Deaf people don't talk slow!
Thanks for your input... The visemes that Ch uses obviously represent several phonemes per viseme... and are cartoon-based, not meant for real speech. Often the viseme will work for the beginning of a word... so the mouth changes shape as you say the whole word, but I can see that it would be a big ask to get all the appropriate phonemes for every word spoken. The visemes work well, but I just wanted to get an understanding of how they are formed, so thanks again for your input and link!
glad you found it useful!
Sorry if I didn't put it too well, but I definitely didn't mean to say that deaf (or hard of hearing) people talk slow at all; only that when someone doesn't know sign language it can help to speak slower and more clearly to them, which also applies to tourists who don't know the local language too well…
It also helps to find out about the visemes and how they work. But no matter how much you practise, I think it is impossible to avoid the "jumps" or inappropriate combinations that you mention; they are due to the fact that Character Animator cannot possibly know what will come after this or that viseme or sound.
That's why I strongly believe that the Nutcracker Jaw behaviour will give the most harmonious results in the future, if Adobe manages to track the mouth more accurately than it does now.
Ideally, the software would have trackers for the upper lip and lower lip and then simply follow the movements of my mouth and jaw instead of trying to hear and analyze what I may be mumbling. Talking about mumbles: The Audio Lip Sync is relatively "immune" to variation in amplitude, although you can set mouth sensitivity up in the project panel, under "FACE"-->"Mouth Sensitivity".
Further, you can fine-tune whatever you have been recording by going to
TIMELINE-->Split Lip Sync into Visemes
Please call me Richard... now I'm getting to know you!
I wasn't offended by your comments... believe me it takes a lot to do that and you put it just right!
Speaking clearly is very important! I wasn't born deaf; I've only gone deaf in the last 10 years, and there is a big difference between people born deaf and those that go deaf... I can't lip-read a Deaf person... they lack intonation. Strange, I know!
Link to Timeline Sync... very good.... I like that touch!
It does look as though the trackers do follow the lips and mouth shape and do this well, but it just doesn't seem to have been matched up to the visemes!? Shame. A lot of mouth shapes are made without words being spoken... another good thing deaf people can do... say a lot with a look, an expression and a mouth shape. I know you can add a shape as an image and set a trigger to it, but this has got to be limited... wouldn't you think!?
Just something completely different... have you noticed how everything at Adobe slows when the shop opens at 9 over there!
Thank you, and you can call me "Stefan", although "Wombat" is also fine with me; I totally love that animal, although I know it from YouTube only… :-)
Cool. It's just: English is not my first language, so sometimes I'm not sure about what I'm creating; on the other hand, it's good enough for everyone to think I know exactly what I'm saying. Well, I do, mostly, but sometimes my English is not completely up to it…
It does look as though the trackers do follow the lips and mouth shape and do this well, but it just doesn't seem to have been matched up to the visemes!? Shame.
Do you mean the combination of Nutcracker Jaw Tracking and Audio Lip Sync? I play around a lot with these parameters:
- The Nutcracker Jaw sensitivity (normally under 10% for this kind of task)
- The Jaw handles (you can add the "Jaw" tag to any layer that you want, so you can create more than one "Jaw Handle" that will be affected by the Nutcracker behaviour).
- The Jaw artwork graphic(s) themselves.
- The basic arrangement.*)
*) For example, I think it should be possible to create a Nutcracker Jaw that includes the lower lip plus an upper lip that uses the Audio Lip Sync, but I haven't yet figured out a way to do that with satisfying results.
No, I haven't, but another different thing just came to my mind: Have you tried painting with Adobe Illustrator?
A lot of mouth shapes are made without words being spoken... another good thing deaf people can do... say a lot with a look, an expression and a mouth shape. I know you can add a shape as an image and set a trigger to it, but this has got to be limited... wouldn't you think!?
I think this is very important for a good puppet: having as many facial-expression key triggers as possible. Check how "Wilk" is drawn and rigged, as a starting point.
I am not sure what limits you mean; as far as I can see, what you can include in the form of shapes and triggers is only limited by time and imagination…
It's better to be able to control things intuitively with the face, though.
Example: I've found "odd calibration" of the camera extremely useful for different expressions.
(This example works best for eyebrows that are "V" shaped.)
Say you want your character to look angry for a rant, but you don't want to create extra eyelids and brows to be key-triggered separately, and you don't want your forehead to cramp after a while. What you could do is:
- RAISE your eyebrows and then
- SET RESTPOSE.
Your puppet will frown even though you don't have to. If you do frown in addition, the puppet will look even angrier.
For a surprised monologue you can do the same thing with the opposite calibration.
(Quite a simple and banal trick if you ask me, but I only found out about this recently, so you might find it helpful…)
When I talked about limits... I was just referring to the keys on the keyboard. The more you use, the more you have to remember and try to use... if you know what I mean!?
- RAISE your eyebrows and then
- SET RESTPOSE.
Great tip and such a simple idea.... it's these that can make the difference!
Now that's a good idea... I bet you could get a cheap plug in keyboard and rip the keys out to modify your own!?
Will certainly give this a go!
Thanks Monika... watched the Simpsons Live and it is very good.. shows the potential of Ch!
@ Limits: I see! Yes, sure. It can get confusing quickly. I think the English keyboard shortcut for showing all of the keyboard triggers you have installed in a puppet is CMD/CTRL + /.
Can't find an overview of the keyboard shortcuts, though… Maybe there isn't one, yet…?
@ odd calibration: Glad you like it. : )
@Simpsons: Thanks for posting this!
So it works. Great!
Yeah, a friend sent me the article the other day - very interesting! Thanks.