We are constantly working on making lip sync better for everyone, regardless of nationality or language. If there are particular sounds that don't seem to work well for you, please let us know here. Or even better, upload a short unlisted video to YouTube and show us how it messes up. And maybe try different example puppets to see if the mouths feel better or worse for you... sometimes which works best is subjective.
You also have the option to edit your lip sync after recording, as explained about 3 minutes into this video: Recording & Editing with Takes (Adobe Character Animator Tutorial). This will be even easier in future releases, we're working on it.
Hope that helps a little...
Thanks for the reply, I will see if I can put together a YouTube video to show the current results.
One of the most obvious problems is that it quickly reverts to the neutral pose between other sounds.
I will check that video out for now, thanks again.
I am having the same sort of lip syncing problem.
For example: no matter how loudly and clearly I talk, whenever I make the W-Oo sound or the Oh sound, it picks up L. My puppet shows his tongue and everything. I have to constantly edit the visemes after every performance.
If possible, can you post here or DM me a link to an audio file that shows off your issue? Like, say a bunch of W-Oo or Oh words that seem to be incorrectly showing up as L sounds? That would help us tremendously and possibly fix the issue for future releases.
I've found that, for example, the English pronunciation of "Oh" as in "only" gets picked up way better than the German or Spanish ones. I assume that's because the "Oh" sound in "only" is a rather easy-to-hear diphthong, while both German and Spanish "oh"s are really close to a-class vowels.
As I have mentioned before (but not yet made a video of, sorry), there is a similar focus on English with the letter/viseme "R". While "r" as in "further" or "corpus" gets picked up just fine, the "rolling" "r" that Hispanic speakers use so fondly, as in "Tarrrrrrragona", seems to find no customers within Ch. I don't think the French and German "R"s get picked up well, either. They are also guttural, like the US/UK "R"s, but are produced higher in the mouth; they sound like the "H" that Slavic people use when trying to speak English, as when they say "Chralftime" instead of "halftime". Basically, the German and French "R" are "H"s with more pressure and power, produced in the same area of the mouth.
This is all not very linguistic, I know, but I still hope this helps.
…and making Ch lip-syncing usable for any language seems realistic only if Ch provides a way to influence the actual "picking-up system". Having spent all my life pondering different languages and ways of pronunciation (I was raised bilingual), I would ideally be able to record sounds and then tell Ch what viseme to use for them.
While the lip sync keeps impressing me even after so many months of working with it, I also have the same kinds of problems mentioned throughout this thread, and for me, too, the mistakes are recurrent.
Yes, if you could send us an audio file with these recurring issues / sounds, that would be helpful!
Just finally did the recording. PM'd you the whole project; hope that helps! Thanks for making the world a better animated place.
Hey, sorry for responding late; I wanna show you some of these files so you can see some of the issues I'm having with lip sync.
Can I share a Dropbox folder with you? I wanna send you an archive folder that has the audio, screenshots of my unedited visemes, a video of how it looks together, and the .puppet file.
Sure, that would be great.
I’m Dutch and I'm also experiencing problems with the O and Woo sounds translating into an L sound or something.
It would be great if there were a tool inside Ch that could learn how each user wants the visemes to appear and, when they're wrong, let us correct them. That way every user could have their own unique and correct lip sync.
Agreed, that would be a nice feature.
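Just to make the suggestion above concrete, here is a purely hypothetical sketch (in Python, not anything Ch actually implements) of how such a feature could work: the app logs each manual viseme correction, and once a detected viseme has been fixed the same way often enough, it applies that correction automatically. All names here are made up for illustration.

```python
from collections import Counter, defaultdict

# For each detected viseme, count how the user has manually corrected it.
corrections = defaultdict(Counter)  # detected viseme -> Counter of user fixes

def record_correction(detected, corrected):
    """Log one manual edit the user made in the timeline."""
    corrections[detected][corrected] += 1

def auto_correct(detected, min_votes=3):
    """Return the user's preferred viseme once it has enough votes,
    otherwise keep what the analyzer detected."""
    if corrections[detected]:
        best, votes = corrections[detected].most_common(1)[0]
        if votes >= min_votes:
            return best
    return detected

# e.g. a user repeatedly fixes a misdetected "L" to "W-Oo":
for _ in range(3):
    record_correction("L", "W-Oo")
print(auto_correct("L"))   # now auto-corrected to "W-Oo"
print(auto_correct("Ah"))  # unchanged: no corrections recorded for "Ah"
```

The vote threshold is just one possible guard against a single accidental edit retraining the mapping.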
Is there a way I can add additional mouth shapes to edit after the lip sync takes effect? For instance, I am saying "A" in a certain part, but my only options are "Ah", "Uh", "Oo", and "Oh".
W-Oo is definitely buggy. The shape being captured is obviously tighter and smaller, but it does not recognise it (sad).
Oh, and I tried to post a JPEG and PNG exported from Adobe Photoshop itself, only to receive an error ... guess what the error was ... "File Type Forbidden."
Not sure what's going on with the forum and the screenshot... but I did not quite understand your comment about "W-Oo" being buggy.
The way I normally set up a mouth for other expressions is:

- Mouth Group
  - Happy
  - Grin
  - Angry
  - Mouth
    - Neutral
    - Smile
    - (other standard visemes)

That is, I have "Mouth Group" listing all the non-standard mouth positions, followed by a "Mouth" group under which go the standard visemes that lip sync uses. I then set up two swap sets: one for the standard visemes ('Smile', 'Neutral', plus any other standard visemes I want to trigger) with 'Neutral' as the default trigger in the swap set, and another swap set at the Mouth Group level ("Happy", "Grin", "Angry", etc., plus "Mouth" as the default in the swap set). So by default it uses visemes, but if I apply the "Angry" trigger, that hides the "Mouth" group and displays the "Angry" layer instead. This has worked pretty well for me once I set it up this way.

Putting non-standard visemes in the "Mouth" group did not work so well: I had the standard layer being displayed at the same time as the non-standard layer. Hiding the whole "Mouth" group got around the problem.
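The two-level swap-set logic described above can be sketched as a tiny Python model. To be clear, this is not Character Animator's actual data model; the dictionaries and the `visible_layer` function are invented purely to illustrate how the expression-level swap set overrides the whole viseme group.

```python
# Top-level swap set: the viseme group "Mouth" is the default layer.
MOUTH_GROUP = {"default": "Mouth", "layers": ["Happy", "Grin", "Angry", "Mouth"]}

# Inner swap set: standard visemes, with "Neutral" as the default.
MOUTH = {"default": "Neutral", "layers": ["Neutral", "Smile", "Ah", "Oh", "W-Oo"]}

def visible_layer(expression_trigger=None, viseme=None):
    """Return the single mouth layer shown for the current triggers."""
    # An expression trigger (e.g. "Angry") hides the whole "Mouth" group.
    if expression_trigger in MOUTH_GROUP["layers"] and expression_trigger != "Mouth":
        return expression_trigger
    # Otherwise lip sync drives the inner viseme swap set.
    if viseme in MOUTH["layers"]:
        return viseme
    return MOUTH["default"]

print(visible_layer())                                         # Neutral
print(visible_layer(viseme="Oh"))                              # Oh
print(visible_layer(expression_trigger="Angry", viseme="Oh"))  # Angry wins
```

The key point the sketch captures is that the expression trigger is checked first, so an active "Angry" suppresses whatever the lip sync engine would otherwise display.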
I'm presuming here you are trying to have it lip sync live. I'm a composer of children's songs. I have found that the lip sync works best when I just have the program sync to the recorded voice parts only. It's not perfect, but way better than if I include the instrumentation. Generally it records too many sounds, which I can get rid of. If I just concentrate on the long notes, it looks fairly good.
My suggestion as a workaround would be to make a decent digital recording of your voice (I use Logic Pro X) and have the program do it automatically. Import that as an audio file, then select it and select the scene you want. Then go up to the Timeline drop-down menu and select "Compute lip sync from scene audio". Good luck.