Old, but just the thing!

Guest
Nov 04, 2016

I'm just starting with Character Animator and I'm looking at everything online, as you do, just to get a feel of the program.

I saw the video below, which was made in 2012, and was really excited by it, as this is just what I'm looking for to make my dog, Finn, talk!

However, I'm not an After Effects user; I only subscribed to CC and AE for the CA app. So my question is: is this feature now part of CA, and if so, how do I use it? Where do I find information in the manual or tutorials?

Adobe Employee
Nov 04, 2016

Automatic lip sync is available in Character Animator via its Lip Sync behavior. More info here:

Adobe Character Animator Help | Behaviors

You can find video tutorials at http://www.adobe.com/go/chtutorials or by clicking the Video Tutorials link in the Welcome panel within the application.

Guest
Nov 04, 2016

Hi Jeff and thanks for the prompt reply!

I can see the lip sync behaviors, but is it the same method as the 2012 video shows? Drawing a line along the line of the lips?

I'm new to CA, so I've got some way to go, but I'll stick with it.

Adobe Employee
Nov 04, 2016

No, that video is of a third-party script for After Effects. It's not related to Character Animator.

In Character Animator, you draw layers representing the visemes in Photoshop or Illustrator. Then you either speak into the microphone in Character Animator or import an audio file (MP3, WAV, AIFF) and use the Compute Lip Sync from Scene Audio command, and your puppet character's mouth will show the matching viseme layer based on the audio.

Hope that helps a little bit.
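
To make the idea concrete, here is a rough Python sketch of what the Lip Sync behavior does conceptually: each slice of the audio gets a viseme label, and the mouth shows the pre-drawn layer whose tag matches that label. This is not Adobe's code or file format; the layer file names and timings are invented for the example.

# Conceptual sketch only -- not Adobe's implementation.
# Idea: audio analysis yields a viseme label per time slice, and the puppet
# shows the pre-drawn mouth layer whose tag matches that label.

# Viseme names follow the mouth shapes discussed in this thread;
# the file names and timings are invented for the example.
MOUTH_LAYERS = {
    "Neutral": "mouth_neutral.png",
    "Ah": "mouth_ah.png",
    "F": "mouth_f.png",
    "M": "mouth_m.png",
    "W-Oo": "mouth_w_oo.png",
}

# Pretend output of the audio step: (start_sec, end_sec, viseme).
viseme_track = [
    (0.00, 0.12, "M"),
    (0.12, 0.30, "Ah"),
    (0.30, 0.45, "W-Oo"),
    (0.45, 0.60, "Neutral"),
]

def mouth_layer_at(t: float) -> str:
    """Return the mouth layer to show at time t (in seconds)."""
    for start, end, viseme in viseme_track:
        if start <= t < end:
            return MOUTH_LAYERS.get(viseme, MOUTH_LAYERS["Neutral"])
    return MOUTH_LAYERS["Neutral"]  # silence: fall back to the neutral mouth

for t in (0.05, 0.20, 0.50):
    print(f"{t:.2f}s -> {mouth_layer_at(t)}")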

Guest
Nov 04, 2016

Hi again! I didn't think it was!

I get the method you are talking about... this is a typical animation technique, I believe!?

I was hoping that, because you get markers placed on your face and CA picks these up and translates their movement, I would be able to click on a point in a picture and set it to the appropriate marker? If that makes sense!? If you check out CrazyTalk, they use this idea, but the product is so buggy!

However, it looks as though it's back to old-school animation and lots of layer creation?

Adobe Employee
Nov 04, 2016

The mouth is just selected from a set of layers, chosen either by the webcam (Neutral, Smile, and Surprised, based on the face-tracking dots in the Camera & Microphone panel) or by the microphone (visemes for Ah, F, W-Oo, etc.). It doesn't currently do any warping of a single mouth image. Do you prefer that capability vs. pre-drawn expressions/visemes?
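
As a rough illustration of that difference, here is a conceptual Python sketch, not how Character Animator is actually implemented: the result is always one of a fixed set of pre-drawn layers, picked either from the audio viseme or from the webcam expression, with nothing warping a single mouth image. The preference for the audio viseme while speech is detected, and the layer file names, are assumptions made for the example.

# Conceptual sketch only -- not Character Animator's implementation.
# The point: the result is always one of a fixed set of pre-drawn layers;
# nothing here warps or morphs a single mouth image.
from typing import Optional

# Expression layers the webcam could pick (names from the reply above);
# viseme layers the microphone could pick. File names are invented.
EXPRESSION_LAYERS = {
    "Neutral": "mouth_neutral.png",
    "Smile": "mouth_smile.png",
    "Surprised": "mouth_surprised.png",
}
VISEME_LAYERS = {
    "Ah": "viseme_ah.png",
    "F": "viseme_f.png",
    "W-Oo": "viseme_w_oo.png",
}

def pick_mouth_layer(audio_viseme: Optional[str], camera_expression: str) -> str:
    """Pick a pre-drawn layer: use the audio viseme while speech is detected
    (an assumption made for this sketch), otherwise the webcam expression."""
    if audio_viseme in VISEME_LAYERS:
        return VISEME_LAYERS[audio_viseme]
    return EXPRESSION_LAYERS.get(camera_expression, EXPRESSION_LAYERS["Neutral"])

print(pick_mouth_layer("Ah", "Smile"))   # speaking -> viseme_ah.png
print(pick_mouth_layer(None, "Smile"))   # silent   -> mouth_smile.png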

Guest
Nov 04, 2016

I could see how CA was working, but I suppose I was thinking that, because you can tag groups of the tracking dots, you might have been able to select individual ones, and then it would be a fairly simple matter of warping/morphing a single image. That is obviously what is shown in the 2012 video, and it's what caught my interest.

There is a lot of work involved, and a fair bit of knowledge and skill needed, in creating each layer part corresponding to the phonemes, expressions and visemes. You not only have to have some knowledge of the applications themselves, you also have to have the artistic skill to create those individual shapes in a way that works seamlessly with the underlying image. When you are using cartoon images this isn't as great a problem, but when it's a photo, or a more complicated image!?

The 2012 concept, which has been dropped, would certainly be more usable by many more people. Also, the idea of a standalone application would certainly be more feasible? If the programming is there, it wouldn't be much of a leap to incorporate it into the tag palette... would it!? The phoneme and expression shapes could be created using the tracking dots and tagged, so that when your mouth moved the image would change accordingly... much as it does now!

Adobe Employee
Nov 04, 2016

Our initial focus has been to drive mouth animation of pre-drawn expressions, but we can evaluate how to adapt the technology for users who might not have separate mouth layers set up already. Thanks for that feedback.

Guest
Nov 04, 2016

I would be happy to test any adaptations you make...
