2 Sound Questions

Guest
Nov 14, 2016

1. I'm profoundly deaf, so when I talk my voice is either too low, too loud, or indistinguishable. This isn't a great problem for me, only for others and for Character Animator. It seems that although there are obvious tracking dots around the mouth, they don't seem to interact with the puppet unless there is sound. I've checked the Behaviors, but I think I must be missing something. Can someone please help? I want to make the visemes appear from my face/mouth movements.

2. To hear, I have the output plugged directly into my implant. Sounds strange, I know, but it works; I feel sound more than hear it. Anyway, back to my second question: when I go to record a second piece, like just the draggable hand movements, I don't hear the previous recording, I only see the actions. This is quite difficult when you are trying to match words! How do I get to hear the previous recording while recording the second track on top? I hope that makes sense.

Community Expert
Nov 14, 2016

Unfortunately I can't answer your second question, only the first one.

Character Animator animates the mouth based on what you're saying, so it analyzes the sound. Only 3 default mouth shapes are applied based on the camera: surprised, neutral, and smile.

What you can do is assign keystrokes to the phonemes (this is what the Simpsons people did, if I understood correctly) and then make your puppet talk by pressing the keystrokes.
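
As a rough sketch of that idea (not something Character Animator exposes as code; it's configured in the app's panels, and the key assignments and viseme names below are illustrative assumptions): each mouth shape gets its own keystroke, and holding a key swaps in that mouth artwork.

```python
# Conceptual sketch only: in Character Animator this is set up in the app's
# panels, not in code; the key assignments and viseme names below are
# illustrative assumptions, not the application's actual identifiers.

KEY_TO_VISEME = {
    "a": "Aa",  # open mouth, as in "father"
    "o": "Oh",  # rounded mouth, as in "go"
    "e": "Ee",  # wide mouth, as in "see"
    "m": "M",   # closed lips, for m/b/p sounds
    "f": "F",   # teeth on lower lip, for f/v sounds
    "s": "S",   # narrow mouth, for s/z sounds
}

def mouth_for_keys(held_keys):
    """Return the mouth shape to display for the keys currently held down."""
    for key in held_keys:
        if key in KEY_TO_VISEME:
            return KEY_TO_VISEME[key]
    return "Neutral"  # resting mouth when no viseme key is held

if __name__ == "__main__":
    # Mouthing the word "safe" by hand: S -> Aa -> F -> Ee, then back to rest.
    for keys in (["s"], ["a"], ["f"], ["e"], []):
        print(keys, "->", mouth_for_keys(keys))
```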

You can also record audio (or have someone record the talking for you) and then load it into Character Animator and use it as the sound source.

Guest
Nov 14, 2016

Hello Monika and thank you for your prompt reply!

I understand the method of changing the mouth images by using triggers, but it was the use of the tracking that I was more interested in. A lot of mouth shapes are made without words. I know you could add these as extra images, but that adds another step to the process: assigning triggers.

When you zoom into the Camera & Microphone panel, you can clearly see all the tracking dots around the mouth (made up of two lines) and how well they track its movement, but it seems that only 3 variables are used: whether the two lines are together, whether they are apart, and whether the end tracking dots are wide. It looks as though a lot more could be done with these... hopefully in the future!

I used the term visemes because it refers to the mouth shapes that are made, and one viseme/shape can serve for several phonemes, which is how Character Animator uses them.

Thank you again

Adobe Employee
Nov 14, 2016

1. The louder and clearer the mic signal is without clipping, the better. Note that in Lip Sync there is an option for "Keyboard Input" - if this is on, then pressing the first letter of each of those mouth sounds will trigger it. For extra expressions like frowns, yells, etc., we do recommend making these their own custom key-triggered layers. And Monika is correct about the current 3 webcam-controlled behaviors. We may add more in the future, but we have found that many users prefer the reliability of key triggers (press F to frown, as opposed to frowning at the camera and not seeing it trigger if the lighting/angle/etc. is off).

2. You should be able to hear previous recordings, so it's possible the input/output settings are not set up correctly. Check Character Animator > Preferences to edit the audio preferences. When I record audio I listen via headphones so the playback won't get picked up by the microphone and add extra noise.

Guest
Nov 15, 2016

Hi Dave and thanks for the full reply!

1. Live audio is a bit of a problem, but the suggestions you and others have given have helped.

I was fascinated that the imported audio file worked so well, and I was left wondering how the lips are synced from an audio signal. I suppose that comes down to the algorithm: in simple terms, a particular sound triggers a particular mouth shape? Is the same method used for imported audio as for live audio?

I suppose I was thinking/hoping that there was an algorithm using the tracking dots around the mouth, so that when a particular shape was formed a corresponding viseme/mouth shape would be triggered. Maybe an idea for the future?

2. You were spot on! I have to plug myself in to hear the sound and use an external Focusrite audio interface, but the setting had defaulted to Built-in Output. I changed the settings and it's working great now!

Many Thanks

Adobe Employee
Nov 14, 2016

You can also adjust the "Mouth Strength" parameter in the Face behavior to use the shape of your mouth on camera to change the puppet's mouth size. By default, it's set to 0 (off).

Guest
Nov 15, 2016

Hello Victoria... and thank you for taking the time to reply!

I opened up Seth, one of Dave's characters, to try out your suggestion, but although I tried this via both the Puppet and the Scene, I didn't notice any difference, even when I went to 500%. I must be missing something!

Adobe Employee
Nov 15, 2016

Yes, CH listens for 60+ sounds and translates them into the 11 different visemes, or mouth shapes. In our tests this got significantly better results than trying to track the mouth shape. Yes, live and imported audio are treated the same.
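
As a rough illustration of that many-to-one grouping (the real detection and mapping are internal to Character Animator; the phoneme and viseme names below are illustrative assumptions), many detected sounds collapse onto a much smaller set of drawn mouth shapes:

```python
# Illustrative sketch only: the actual sound analysis happens inside
# Character Animator. Phoneme and viseme names here are assumptions.

PHONEME_TO_VISEME = {
    "p": "M", "b": "M", "m": "M",        # lips pressed together
    "f": "F", "v": "F",                  # teeth on lower lip
    "s": "S", "z": "S", "t": "S",        # narrow mouth
    "aa": "Aa", "ae": "Aa", "ah": "Aa",  # open-mouth vowels
    "ow": "Oh", "uw": "W-Oo",            # rounded-mouth vowels
    "iy": "Ee", "ih": "Ee",              # wide-mouth vowels
}

def visemes_for(phonemes):
    """Collapse a detected phoneme sequence into the viseme sequence to draw."""
    return [PHONEME_TO_VISEME.get(p, "Neutral") for p in phonemes]

if __name__ == "__main__":
    # e.g. the word "move": m-uw-v -> ['M', 'W-Oo', 'F']
    print(visemes_for(["m", "uw", "v"]))
```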

Seth definitely goes nuts for me if you crank mouth strength up to 500%. And the default Photoshop/Illustrator blue guy, Stannie, should do it as well (you'll see different results depending on whether the mouth is independent or not). Maybe you have an older version of the character? The latest is here: Adobe Creative Cloud

Guest
Nov 15, 2016

Hi Dave...

I've only recently joined CC, so I'm hoping I've got the latest version...  Beta 5 (x141)?

Are the Neutral, Smile, and Surprised tags driven by the camera?

Is there any plan to improve the tracking dots system? They do work well; it's amazing how they pick up the face. I found your idea of putting a light in front helps a lot. I got a cheap LED desk lamp and it works really well.

I don't know if I thanked you for sending the link to your great files... I'm just playing with them at the moment... stripping them down and building them back up... I'll get there in the end!

I guess you're just about starting work! Amazing that around 4 o'clock here the system starts slowing down... must be you lot coming online!

I'm just about to put my feet up! So have a good one!

Adobe Employee
Nov 15, 2016

Yes, the 3 you listed are the only camera-controlled ones. Those only appear if there's silence.

Currently I think the tracking dots work pretty well most of the time, but if there's something that feels off, let us know.

Good luck!
