2 Replies Latest reply on Jun 16, 2016 6:06 AM by SanFranJohn

    Fine tuning Visemes

    SanFranJohn Level 1

  I have my head puppet rigged. It actually works! I am having a problem with the visemes, however. In only a few instances do the visemes seem to 'recognize' what sound is being spoken. "Ee" might be interpreted as "Ah," and so forth. I did my puppet in Illustrator and, instead of drawing mouth shapes, I drew different colored geometric shapes so that I could tell whether the viseme functions actually worked. Some sounds they recognize; others they interpret incorrectly. I am native born and don't have a regional accent. I did try speaking with accents, but it didn't make a difference. My microphone is working and the green bars are very active. Anyway, is there any way to adjust the visemes as one can with eye movements, head strength, etc.? Thanking anyone in advance. This is an incredible program!!

        • 1. Re: Fine tuning Visemes
          Dan Ramirez Adobe Employee

          http://adobe.ly/1UdNa05

           

          Here's a test file to help you learn how we trigger the phonemes. For instance, the way I say "ah" actually triggers "uh"; "ah" is triggered by making the short "e" and "a" sounds.

           

          There is no way to adjust anything with the lip sync, but I'm very interested in allowing the user a little more control. I think a setting for the noise floor, so the mouth doesn't chatter with background noise, and another to control the "swappiness" would be a great place to start.
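
          To give a sense of what a noise floor setting would do, here is a rough sketch in Python. It is only an illustration of the idea, not how Character Animator's lip sync actually works, and the trigger_viseme callback is hypothetical: a detected phoneme is only allowed to swap the mouth shape when the input level rises above a chosen threshold, so quiet background noise doesn't cause chatter.

              # Minimal sketch of a noise-floor gate, assuming a stream of audio
              # amplitude values in the range 0.0-1.0 and a hypothetical
              # trigger_viseme() callback. Illustration only.

              NOISE_FLOOR = 0.05  # levels below this are treated as background noise

              def gated_trigger(amplitude, phoneme, trigger_viseme):
                  """Pass a detected phoneme through only when the signal is above the floor."""
                  if amplitude < NOISE_FLOOR:
                      return  # too quiet: hold the current mouth shape instead of chattering
                  trigger_viseme(phoneme)

          A "swappiness" control could work the same way, except the threshold would govern how eagerly the mouth switches between shapes rather than whether it moves at all.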

          • 2. Re: Fine tuning Visemes
            SanFranJohn Level 1

            Ummm, that was very helpful and I want to thank you. It doesn't solve the central issue of whether one can be sure the visemes are actually interpreting the spoken phonemes correctly. Even a simple sound triggers a series of shape changes. I suppose that, in the long run, it doesn't make too much difference. The mouth moves a lot anyway and it is, after all, a cartoon. I think that if people see any movement at all they'll be impressed. The face tracking feature is incredible! I hope they extend it to more facial expressions. Maybe, in the future, we'll be able to define expressions. Thank you again.