Why is lip sync so slow to process?

Enthusiast, Mar 10, 2017

Hello,

I have Ch installed on my 6-core, 4.5 GHz, 64 GB RAM PC, and it takes about 10 minutes to compute 3 minutes of lip sync (Compute Lip Sync from Scene Audio). If I look at processor usage, Ch is only using around 1% of available processing power. I've also checked my GPU usage, in case Ch is offloading the work to my CUDA graphics cards, but no, it isn't doing that. The source audio is on a fast SSD, so that's not the source of the slowdown.

So what's going on? How can code run so incredibly slowly, and, more importantly, is it likely to be improved? I could understand Ch using one processor at 100% if it were old code not written to take advantage of multi-processor systems, but that's not the case.

Adobe Employee, Mar 10, 2017

Yeah, that seems strange. Computing lip sync can take some time, but I've never seen it take that long.

Is the SSD an external drive? If so, do you see any improvement by switching the location to an internal one?

What audio format are you using, and what are its specs (sample rate, bit depth, etc.)?

If you try to compute for a super simple puppet like the default blue guy from the Welcome panel, does it go any faster?
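If you're not sure of the clip's specs, one quick way to read them for an uncompressed WAV is Python's built-in `wave` module. This is just a sketch: the file name is a stand-in for your real scene-audio file, and the block first writes a one-second silent test clip so it runs end to end (compressed formats like MP3 would need a tool such as ffprobe instead).

```python
import wave

# Write a one-second silent 48 kHz / 16-bit mono WAV as a stand-in clip.
# In practice you'd skip this step and open your actual scene-audio file.
with wave.open("scene_audio.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)        # 2 bytes per sample = 16-bit
    w.setframerate(48000)
    w.writeframes(b"\x00\x00" * 48000)

# Read the specs back -- these are the numbers worth posting here.
with wave.open("scene_audio.wav", "rb") as w:
    print("channels:   ", w.getnchannels())                       # 1
    print("sample rate:", w.getframerate(), "Hz")                 # 48000 Hz
    print("bit depth:  ", w.getsampwidth() * 8, "bits")           # 16 bits
    print("duration:   ", w.getnframes() / w.getframerate(), "s") # 1.0 s
```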

Adobe Employee, Mar 10, 2017

That operation is about 8-12x faster in the internal builds than in Beta 5. The original usage pattern was just keeping up with audio as it was recorded, but a small adjustment opened it up to run much more quickly on pre-recorded audio. That particular code doesn't actively exploit parallelism, but in recent builds it at least keeps one core saturated to go as fast as (serially) possible, which in your case probably means it'd use closer to 16% of available processing on your 6-core system.
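As a back-of-the-envelope check on that ~16% figure, one fully saturated core on a 6-core machine works out as follows (note that Task Manager counts logical cores, so with hyperthreading enabled the same saturated core would show roughly half this share):

```python
# Share of total CPU reported when one core is fully saturated.
physical_cores = 6                      # the poster's 6-core CPU
share = 100 / physical_cores
print(f"{share:.1f}% of total capacity")  # 16.7%

# With hyperthreading, utilization is reported over logical cores,
# so the same single saturated core shows up as about half that.
logical_cores = physical_cores * 2
print(f"{100 / logical_cores:.1f}% with hyperthreading")  # 8.3%
```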

I'm still surprised it was taking so much longer than the duration of the clip, since lip sync can generally keep pace as audio is recorded. Without looking at your audio file for anything particular to it, I'm not sure how to explain that part. Is there anything unusual about the clip (sample rate, number of channels, etc.)? Do you have other clips that finish in a time closer to the clip's duration, or do all Compute Lip Sync operations behave like this on your system?

If this were on a Mac and you'd switched another app to the front while waiting, App Nap could slow the signaling of the processing loop and stall it out, but it sounds like you're on a Windows system, and I'm not aware of anything as heavy-handed as App Nap on Windows.

Thanks,

Dan Tull
