Hey folks -
I saw this thread about scaling a 1080 sequence down to 720 and thought I'd give it a try. I usually do the scaling when I export my clips to MPEG4, so I wanted to see if this was any different. I didn't follow the instructions completely on my first pass because I forgot to right-click the sequence and choose Scale to Frame Size, so my exported video ended up cropped. Well, no problem, easy mistake to fix. The second time through, I remembered the Scale step and all was good.
However, what I'm curious about is the CPU usage during both exports. I sort of expected the scaled export to use a lot more CPU than the cropped one. But that's not what happened, and I'm interested to know why. The video itself is about 8.5 minutes, the media is AVCHD, and the output is MPEG4. The cropped version (i.e., the mistake) took Media Encoder 5:49 to complete; the scaled version took 7:08.
The CPU usage during the cropped export:
And the CPU usage during the scaled export:
Clearly, when AME and PPro were exporting my mistake, they were able to make better use of the cores on my Mac and rip through it over a minute faster than the scaled version. I'd have expected the cropped one to take less time, but I'd also have expected the scaled version to eat more CPU cycles. I'm guessing there's some interesting interaction with the NVIDIA card going on here, but I'm not certain.
Someone will ask me about the hardware, even though I don't actually think it's that relevant in this case:
The CPU usage is lower because scaling is done on the GPU if you have a card that can be used for CUDA processing (which you do). But even with the work being done on the GPU, scaling is much, much, much more computationally difficult than cropping, so it's not surprising to me that the scaling took longer than the cropping.
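To see why scaling is so much more expensive than cropping, here's a minimal sketch in Python. It is not Premiere's or CUDA's actual pipeline, and the tiny frame sizes are made up for illustration: cropping just copies a subregion of pixels (one read per output pixel), while a bilinear downscale reads four source pixels and does several multiplies for every output pixel.

```python
# Sketch only (not Premiere's pipeline): crop copies pixels,
# bilinear scaling computes each output pixel from four source pixels.

def crop(frame, width, height):
    """Copy the top-left width x height region: one read/write per output pixel."""
    return [row[:width] for row in frame[:height]]

def scale_bilinear(frame, out_w, out_h):
    """Bilinear downscale: four reads plus arithmetic per output pixel."""
    src_h, src_w = len(frame), len(frame[0])
    out = []
    for y in range(out_h):
        fy = y * (src_h - 1) / (out_h - 1)   # source row position
        y0 = int(fy)
        y1 = min(y0 + 1, src_h - 1)
        wy = fy - y0                          # vertical blend weight
        row = []
        for x in range(out_w):
            fx = x * (src_w - 1) / (out_w - 1)  # source column position
            x0 = int(fx)
            x1 = min(x0 + 1, src_w - 1)
            wx = fx - x0                         # horizontal blend weight
            top = frame[y0][x0] * (1 - wx) + frame[y0][x1] * wx
            bot = frame[y1][x0] * (1 - wx) + frame[y1][x1] * wx
            row.append(top * (1 - wy) + bot * wy)
        out.append(row)
    return out

# Tiny grayscale "frame" standing in for a 1920x1080 -> 1280x720 job:
frame = [[(x + y) % 256 for x in range(8)] for y in range(6)]
cropped = crop(frame, 4, 3)            # just copies pixels
scaled = scale_bilinear(frame, 4, 3)   # interpolates every output pixel
```

Even on the GPU, that per-pixel interpolation work (and the filtering a real resampler does on top of it) is what stretches the scaled export's render time past the cropped one's.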
There are details about scaling with CUDA here.