Long and short: No. That's simply not how it works.
Correction: The MB is a Rampage III Extreme - not Rampage II
So you're saying that AE CC 17 does make use of a single graphics card's GPU/CUDA cores (which it does), but the GPU/CUDA cores of an identical, additional graphics card are ignored?
Don't obsess over GPUs for AE. A single card is barely used in the first place, so getting a second GPU card is just a waste of good money.
Can you use two cards for other applications? You can? That's great! That's what you should get it for. Not for AE.
Hi Dave, that was true for older versions of AE that relied on CPU cores, but beginning with CC 2015 (or 2016) AE moved to relying on the GPU for most operations; that includes Premiere and encoding in AME.
The cores are not ignored, but performance won't scale linearly. AE's infrastructure isn't there yet, and even in AME and Premiere you can only do so many parallel operations regardless of how much hardware you have. Most of the time your second card will have nothing to do, and since things like OpenGL drawing functions really do use only a single card, you may run into issues with certain plug-ins or in other programs. Just leave it as it is.
These are excellent videos for those unfamiliar with GPUs and how they affect the performance of After Effects. The videos clearly explain how newer GPUs from nVidia and AMD vastly outperform CPUs in many operations within the latest versions of AE, Premiere and AME, including faster previews and rendering:
At 00:02:35 in the Chromacide video you can see in the AE Preferences menu under Previews, Fast Previews, and GPU Information that AE recognizes there are dual GeForce GTX 960 cards installed in the system. In addition, these cards are each fitted with 2GB of VRAM and AE indicates that a total of 4GB of usable video memory is available.
But does your workflow actually make use of, and is it tailored to, these GPU features? That's all it comes down to. I could easily show you how to build a (seemingly) simple project that nullifies all GPU acceleration by using a single feature that breaks the accelerated pipeline, just as I could impress people by crafting a (seemingly) complicated project that exploits 15+ years of AE knowledge to avoid breaking the hardware acceleration. See where this is going?

Exporting a video straight through AME with a simple keyer or basic color correction is a whole different thing from most "real" compositing tasks, so I'd be wary of any such tech demos. They are meant to impress and are tailored accordingly, while their relevance to your daily work may be zero. In any case, it's a typical "your mileage may vary" thing, but as far as AE goes, I maintain my position that it's simply not worth spending lots of money on beefy cards or multiple-card setups.
Yes. My workflow makes heavy use of GPU-accelerated features. And if you're using Windows, testing GPU acceleration is a simple matter with apps like CPUID that can show you which CPU and GPU cores are active in real time. (For example, if I disable GPU acceleration, my overclocked 6-core CPU with Hyper-Threading shows all 6 cores at 100% during certain operations; re-enabling GPU acceleration not only vastly increases rendering speeds but also drops the CPU core usage to nearly zero.)
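If you'd rather check this from a script than a GUI monitor, and assuming an Nvidia card with the `nvidia-smi` command-line tool installed (it ships with the driver), a minimal sketch like the one below can log per-GPU utilization while a render runs. The `parse_gpu_csv` helper is hypothetical, just for illustration; it is not part of any Adobe or Nvidia tool:

```python
import subprocess

def parse_gpu_csv(csv_text):
    """Parse output of:
    nvidia-smi --query-gpu=index,utilization.gpu,memory.used
               --format=csv,noheader,nounits
    into a list of per-GPU dicts."""
    gpus = []
    for line in csv_text.strip().splitlines():
        index, util, mem = [field.strip() for field in line.split(",")]
        gpus.append({"index": int(index),
                     "utilization_pct": int(util),
                     "memory_used_mib": int(mem)})
    return gpus

def query_gpus():
    """Run nvidia-smi (requires an Nvidia driver) and return per-GPU stats."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=index,utilization.gpu,memory.used",
         "--format=csv,noheader,nounits"],
        text=True)
    return parse_gpu_csv(out)

if __name__ == "__main__":
    # Run this while AE/AME is rendering to see whether the second card
    # is actually doing anything.
    for gpu in query_gpus():
        print(f"GPU {gpu['index']}: {gpu['utilization_pct']}% busy, "
              f"{gpu['memory_used_mib']} MiB in use")
```

Run it in a loop (or with `watch`) during a render: if the second card sits at 0% the whole time, that tells you more than any tech demo.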
FYI, dismissing facts, deriding "tech demos," and bragging about "15+ years of AE knowledge" isn't impressing anyone. In fact, it makes you sound like an arrogant, closed-minded troll. Now please, just go away.
FYI, It's Crady: I would like to thank Mylenium for his well-written, precisely formulated, and helpful input.
I hope he won't go away, and I suggest he not take your very rude and ignorant reply seriously.
And, by the way: I have tested extensively, and I could not find a nanosecond of performance improvement with dual vs. single GPU usage.
Hi It's Crady,
You apparently know the answer to your own question, and you obviously have your heart set on dual graphics cards and seem to be arguing in defense of getting them. So go ahead and get them. You don't need anyone's permission, and apparently you don't need anyone's advice either. So, what are you doing here?
I use dual Quadro 6000 cards (6GB GDDR each) in an HP Z800 workstation, but AE CC 2017 only uses 6GB max, as in the screenshot below:
But when doing a render export, AE uses ALL GPUs equally:
It really boosts rendering performance.
But when working on a 16-bit color depth comp with E3D's ray-traced engine and 4K textures, I did not feel any significant performance boost, so I work in Fast Preview mode with 1/4 settings.
In PPro: if you choose Render Queue to AME using CUDA, it's amazingly, super duper fast!
Hope this info helps.
- Don't use SLI for a dual-GPU setup for editing, comp, or grading.
- If you use multiple displays, run them all through the 1st GPU, including your secondary monitor preview.
- On a multi-display setup, if you put your workspace on the secondary monitor, set the multi-display setting to Compatibility Mode in the Nvidia settings to prevent sluggishness or lag.
- I leave my 2nd Quadro alone, without attaching any display.
- I have an AJA Kona 3G card for real-time monitoring preview.
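To double-check that all displays really are routed through the first card (as the tips above recommend), and again assuming Nvidia hardware with `nvidia-smi` available, one option is to query the `display_active` field. The `displays_per_gpu` helper below is just an illustrative sketch, not an official tool:

```python
import subprocess

def displays_per_gpu(csv_text):
    """Parse output of:
    nvidia-smi --query-gpu=index,display_active --format=csv,noheader
    display_active reads 'Enabled' when a display is attached to that GPU."""
    result = {}
    for line in csv_text.strip().splitlines():
        index, active = [field.strip() for field in line.split(",")]
        result[int(index)] = (active == "Enabled")
    return result

if __name__ == "__main__":
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=index,display_active",
         "--format=csv,noheader"],
        text=True)
    for idx, has_display in displays_per_gpu(out).items():
        print(f"GPU {idx}: display attached = {has_display}")
```

In a setup like the one described above, you'd want GPU 0 to report a display attached and GPU 1 to report none, leaving the second card free for compute.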