Why should that matter in any way? I mean, I work on a GTX 580 with 3GB and E3D runs just fine. Support for E3D has nothing to do with Adobe's official support or endorsement, anyway. You're being fussy over something that is essentially irrelevant.
I'm being fussy?
There are people returning their 970 and getting a full refund. So there is something going on. As a CC subscriber who pays ~$60 a month, I thought I could simply ask the experts(!) on the forum if they knew whether the VRAM issue would be noticeable in AE (or other AE-related plugins). Because you know, I paid $450 and would like to have it perform as it should.
"I work on a GTX 580 with 3GB and E3D runs just fine."
Translation: "I can work fine with my GTX 580, so why the hell would you bother getting a better GPU and expecting it to work at full power? If it just works, then don't worry about technical stuff, who cares if only 3.5 GB out of 4 GB of its VRAM works at the advertised speed??"
Anyway, thanks for the detailed response. I'll remember next time to call Adobe directly to ask my questions. Oh wait, they'll connect me to a dude in India who knows nothing about what he's talking about, reading from the scripts in front of him...
Way to go, Adobe.
Well, can you even measure the difference of those oh-so-"measly" 56 ROPs vs. the promised 64? Do you even know what "perform as it should" means when you don't have an actual card with the correct spec? Do you even know if plug-ins like E3D could actually use the extra juice? I mean, E3D could have built-in limits in the code, and OpenGL has its own limits in the first place... You are only seeing one side of the equation, and it doesn't make much sense from a technical point of view. We're not talking about games, where people get all whiny when it doesn't run at 60 FPS at 4K with all details on. We're talking about offline/deferred rendering here, where stuff doesn't need to be rendered in realtime and actually never is in the first place (think about multi-pass DOF or motion blur in E3D, or the various multi-sampling options), even more so in a program like AE that has extremely limited realtime capabilities to begin with. Sorry, but this is one of those "fictional quantum computing" discussions with no basis in the realities of how it's even relevant for everyday work.
Mylenium, it might be time for you to take something with the same last 3 letters as your screen name. Starting with 'Val'.
That's some deeply condescending contempt you have for this guy. Do you two know each other in real life?
Omid, what Mylenium fails to mention is that the GTX 580 is the best consumer video gaming card ever created for content creation software. So he has no problems: it's a very old card with full, mature driver support, from an era when everyone actually cared a little about these things, and it was purpose-built to be good at content creation before Nvidia decided to deliberately cripple the double-precision floating-point abilities of their consumer gaming cards.
Those days of great gaming cards being great for content creation are over.
It's all about fighting with Apple, now. Or fighting for the scraps left over. And Nvidia hasn't done so well in the fight with Apple, or in the fight for mobile relevance. They don't seem to ever be able to actually deliver on their promised mobile CPU/GPU combos, and the phone makers aren't flocking back to them until they have a couple of hits, after years of false starts in the field.
That aside, CUDA is brilliant. Just nobody wants to help Nvidia because they're Nvidia, and because of existing relationships they want to support more. So CUDA hasn't gotten the support it deserved, and Nvidia needlessly crippled the gaming cards that should be great for content creation because of this... and on and on it goes...
Adobe isn't in that fight, Adobe isn't fighting anyone. They have a complete monopoly over 2D content creation software. There's never been anything like this kind of monopoly in a software space; not even Windows has ever come this close to complete dominance of a software industry. Perhaps Microsoft Office in its heyday was similar. Maybe. But not really. Adobe also doesn't spend money on finding ways to help consumers get more out of their hardware. They're only interested in increasing revenue, and that's not part of it.
Features that look sexy are what drives more subscriptions, and more apps being added to the suite. Ever more apps, doing ever more, all with a little less reliability than last time, and ever-flagging performance.
Nvidia, ATI/AMD and everyone else is in the fight of their lives with Apple. Fortunately for all of them Apple keeps leaving large chunks of opportunity on the table by their own omissions and errors. But that's a whole other subject.
Apple should have put the 980M in the Retina iMac. Then it would be a decent proposition. But, no, they hamstrung it with inferior ATI/AMD tech because they're still slapping Nvidia over the head for a past perceived transgression, and because they're probably considering buying them if they get much weaker. Or buying AMD, as they have hired some of their best engineers over recent years.
The answer to your question is YES: if the card is underperforming, you are being affected, and you're due a refund.
And you should probably take a look at second hand 580s if you're at all interested in content creation. They're kind of special, in many many special ways.
Whether or not you're able to ascertain the difference doesn't matter, you are being affected. 'How could you possibly know if you've never had a fully functioning 970?' That seems to be Mylenium's point of view.
Which is kind of like saying "how could you know this milk is stale if you've never had fresh milk?" When the box says "FRESH MILK!" and is being sold as FRESH MILK!!!
It's not your problem that you're being lied to, it's theirs.
Kind of like Adobe saying that AE is ready for Yosemite... but for these caveats of "several frames" (actually an entire second or more of frames) before you can RAM Preview on it.
It works, yeah... sort of.
If you'd never previously used AE you might accept this "several frames", but everyone else is peeved.
"Several Frames" most can put up with. Several dozen frames is an entirely different issue.
Consider this, Mylenium has the highest "rating" of any person in these AE "forums". And this is as close to AE support as you're going to get.
Pay particular attention to Rick Gerard, he's genuinely knowledgeable AND considerate. Todd is the boss of AE, and has a similar attitude towards customers as Mylenium. They might also look similar.
'How could you possibly know if you've never had a fully functioning 970?' That seems to be Mylenium's point of view
That's certainly one part of the problem... Well, whatever. This thread has become pointless and I'm out...
>Well, can you even measure the difference of those oh-so-"measly" 56 ROPs vs. the promised 64? Do you even know what "perform as it should" means when you don't have an actual card with the correct spec? Do you even know if plug-ins like E3D could actually use the extra juice?
Yeah I don't know the answer to these questions, but what I know is that this is a forum in which people who don't know the answer to some questions can come and ask them. If I wanted to get replies from narcissist douchebags bragging about how they know the answer to everything and I don't know, I would go to other forums. The whole concept of this forum is to ask questions.
>This thread has become pointless and I'm out...
Nobody wanted you to participate in the first place.
Thank you so much for taking the time to reply to me.
>Consider this, Mylenium has the highest "rating" of any person in these AE "forums". And this is as close to AE support as you're going to get.
Yeah, this is quite sad. I've talked to their tech support on the phone and they had no idea what they were talking about. I'm sure it wouldn't be like this if Adobe had competition in this industry.
Anyway, I will see if I can swap the 970 for a 780 Ti. I will do more research. I don't really want to go with a second-hand 580.
Before you make that switch, consider reading up from well respected review sites about the issue.
The Maxwell architecture used in the 970 beats the 780 Ti's Kepler arch in terms of CUDA performance, even though the card turns out to have fewer ROPs and less L2 cache than advertised, and it uses a lot less power.
The full 4GB of VRAM is usable, but the last 0.5 GB segment is slower (ca. 28 GB/s vs. 192 GB/s). That's still decent bandwidth and would not degrade rendering performance significantly, as pointed out by PCPer, Tom's HW, VR-Zone, etc.
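As a rough back-of-the-envelope sketch (the ~192 GB/s and ~28 GB/s figures are the segment bandwidths commonly reported by reviewers, not official NVIDIA specs), you can estimate the blended bandwidth once a working set spills into the slow segment:

```python
# Rough model of the GTX 970's split VRAM: a fast 3.5 GB segment
# (~192 GB/s as commonly reported) and a slow 0.5 GB segment (~28 GB/s).
# Figures are review-site estimates, not official specifications.

FAST_GB, FAST_BW = 3.5, 192.0   # segment size (GB), bandwidth (GB/s)
SLOW_GB, SLOW_BW = 0.5, 28.0

def effective_bandwidth(working_set_gb: float) -> float:
    """Blended GB/s for reading a working set once, fast segment first."""
    fast = min(working_set_gb, FAST_GB)
    slow = max(0.0, working_set_gb - FAST_GB)
    seconds = fast / FAST_BW + slow / SLOW_BW
    return working_set_gb / seconds

print(round(effective_bandwidth(3.5), 1))  # 192.0 -- fits in the fast segment
print(round(effective_bandwidth(4.0), 1))  # 110.8 -- spills into the slow 0.5 GB
```

So even in the worst case, where the full 4 GB is touched, the blended figure is still far above the ~28 GB/s of the slow segment alone, which is why reviewers found the real-world hit to be modest.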
You are both right and wrong.
Right in the sense that the 970 uses a memory paging structure where the first 3.5 GB of VRAM is used at full speed, but when the full 4 GB of memory is needed, the last 0.5 GB slows down a bit. The 980 does not suffer from this architectural choice.
Wrong in the sense that this memory addressing choice by nVidia has absolutely no impact at all in AE, since AE does not use the video card in that way.
In very rare situations, when editing 4K+ material and using complex timelines, PR may suffer a bit in terms of performance ONLY when more than 3.5 GB of memory is used. That performance hit is very small and far preferable to, for instance, the 780 Ti with only 3 GB of memory, since on these very rare occasions the 780 Ti is automatically turned off completely, switching from hardware MPE to software MPE, making the performance hit far more significant. This happens when more than 3 GB is required.
Look at the situations below:
- AE: the video card is not relevant.
- PR: memory used > 3 GB -- the 780 Ti is turned off completely. Very severe performance hit.
- PR: memory used <= 3.5 GB -- the 970 runs at full speed.
- PR: memory used > 3.5 GB -- the 970 experiences a small performance hit when using the last 0.5 GB.
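The situations above can be sketched as a tiny helper. This is purely illustrative; the function name and return strings are made up for this sketch, and the thresholds are the thread's figures (3 GB on the 780 Ti, 3.5 GB fast segment on the 970), not anything queried from Premiere Pro itself:

```python
def mpe_outcome(card: str, vram_needed_gb: float) -> str:
    """Classify the Premiere Pro hardware-MPE outcome described above.
    Illustrative only: thresholds are the thread's 3 GB (780 Ti) and
    3.5 GB fast-segment (970) figures."""
    if card == "780Ti":
        # Only 3 GB on board: exceeding it drops to software MPE entirely.
        if vram_needed_gb > 3.0:
            return "software MPE (severe hit)"
        return "full speed"
    if card == "970":
        # 4 GB total, but the last 0.5 GB runs slower.
        if vram_needed_gb > 3.5:
            return "small hit (slow 0.5 GB)"
        return "full speed"
    raise ValueError(f"unknown card: {card}")

print(mpe_outcome("780Ti", 3.2))  # software MPE (severe hit)
print(mpe_outcome("970", 3.2))    # full speed
print(mpe_outcome("970", 3.8))    # small hit (slow 0.5 GB)
```

The asymmetry is the whole point: at, say, 3.2 GB of required memory, the 780 Ti already falls off a cliff while the 970 is still at full speed.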
I did not say that this choice has no impact; as per my original reply, I commented: "The full 4GB of VRAM is usable, but the last 0.5 GB segment is slower (ca. 28 GB/s vs. 192 GB/s). That's still decent bandwidth and would not degrade rendering performance significantly."