6 Replies Latest reply on Aug 22, 2008 8:30 AM by Mylenium

    GeForce GTX 280 being able to use all 240 shader cores for video encoding

      How does the fact that the new GeForce GTX 280/260 has CUDA parallel processing and AGEIA PhysX support affect After Effects CS3?

      A lot of videos show Nvidia demonstrating CUDA with these cards, rendering a video in Adobe Premiere up to 10x faster than a 3 GHz quad-core CPU because it runs 10,000 threads across the 240 shader cores.

      http://www.youtube.com/watch?v=lP1v_3OCDq0

      http://www.youtube.com/watch?v=8C_Pj1Ep4nw
      New Nvidia graphics cards used to encode HD video at 2x real time

      Bottom line: how is After Effects coming along with CUDA technology?
        • 1. Re: GeForce GTX 280 being able to use all 240 shader cores for video encoding
          Mylenium Most Valuable Participant
          >How does the fact that the new GeForce GTX 280/260 has CUDA parallel
          >processing and AGEIA PhysX support affect After Effects CS3?

          Not at all. AE uses OpenGL. What Nvidia does in its tech demos by replacing a few DLLs or implementing a plugin in Premiere is entirely their own affair, and the same goes for AE - any developer is free to implement such functions, but, to be blunt, seeing that there is not yet any real market for it, anyone doing so is more or less wasting his energy.

          Mylenium
          • 2. Re: GeForce GTX 280 being able to use all 240 shader cores for video encoding
            Level 1
            The upcoming plugins for Adobe applications, promising a 16x speedup for functions like video encoding, would certainly seem to be a huge advantage for us. That is, if their encoder plugin produces Blu-ray-legal H.264 video in all the flavors that the current Adobe Media Encoder does. Who knows? Maybe CS4 will have CUDA built in.
            • 3. Re: GeForce GTX 280 being able to use all 240 shader cores for video encoding
              Level 1
              " anyone doing so is wasting his energy more or less."

              Hmm, if I can gain 10x faster rendering, how could that be a waste of time? I hope someone manages to build such a plugin soon.
              • 4. Re: GeForce GTX 280 being able to use all 240 shader cores for video encoding
                Mylenium Most Valuable Participant
                >Hmm, if I can gain 10x faster rendering, how could that be a waste
                >of time? I hope someone manages to build such a plugin soon.

                Well, for the end user certainly, but look at this from a developer's perspective: if he wants to create a plugin, he needs to sell it, and depending on how specific and specialised a tool is, the numbers may be low due to lack of market demand. In this case it is further complicated by the lack of market penetration of the cards themselves - how many users do you think actually have a CUDA-enabled card? Perhaps the ones who bought a new computer in the last half year, but compared to the millions of other users, those numbers are utterly insignificant. A developer is not going to build a plugin he can only sell to 100 users. Simple business math stands against it. These things will come, but not now. I predict it will take at least two years to see more widespread support on that end, and even then it remains an open question whether it will be CUDA, OpenCL or some other proposed "standard". Until then, OpenGL pixel shaders are a much better way to provide some of these advantages and reach a mass market...

                Mylenium
                • 5. Re: GeForce GTX 280 being able to use all 240 shader cores for video encoding
                  Level 1
                  The whole CUDA processing language is open source - and most plugins for Flash and Photoshop are free - this is not a bundle of actions and effects, this is a single plugin for one process - and the fact that it is open source would really push the advanced but common C++/OpenGL programmer to share his work.
                  • 6. Re: GeForce GTX 280 being able to use all 240 shader cores for video encoding
                    Mylenium Most Valuable Participant
                    Na, you're talking *beep*. Without a generic CUDA interpreter on the host end, what use is it that the tools, sample implementations and sample code are available for free? Why should a developer, who has coded this stuff himself for AE and invested a lot of energy and time, consider giving it away for free? You know, it's not as if a herd of Nvidia-employed people is just waiting for someone like you to come along and ask them to convert his most-used AE plugins to CUDA...

                    You're confusing the availability of a technology with how it could be used in a specific area of commercial software, or at least in free extensions to commercial software. And CUDA has not really been available to the end user until recently - the spec may have existed since 2005, but it had little or no influence on users since you couldn't buy or afford cards supporting it. In the GeForce 7xxx series only a few higher-end models did, and even then only with experimental drivers. Likewise, CUDA is not supported by all mainstream 8xxx or 9xxx cards due to buggy hardware or flaky drivers, and all your theoretical benefits turn to ashes if you don't have the right one.

                    You are also assuming that all code on this planet could easily be converted to run in a parallelized environment like CUDA, which is just as wrong. The language may be C-like, but limitations in register handling, thread management and so on will still require massive changes to some parts of the code. It's no different than it was with OpenGL. Think about it: how long did it take before programmers mastered GLSL and ARB extensions well enough to provide advanced realtime shading in 3D programs? A whole lot longer than it took to write up the spec... It is not, and will not be, much different with CUDA.
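                    To make that concrete, here is a minimal sketch (a hypothetical brightness adjustment, nothing from the actual AE SDK): a plain CPU loop has to be rewritten as a kernel, split across thousands of threads, with explicit memory transfers to and from the card - and that is the trivial case, before you even touch shared memory or register limits.

                    #include <cuda_runtime.h>

                    // CPU version: one thread walks every pixel in order.
                    void brighten_cpu(unsigned char* pixels, int n, int offset) {
                        for (int i = 0; i < n; ++i) {
                            int v = pixels[i] + offset;
                            pixels[i] = v > 255 ? 255 : v;
                        }
                    }

                    // CUDA version: the same work, but each of the thousands of
                    // threads computes its own index and handles a single pixel.
                    __global__ void brighten_kernel(unsigned char* pixels, int n, int offset) {
                        int i = blockIdx.x * blockDim.x + threadIdx.x;
                        if (i < n) {
                            int v = pixels[i] + offset;
                            pixels[i] = v > 255 ? 255 : v;
                        }
                    }

                    void brighten_gpu(unsigned char* host_pixels, int n, int offset) {
                        unsigned char* dev_pixels;
                        cudaMalloc(&dev_pixels, n);                    // allocate a buffer on the card
                        cudaMemcpy(dev_pixels, host_pixels, n, cudaMemcpyHostToDevice);
                        int threads = 256;
                        int blocks = (n + threads - 1) / threads;      // enough blocks to cover every pixel
                        brighten_kernel<<<blocks, threads>>>(dev_pixels, n, offset);
                        cudaMemcpy(host_pixels, dev_pixels, n, cudaMemcpyDeviceToHost);
                        cudaFree(dev_pixels);
                    }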

                    Sorry, things don't work this way, and there's a crucial difference between a tech demo, sample implementations, the theoretical possibility and the actual practical application. As I said - these things will come, but only in a few years, when other, much more sales-driving applications also use CUDA for simulating physics in games, decoding HD video and god knows what. At this point the relevance of this stuff for end users is close to zero, and they won't go out of their way to use a "dead" feature, regardless of how great its theoretical potential is. It's the same **** as Intel selling multicores to everyone when in fact very few programs can really use them fully. At this point, these things mostly fall into the "marketing blurb" category....

                    Mylenium