Transcoding is a CPU matter. The only things that are hardware-accelerated are scaling, blending, and blurring; everything else runs in software, on the CPU only.
My guess is that GTX 680 support is not far off, since it is now supported in After Effects. See "Mercury, CUDA, and what it all means".
If Adobe's libraries are not using CUDA or the GPU to speed up transcoding, they could be; that is part of CUDA's purpose, as far as I can see from Nvidia's developer site. An example of a GPU-transcoding article with data: http://www.tomshardware.com/reviews/video-transcoding-amd-app-nvidia-cuda-intel-quicksync,2839-7.html
More transcoding with Nvidia: http://www.steves-digicams.com/knowledge-center/how-tos/video-software/converting-video-files-with-nvidia-cuda.html
And the Nvidia CUDA dev site: http://developer.nvidia.com/cuda/cuda-toolkit
When I drop a finished H.264 Blu-ray file into a sequence and export it from Premiere into AME, my graphics card shows about 40-60% regular usage throughout the encoding, no matter what format I am exporting to. But if I just drop the same original H.264 file into AME and set it to the exact same encode output preset, then the graphics card is not used at all.
Like David Weidengarger said, most video formats do not support any GPU encoding and it's all CPU - and what you just wrote outlines what's really going on. Whether or not encoding or transcoding a video uses the GPU is 100% controlled by the codec. It has nothing to do with the software, as the software makers didn't invent the codec, they're just using it. So unless Adobe invents their own video format that has really nice GPU-based calculation capabilities, you're stuck with however much effort the original codec designer put into making their product.
Unfortunately, most video formats/codecs don't allow for GPU acceleration, so you're always going to see really unusual performance discrepancies between the different format options.
But, as David pointed out, Adobe just grabs a copy of the DLLs involved with each codec, includes them in the installer for Premiere, then pretty much lets them sit there. So if there's a new version of DivX or Xvid or Matroska or WMV or H.264 that supports GPU acceleration and/or CUDA, you don't typically get that update, since Adobe has to test it thoroughly before shipping it as an update it can assure customers works flawlessly.
That's why the codec manufacturers sometimes make the update available themselves, as plugins, unusual hacks, manual codec replacements in your system, and all that fun stuff.
Encoding/transcoding in Adobe products to MPEG-2 and H.264 is 100% CPU, done by the MainConcept codec (notice that Harm did not say 100% of the formats). This is what Adobe uses, and I have no doubt they will keep it until either MainConcept changes or someone else can equal or improve on the quality. Many of the video effects are handled on the GPU, as are scaling, blending, blurring, and frame-rate conversion.
Hi, I have written a 'proof-of-concept' (i.e. unsupported beta) NVidia H264-encoder plugin for Adobe Premiere Pro CS6. If you are interested in trying it, go to the following forum post: http://forums.adobe.com/message/5458381
Couple of notes:
(0) GPU requirement: NVidia "Kepler" GPU or later (desktop GTX650 or higher, laptop GT650M or higher.) The plugin uses the new dedicated hardware-encoder (NVENC) introduced with the 2012 Kepler GPU.
(1) The "ideal" best-case speedup (over Mainconcept H264) is roughly 4-5x on a consumer desktop PC (single-socket PC, Intel i5-3570K). Naturally, actual results vary with the source video/render.
(2) Interlaced video encoding is NOT supported. (I couldn't get this to work; I think it's a GPU-driver issue.)
(3) Only uncompressed PCM-audio is supported (no AAC/AC-3 audio.) Also, multiplexing is limited to MPEG-2 TS. If you want to generate *.MP4 files, you'll need to do your own offline postprocessing outside of Adobe.
(4) In terms of picture-quality (artifacts, compression efficiency), NVidia hardware (GPU) encoding is still inferior to software-only solutions (such as Mainconcept or x264.)
In short, don't expect the NVENC-plugin to replace software-encoding any time soon. It's not a production-quality product. And even if it were, software-encoding still has a place in a real workflow; until consumer hardware-GPU encoding can match the video-quality of the Mainconcept encoder, you'll still be using Mainconcept to do your final production video renders.
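For the offline postprocessing mentioned in note (3), here is a minimal sketch of a TS-to-MP4 remux. It assumes ffmpeg is installed; the filenames and helper names are my own placeholders, not part of the plugin:

```python
import subprocess

def build_remux_cmd(src_ts, dst_mp4):
    # Stream-copy (-c:v/-c:a copy): repackage the H.264 video and PCM audio
    # from the MPEG-2 TS into an MP4 container without re-encoding anything.
    return ["ffmpeg", "-y", "-i", src_ts, "-c:v", "copy", "-c:a", "copy", dst_mp4]

def remux(src_ts, dst_mp4):
    # Runs ffmpeg; raises CalledProcessError if the remux fails.
    subprocess.check_call(build_remux_cmd(src_ts, dst_mp4))
```

Note that many MP4 players won't accept raw PCM audio, so a real pipeline would likely also transcode the audio track to AAC at this step instead of stream-copying it.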
CUDA is meant for general-purpose computing, including encoding.
If you perform H.264 encoding using CUDA on a GTX 680, a video that takes 90 minutes on an i7 CPU can be done in 1 minute.
At that speed, the output of the hardware-encoder would be so poor, the video may as well be disposable. NVENC is NOT faster than Intel Quicksync; actually Quicksync can be substantially faster. But NVENC (currently) holds the slight edge in compression-quality.
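As a sanity check on those numbers, a back-of-the-envelope calculation using only the figures from this thread: the roughly 4-5x best-case NVENC speedup quoted earlier would take a 90-minute CPU encode down to around 18-22 minutes, nowhere near 1 minute:

```python
cpu_minutes = 90.0  # software (CPU) encode time claimed above

# Claimed CUDA result: the same encode in 1 minute -> a 90x speedup.
claimed_speedup = cpu_minutes / 1.0

# Best-case NVENC speedup over Mainconcept reported earlier: ~4-5x.
for speedup in (4.0, 5.0):
    print("%.0fx speedup -> %.1f minutes" % (speedup, cpu_minutes / speedup))

print("claimed speedup: %.0fx" % claimed_speedup)
```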
Off on a tangent, CUDA in MPE acceleration offers both a speed advantage and a quality advantage, because CUDA video-frame processing is highly parallelizable and can exploit the GPU's numerous floating-point computational arrays to speed up processing and do more complex processing. That's a double win. So what does this have to do with encoding? Right now, hardware video-encoding (which comes after the video-rendering step) only offers improved speed. My experience with NVENC has shown me it does not improve video-quality. At best, it is comparable to good (and slower) software-encoding when allowed high video-bitrates. At lower video-bitrates (such as Youtube streaming), software-encoding is still A LOT better.