
nVidia GTX 680 now at Newegg

Mar 21, 2012 7:07 PM

  Latest reply: Harm Millaard, Apr 7, 2012 12:34 AM
  • Mar 29, 2012 1:48 AM   in reply to John T Smith

    Just one novice question.

    Is this card already officially supported by Adobe in CS5.5 or do I have to wait for CS6?

    I guess that since you are already running benchmarks, there is a way to use it with CS5.5 Premiere and/or After Effects.

    Is there a simple way to make the CS5.5 products recognize and use this new card, or might that be a problem because of the new architecture (Kepler instead of Fermi)?

    Thank you for your help.

     
  • Mar 29, 2012 6:35 AM   in reply to C.-D. Schulz

    "officially" ? - good question.  I just checked myself, and no it is not, however I don't think they have updated this list for a long-time.

    http://www.adobe.com/products/premiere/tech-specs.html

     

    Funny, though: I go by a deli every day on my way to work, and THEY have the time to update their list, i.e. "soup of the day", LOL.

    You would think Adobe could also. It's kind of amazing they don't have the manpower to do so, or don't think it's important enough.

     

    This new card works with version 5.5, though. Officially, as of today, March 29, 2012, no, it doesn't.

     

    Dave.

     

    And yes, take my post with a smile; I'm joking a little bit here, but in reality, yes, Adobe should be on the ball a little quicker. A note should be there at least letting people know that the new Nvidia cards "x, y and z" will soon be officially supported.

     

     

    C.-D. Schulz wrote:

     

    Just one novice question.

    Is this card already officially supported by Adobe in CS5.5 or do I have to wait for CS6?

    I guess that since you are already running benchmarks, there is a way to use it with CS5.5 Premiere and/or After Effects.

    Is there a simple way to make the CS5.5 products recognize and use this new card, or might that be a problem because of the new architecture (Kepler instead of Fermi)?

    Thank you for your help.

     
  • Mar 29, 2012 6:38 AM   in reply to Bill Gehrke

    Hi Bill,

     

    Those are not the results I was expecting to hear; it will be good to see others pipe in on this when they get their cards. I have a feeling, though, that in different areas this card is going to greatly outshine your 480. Uh... let's hope that's the case :-)

     

    Dave.

     

     

    Bill Gehrke wrote:

     

    Well, my GTX 680 arrived earlier than I guessed, and with just one or two runs (and a couple of baseline runs with my GTX 480) the GTX 680 appears to be about 10 seconds faster on the PPBM5.5 MPEG2-DVD encoding test on my 5.0 GHz i7-2600K 32 GB RAM machine. I just did not have enough time this morning to pull the GTX 580. Was it worth $500 more? Only time will tell, but it sure was worth waiting for, as my second machine has needed an upgrade. More details later.

     
  • Mar 29, 2012 9:32 AM   in reply to David Zeno

    My system:

     

    OS Windows 7 Ultimate 64 bit

    Adobe Creative Suite 5.5 with the latest update

    Boot disk: Intel SSD 160 GB SATA II

    i7-980X @ 4119 MHz

    Gigabyte GA-X58 UDR3

    24 GB Kingston 1600 (9-9-9-27-36) @ 1420 (8-8-8-27-36)

    RAID 0 (2 × VelociRaptor 600)

    GTX 580

    Latest Drivers available

     

    My PPBM5 result with MPE (GTX 580):

    "66","51","43","5"

    With the GTX 680:

    "63","59","41","5"

    I am happy, considering that there is only a first driver release for the GTX 680.
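
    For readers comparing their own runs against these, the raw PPBM5 strings are easier to read as per-test deltas. A minimal sketch in Python, treating the four values simply as timings in seconds and not assuming which PPBM5 test each column belongs to:

```python
# Compare the two PPBM5 result strings posted above (GTX 580 vs GTX 680).
# The four values are treated as generic timings in seconds; this sketch
# does NOT assume which PPBM5 test each column corresponds to.

def parse(result: str) -> list[int]:
    """Turn a line like '"66","51","43","5"' into [66, 51, 43, 5]."""
    return [int(v.strip().strip('"')) for v in result.split(",")]

gtx580 = parse('"66","51","43","5"')
gtx680 = parse('"63","59","41","5"')

for i, (old, new) in enumerate(zip(gtx580, gtx680), start=1):
    delta = new - old
    pct = 100.0 * delta / old
    print(f"column {i}: {old}s -> {new}s ({delta:+d}s, {pct:+.1f}%)")
```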

     

     

     

    
     
  • Mar 29, 2012 9:44 AM   in reply to Fabio Pis

    Fabio

     

    Your results are slower with the 680 and you say you're happy???

     

    Baz

     
  • Mar 29, 2012 9:44 AM   in reply to Fabio Pis

    I'm not sure a driver update is going to change those numbers any.  PP isn't a video game.

     
  • Mar 29, 2012 10:09 AM   in reply to Studio North Films

    Baz,

    quite similar results, and this is only the first driver release.

    I am happy because now I can use my monitor configuration (Dell 27'', Samsung 23'' (3D Vision ready) and an Acer 3D projector) with less pain, using a similarly powerful card. Maybe I didn't expect much more, and I am confident of better results with new driver releases.

    Jim, I have never installed a single game on my system.

    I use PP with both cards in the same way, and for me, at this point, it is a great result (IMHO).

     
  • Mar 29, 2012 11:26 AM   in reply to Fabio Pis

    You must have something wrong; I am getting MPEG2-DVD scores about 10 seconds faster than with my 480 and 580. What driver are you using? At nVidia they are now at 301.10, whereas the board shipped with 300.83. Incidentally, this driver is not backward compatible with previous GTXs. Of course, one thing I never load is the 3D and PhysX drivers.

    The other major problem is that your motherboard and CPU cannot take advantage of PCIe 3.0. Unfortunately, I just found out that my Sandy Bridge motherboard (GA-Z68XP-UD4) will not do PCIe 3.0 either, as it is a revision 1.0 board and you need revision 1.3, and my i7-2600K is also not PCIe 3.0 capable.

     
  • Mar 29, 2012 12:40 PM   in reply to Bill Gehrke

    Yes Bill, I saw your results on PPBM5 and I think I have some codec or BIOS settings problem. The driver is the same (301.10), but maybe I have to clean out the old GTX 580 driver installation better.

    Setting the DDR to 1333 (7-7-7) I get a slightly better MPEG2-DVD benchmark and the MPE On score goes to 4, but this result is the same for both cards (580/680).

    I load the 3D Vision and PhysX drivers because I need them for my job. I don't know if this is a problem for the benchmark.

    As soon as possible I'll try a clean OS install.

     
  • Mar 29, 2012 6:45 PM   in reply to Fabio Pis

    Fabio, I believe Jim was saying that Premiere Pro is not a video game, meaning that video games are the "apps", as Microsoft now calls them with Windows 8, that are taking full advantage of these new cards.

    I'm getting a funny suspicion that you can max out the performance of Premiere with a GTX 580 card. A 10% gain in performance is nothing to write home about, and it may even be a fluke at this point.

    From what I understand, ALL rendering out to a video file is CPU based, so the video card would NOT aid the render times. I have a GTX 470, and the timeline is either red, yellow or green; that's it, there is no in-between.

    When I load a new video file, it doesn't take x number of seconds for the timeline to turn any of those colors; it just is, as soon as I load the file. So this tells me that nothing changes with the 680 video card. Sure, it will perhaps play video back a tiny bit smoother, but perhaps not even that.

    Things are sure complicated at this point.

    I'm guessing that the motherboard you are using with your new video card is only PCIe 2.0. This is a bottleneck in the system, and it's 100% sure you are not getting the throughput the card can give you. However, for actual render time, again, in theory, if you had NO video card in your computer, render times (outputting to a video file) should remain exactly the same. That is, if I understand the basics of how Premiere works: Premiere uses 100% CPU power to render a video file out to disk, and the video card does nothing to increase or decrease the times you get when rendering to an output video file.

     

    This stuff is rather confusing.

     

    Dave.

     
  • Mar 29, 2012 7:57 PM   in reply to Fabio Pis

    I am confident of better results with new driver releases.

    I would be surprised, as new driver updates aren't normally geared towards improving compute performance; they're generally tweaks to make specific games run faster or to eliminate game errors. And even then the performance increase is usually pretty small, typically 1% to 5%, which is far short of the 300% render performance increase we all wanted to see from a card with 3x the CUDA cores.

     
  • Mar 29, 2012 11:10 PM   in reply to David Zeno

    David, Jim,

    I posted my benchmark data to help people understand that, for an "experienced" PP user, this card is pretty much identical in performance to the GTX 580, and maybe to lower cards too.

    I bought this card for various reasons:
    1) monitor management
    2) power consumption
    3) new technology
    I knew that I would probably not see large benefits from it, but it was clear to me that the 512 CUDA cores of my "old" GTX 580 are by no means inferior to the "new" 1536 CUDA cores (ATI's stream-processor counts teach the same lesson): an Nvidia marketing move.

    For me, PCI Express bandwidth is not the problem for now.

    I think that with this move Nvidia (and its partners) maybe wants to seriously push the use of the new Quadro family of graphics cards.

    For these reasons I am happy with my new card; I probably expected even less. I hope I have expressed my thoughts more clearly now, and sorry for my bad English.
    
     
  • Mar 30, 2012 12:34 AM   in reply to Fabio Pis

    Fabio, can you please tell me the GPU clock speed of both GTX cards?

    I will use a GTX 6xx series card ONLY if I have a PCIe 3.0 slot ("X79").

    Why?

    The memory interface.

     

    Look:

    GTX 580

    GPU specifications:
    CUDA cores: 512
    Graphics clock (MHz): 772
    Processor clock (MHz): 1544
    Texture fill rate (billion/s): 49.4

    Memory specifications:
    Memory clock (MHz): 2004
    Standard memory config: 1536 MB GDDR5
    Memory interface: 384-bit
    Memory bandwidth (GB/s): 192.4

    GTX 680

    GPU specifications:
    CUDA cores: 1536
    Base clock (MHz): 1006
    Boost clock (MHz): 1058
    Texture fill rate (billion/s): 128.8

    Memory specifications:
    Memory data rate: 6008 MHz effective (6.008 Gbps per pin)
    Amount of memory: 2048 MB GDDR5
    Memory interface: 256-bit
    Memory bandwidth (GB/s): 192.2
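
    For what it's worth, the two bandwidth figures follow directly from data rate times bus width, which is why the 680's narrower 256-bit interface still lands at roughly the same number. A quick sanity check; doubling the 580's 2004 MHz I/O clock to its effective GDDR5 rate is my assumption about how that spec sheet lists it:

```python
# Memory bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8 bits per byte.
# Figures are taken from the spec lists above; treating the GTX 580's 2004 MHz
# clock as an effective 4.008 Gbps GDDR5 rate is an assumption, not from the thread.

def bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

print(f"GTX 580: {bandwidth_gb_s(2 * 2.004, 384):.2f} GB/s")  # ~192.4 GB/s (listed value)
print(f"GTX 680: {bandwidth_gb_s(6.008, 256):.2f} GB/s")      # ~192.2 GB/s (listed value)
```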

     

     

     

     

     

    If you do not have PCIe 3.0, the GTX 580 will give you better performance.

    ONLY the clock speed on the GTX 680 can make the difference.

    Let's see what happens with a GTX 680 on an overclocked X79 mobo.

     
  • Mar 30, 2012 12:46 AM   in reply to Crist OC/PC

    Standard reference clocks for both cards; no overclock on either.

    In my opinion, PCIe 3.0 will not make the difference; as for bandwidth, I think 2.0 is not saturated today.

    As a note: monitoring GPU load with GPU-Z, it seems the GPU is not used much during a Premiere Pro job (66% load at most), for both cards.

    (GPU-Z screenshot: http://gpuz.techpowerup.com/12/03/30/8pc.png)
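
    GPU-Z is one way to watch that; as a rough cross-check on other setups, utilization can also be polled from the command line while an export or benchmark runs. A minimal sketch, assuming an NVIDIA driver with the nvidia-smi tool on the PATH (nothing Premiere-specific here):

```python
# Poll GPU utilization once per second while an export or benchmark runs.
# Assumes the NVIDIA driver's nvidia-smi utility is installed and on the PATH.
import subprocess
import time

def gpu_utilization_percent() -> int:
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return int(out.strip().splitlines()[0])  # first GPU only

if __name__ == "__main__":
    peak = 0
    try:
        while True:
            util = gpu_utilization_percent()
            peak = max(peak, util)
            print(f"GPU load: {util:3d}%  (peak so far: {peak}%)")
            time.sleep(1)
    except KeyboardInterrupt:
        print(f"Peak GPU load observed: {peak}%")
```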

     
  • Mar 30, 2012 12:59 AM   in reply to Fabio Pis

    So you recognize that the memory interface is superior on the GTX 580, and even on the GTX 570. That means a GTX 570 overclocked to 950 MHz will work better than the GTX 680 in a PCIe 2.0 slot.

    PCIe 3.0 IS the big thing, and this is for everyone following what I'm writing: PCIe 3.0 doubles the bandwidth of PCIe 2.0.
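
    The "doubles" figure holds on paper once the different line encodings are accounted for. A rough back-of-the-envelope calculation of theoretical x16 throughput per direction, ignoring packet overhead:

```python
# Theoretical per-direction PCIe throughput for an x16 slot.
# Gen 2.0: 5 GT/s per lane with 8b/10b encoding (80% efficiency).
# Gen 3.0: 8 GT/s per lane with 128b/130b encoding (~98.5% efficiency).

def x16_throughput_gb_s(gt_per_s: float, encoding_efficiency: float, lanes: int = 16) -> float:
    # One bit per transfer per lane; divide by 8 to get bytes.
    return gt_per_s * encoding_efficiency * lanes / 8

print(f"PCIe 2.0 x16: {x16_throughput_gb_s(5.0, 8 / 10):.2f} GB/s")    # 8.00 GB/s
print(f"PCIe 3.0 x16: {x16_throughput_gb_s(8.0, 128 / 130):.2f} GB/s") # ~15.75 GB/s
```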

     

    But, as usual, I will post some results which speak for themselves.

     

    Cheers.

     
  • Mar 30, 2012 1:30 AM   in reply to John T Smith

    Is the GTX 680 on Adobe's official list of supported cards yet ?

     

    Also, I'm curious to know if that card is suitable for After Effects CS5.5. Can it help speed up previews?

    I'm currently using OpenGL via a 1 GB AMD Radeon HD 6770 to keep the previews moving.

    It's a bit limiting, as it doesn't allow me to add basic effects such as 'Blur' to as many layers as I'd like.

     
  • Mar 30, 2012 2:51 AM   in reply to John T Smith

    Hi,

     

    I have been following the discussion here to help me decide whether I should buy the GTX 680 for rendering video with PP on a new dedicated X79 (six-core i7) machine with PCIe 3.0.

    So far, it seems like I should not.

    Fabio (today) says: "I think that with this move Nvidia (and its partners) maybe wants to seriously push the use of the new Quadro family of graphics cards."

    This suspicion seems to have support here:

    http://www.tomshardware.com/reviews/geforce-gtx-680-review-benchmark,3161-15.html

    So, for now, Quadro seems to me to be a better choice for rendering purposes.

    Transcoding: Tom's Hardware's MPEG2-to-H.264 benchmark seems to have run into a software bug with the Espresso application, so I am not sure how much truth the result represents. The better benchmark results for H.264-to-H.264/YouTube-format transcoding are more convincing.

    As for rendering speed, I suppose this is more of a CPU thing, largely dependent on the number of cores. I am not sure, though, how CPU cores work relative to CUDA, so please correct me if I am wrong; it would really make me happy.

    But if Tom's Hardware's review is right in its conclusion that the GTX 680 has in fact been deliberately reduced in compute power so as not to compete with nVidia's own Quadro cards, it seems to me that the GTX 680 is a dedicated gaming card, not a video rendering card. Unfortunately.

    I will continue to follow this thread, hoping that somebody proves me wrong.

    Happy Easter!

     
  • Mar 30, 2012 3:38 AM   in reply to RMO2011

    RMO2011

     

    Do you know what mobo Tom's Hardware used for this benchmark?

    If that benchmark was made on a mobo other than X79, it only applies to mobos with PCIe 2.0.

     

    You get my point?

     
  • Mar 30, 2012 4:02 AM   in reply to Crist OC/PC

    Hi Crist OC/PC.

    I get your point.

    The mobo used was a Gigabyte X79-UD5.

    As far as I know, this mobo supports downloading drivers for third-generation PCIe as soon as you need it, though the default, to my knowledge, is PCIe 2.0.

    Total benchmark setup here:

    http://www.tomshardware.com/reviews/geforce-gtx-680-review-benchmark,3161-6.html

     
  • Mar 30, 2012 4:07 AM   in reply to RMO2011

    Do you have any idea of the clock speeds of all these GPUs?

    Something looks very wrong to me in that benchmark.

     
  • Mar 30, 2012 4:53 AM   in reply to El_Plates

    NO.

     

    Read the post earlier on this page; it has a direct link to the officially supported cards.

    El_Plates wrote:

     

    Is the GTX 680 on Adobe's official list of supported cards yet ?

     

     
  • Mar 30, 2012 4:54 AM   in reply to Crist OC/PC

    Rendering a video file to your hard drive has NOTHING to do with the video card.

     

     

    Crist OC/PC wrote:

     

    Fabio, can you please tell me the GPU clock speed of both GTX cards?

    I will use a GTX 6xx series card ONLY if I have a PCIe 3.0 slot ("X79"). Why? The memory interface.

    [GTX 580 and GTX 680 specifications quoted in full above]

    If you do not have PCIe 3.0, the GTX 580 will give you better performance. ONLY the clock speed on the GTX 680 can make the difference. Let's see what happens with a GTX 680 on an overclocked X79 mobo.

     
  • Mar 30, 2012 5:13 AM   in reply to David Zeno

    "NO.

     

    Read the post earlier on this page; it has a direct link to the officially supported cards.

    El_Plates wrote:

     

    Is the GTX 680 on Adobe's official list of supported cards yet ?

     

    "

     

    Not officially. So what? Like the GTX 460, GTX 480, GTX 560/580/590, they work anyhow.

     
  • Mar 30, 2012 5:15 AM   in reply to RMO2011

    The Tom's Hardware benchmark using MediaEspresso 6.5 uses the new NVENC transcoding engine, not CUDA. It's similar to what Intel introduced in Sandy Bridge with Quick Sync. To quote Tom's:

     

    “a more purpose-built fixed-function pipeline capable of better performance at substantially lower power use.”

     

    It’s clear from the reviews that nVidia wanted to build a smaller more power efficient GPU for gaming which is why they compromised on the Compute side of things.

    That makes sense as for these consumer cards its gaming performance that is king so why compromise them with a higher manufacturing cost (larger die size) and higher power consumption just to cater for a niche area.

    For most home users the other advantage of having a GPU is for video transcoding and the new NVEnc feature will cover that.

     

    Tough luck for MPE users of course but there will surely be consumer versions of the BIG Kepler GPU due around Q3 so you won’t be forced to use Quadros to get better CUDA performance.

     
  • Mar 30, 2012 5:17 AM   in reply to David Zeno

    So, David Zeno, take the video card out and make a render...

    Who says so??? What are you talking about???

     
  • Mar 30, 2012 6:12 AM   in reply to eclipse_crow

    Hi eclipse_crow.

     

    Thanks for clarifying things!

    Very much appreciated!

     
  • Mar 30, 2012 6:19 AM   in reply to RMO2011

    Present Quadros can't even outperform the 570/580; they certainly will not do so against a 680.

    Whilst I definitely believe the 680 is a crippled card (I still don't have one to benchmark, so I am talking out of my butt for the most part), until the newer Quadros are released this is speculation at best, or a conspiracy theory.

    I would not put it past nVidia to do so at all; on the other hand, if it were that easy they would have already done so.

    CUDA is CUDA, and a good chunk of that programming has to do with games, an area where they have to show best. Workstation graphics is at best a small % of the market.

     

    scott

    ADK

     
  • Mar 30, 2012 6:48 AM   in reply to Crist OC/PC

    I don't understand your comment. The original poster didn't ask if he can hack a card to make it work.

    He asked if the 680 is "officially supported". Perhaps you should read the question properly. No need to be rude.

    I directly and accurately answered the person's question. Your response is not accurate, or "official".

     

    Dave.

     

     

    Crist OC/PC wrote:

     

    "NO.

     

    Read the post earlier on this page; it has a direct link to the officially supported cards.

    El_Plates wrote:

     

    Is the GTX 680 on Adobe's official list of supported cards yet ?

     

    "

     

    Not officially. So what? Like the GTX 460, GTX 480, GTX 560/580/590, they work anyhow.

     
  • Mar 30, 2012 6:50 AM   in reply to Crist OC/PC

    Do you own a copy of Premiere?

    If you did, you would know that Premiere's render speeds to an output file are based on your CPU (a CPU is a computer chip on your motherboard). The video card makes no difference. I just assumed you knew Premiere Pro. This forum assumes you know a little bit about Premiere and how it works; perhaps read up a bit and come back later with some intelligent remarks.

     

    Dave.

     

     

    Crist OC/PC wrote:

     

    So, David Zeno, take the video card out and make a render...

    Who says so??? What are you talking about???

     
  • Mar 30, 2012 7:22 AM   in reply to David Zeno

    Actually, David, you are wrong. According to Adobe, when you export, any of these attributes that are part of your sequence are handled and sped up by a CUDA card (a quick way to check a timeline against the list is sketched right after it). They include...

    • Alpha Adjust
    • Basic 3D
    • Black & White
    • Brightness & Contrast
    • Color Balance (RGB)
    • Color Pass
    • Color Replace
    • Crop
    • Drop Shadow
    • Extract
    • Fast Color Corrector
    • Feather Edges
    • Gamma Correction
    • Garbage Matte (4, 8, 16)
    • Gaussian Blur
    • Horizontal Flip
    • Levels
    • Luma Corrector
    • Luma Curve
    • Noise
    • Proc Amp
    • RGB Curves
    • RGB Color Corrector
    • Sharpen
    • Three-way Color Corrector
    • Timecode
    • Tint
    • Track Matte
    • Ultra Keyer
    • Video Limiter
    • Vertical Flip
    • Cross Dissolve
    • Dip to Black
    • Dip to White
    • scaling
    • deinterlacing
    • blending modes
    • color space conversions
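
    As a rough way to reason about whether a particular timeline benefits, the effect names can be checked against that list by hand. An illustrative sketch only; it does not talk to Premiere, and my_sequence_effects is a hypothetical, hand-maintained list:

```python
# Illustrative only: check which effects in a hand-listed sequence appear on the
# CUDA-accelerated list quoted above. Premiere itself is not queried here, and
# my_sequence_effects is a hypothetical example, not read from a real project.

CUDA_ACCELERATED = {
    "Gaussian Blur", "Fast Color Corrector", "Three-way Color Corrector",
    "RGB Curves", "Sharpen", "Drop Shadow", "Crop", "Cross Dissolve",
    # ... plus the remaining entries from the list above
}

my_sequence_effects = ["Gaussian Blur", "Lens Flare", "Cross Dissolve"]

for effect in my_sequence_effects:
    status = "GPU-accelerated" if effect in CUDA_ACCELERATED else "CPU only (not on the list)"
    print(f"{effect}: {status}")
```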
     
  • Mar 30, 2012 7:25 AM   in reply to Scott Chichelli

    Hi Scott

     

    Thanks for answering.

     

    "Outperform" - do you mean as a graphics card in a PP situation in a dedicated video machine? (Dedicated = as few processes as possible running - just clean OS with no crap + PP + necessary processes = a specialized workstation).

     

    And even if CUDA is "shared" between gaming and workstation graphics, one has little choise i guess - if you want to use PP you have to use the CUDA cores left over by gaming - in for instance the GTX 680. Is this correct?

     
  • Mar 30, 2012 7:34 AM   in reply to David Zeno


    David, I never said that the GPU is used in rendering. I'm an overclocker, so why the comment "Rendering a video file to your hard drive has NOTHING to do with the video card"? That is why I asked you: "Who says so??? What are you talking about???"

    If I were arrogant I would make you look stupid, but I do respect people and I just believe this is a misunderstanding.

    What I was pointing at has to do with the memory interface difference between the GTX 580 and 680 and the relevance of PCIe 2.0 and 3.0.

    Sorry if I hurt your feelings.

     

     

     

     


     
  • Mar 30, 2012 8:29 AM   in reply to David Zeno

    David Zeno wrote:

     

    Do you own a copy of Premiere?

    If you did, you would know that Premiere's render speeds to an output file are based on your CPU (a CPU is a computer chip on your motherboard). The video card makes no difference. I just assumed you knew Premiere Pro. This forum assumes you know a little bit about Premiere and how it works; perhaps read up a bit and come back later with some intelligent remarks.

     

    Dave.

     

    David, if you knew Premiere you would find that your statement is wrong.

    Here is a plot from my system showing the GPU (my old GTX 580) usage on a PPBM5 run.

    (GPU load plot: GTX-580-MPEG-91%.jpg)

    The low-level, choppy section at the front of the GPU plot is during the PPBM5 H.264 encoding, the zero level in the middle is during the Disk I/O test, and the end of the plot (91%) is from the MPEG2-DVD encoding. Now, Premiere users know that the MainConcept MPEG encoder does not use the GPU, BUT scaling, pixel aspect ratio conversion, frame blending, time remapping, the Fast Color Corrector and many other effects are handled by the GPU.

     
  • Mar 30, 2012 7:55 AM   in reply to RMO2011

    RMO,

     

    Quadros are not as fast as the 5xx GTX cards.

    The Quadro 4000 is sort of a 460: the Quadro 4000 has 256 cores and 89 GB/s of memory bandwidth, while the 460 has 336 cores and 115 GB/s. The major difference is that the Quadros usually have more RAM.

    The Quadro 5000 is roughly a 465, the Quadro 6000 roughly a 470, etc.

    Someone, I think Bill, made a chart of this.

    The only time to buy a Quadro card is for SolidWorks, definitely NOT for anything Adobe.

     
  • Mar 30, 2012 8:17 AM   in reply to Scott Chichelli

    Thanks Scott for this very clear statement.

     

    Now I am back to where I started: buying the GTX 680 for video rendering. I just had to examine things first, to make sure I was doing the right thing.

    It's a shame that getting an evaluation of graphics cards from a video-rendering perspective is so difficult; it's mostly about gaming (because of the market share, I understand that).

     

    Thank you.

     
  • Mar 30, 2012 8:20 AM   in reply to Scott Chichelli

    My impression is that some people are looking at the de-emphasis of compute performance in the mid-range Kepler GPU in the GTX 680 and extrapolating that this means nVidia wants to force people to buy a Quadro for compute performance.

    Personally, I'd wait until the top-end Kepler GPU is released before jumping to such a conclusion. They are surely bound to release consumer gaming cards based on the same GPU core, which will have plenty of performance but simply lack the workstation features of the Quadros: drivers et al.

     
  • Mar 30, 2012 8:26 AM   in reply to RMO2011

    The other issue with workstation graphics benchmarks is that most sites still use SPECviewperf, which very heavily favors Quadros and has not been updated with newer versions of the software. E.g., AutoCAD is now DirectX and therefore no longer needs a Quadro.

    Needing a Quadro has to do with how in bed with nVidia the software manufacturer is (read: how much help they need coding CUDA etc.). It could also mean they are paid by nVidia one way or another (trying not to say too much).

    The only difference is firmware (and usually more RAM); hardware-wise they are the same cards, GTX vs. Quadro.

    Well, I've said enough... time to shut my yap before I go too far.

     

    scott

    ADK

     
  • Mar 30, 2012 1:26 PM   in reply to lasvideo

    Actually, I'm only half wrong, which makes you half wrong also. See, if you create previews, Premiere can use the previews to speed up the final export to a video file on your hard drive, and *that* is 100% CPU based, not GPU. If you had known this you would have indicated it in your post, but you didn't; now you know. It's a good way to speed up the export, but it only works for certain file types. Half right is better than being wrong though, isn't it :-)

     

     

    lasvideo wrote:

     

    Actually, David, you are wrong. According to Adobe, when you export, any of these attributes that are part of your sequence are handled and sped up by a CUDA card. They include... [the list of GPU-accelerated effects and transitions quoted in full above]
     
  • Mar 30, 2012 1:29 PM   in reply to David Zeno

    Like export, creating previews will also use GPU acceleration for effects and other rendering.

     
  • Mar 30, 2012 1:36 PM   in reply to Crist OC/PC

    Hey, sorry for thinking you were rude. All is good. I think email makes everyone read one thing and think another; email and posts on the Internet never really convey the "expression" somebody may be typing with.

    I am a happy person, but sometimes I may write stuff that ticks somebody off. I'm not right all the time, and I guess now I'll keep my mouth shut: no more comments from me, I'll only ask questions; that seems a better thing to do :-)

    I hope to see some more results from people using the new video cards, and specifically the new GTX 685 when it comes out in August. I just cannot see myself putting out $600 for a video card in a week or two, knowing that a "new" Nvidia card is coming out in August; not a nice game Nvidia is playing. I'd be very upset and disappointed if I put out big $$ for a video card and then found out they have one that is so much faster for only $200 more. When you factor in keeping a graphics card for 2, 3 or 4 years, $200 is not even worth talking about.

     

    We go out and put $60 of petrol in our automobiles every few days, and don't even think about it.

     

    Dave.

     

     

    Crist OC/PC wrote:

     

     

    Sorry if I hurt your feelings.

     

     

     

     


     
    |
    Mark as:
Actions

More Like This

  • Retrieving data ...

Bookmarked By (1)

Answers + Points = Status

  • 10 points awarded for Correct Answers
  • 5 points awarded for Helpful Answers
  • 10,000+ points
  • 1,001-10,000 points
  • 501-1,000 points
  • 5-500 points