
Will a dedicated GPU speed up LR4 to drive a high-resolution monitor?

Feb 19, 2013 11:28 AM

Tags: #problem #lr4 #lightroom4

I am planning to order a new custom-built Win7 desktop based on an i7-3770K and an Asus P8Z77 motherboard. The monitor will be a 27'' wide-gamut display with a resolution of 2560 x 1440. I am aware of the complications caused by such monitors, but have decided to go with it.

 

My main and most demanding application for this desktop will be LR4 (no CS6, games, videos, etc.). To maximize its performance I will have 16 GB RAM, an SSD for the OS and the LR4 application, a second SSD for cache, previews and catalogs, plus a 2 TB HDD for image storage.

 

The current version, LR 4.3, is still claimed 1) to be slow at driving high-resolution monitors and 2) not to utilize GPUs efficiently.

 

The Asus mobo with the 3770K/HD4000 is specified to be able to drive a monitor of the above resolution through its DisplayPort. However, would LR4 be faster if I added a decent dedicated GPU for the monitor?

 

In case the answer is no, would the addition of a second, smaller monitor (max resolution 1920 x 1080) to the above set-up change the situation and make a dedicated GPU beneficial? By its specifications, the Asus mobo with the 3770K/HD4000 can also drive dual monitors with the above resolutions simultaneously. However, my concern is slow LR4 ...

 

Thanks for any assistance.

 
Replies
  • Feb 19, 2013 11:50 AM   in reply to RuisKukka

    Lightroom does not use GPU.

     

    You may have other reasons (e.g. games, videos) to add a GPU, but don't get one for Lightroom.

     
  • Feb 19, 2013 11:02 PM   in reply to RuisKukka

    RuisKukka wrote:

     

    Do you really mean that LR4 does not utilize GPUs at all when building images in a frame buffer for output to a monitor?

     

    LR doesn't use the GPU...other than to send the screen data to the video card for display on screen. Photoshop uses the GPU, but so far neither Camera Raw nor Lightroom does.

     
  • Feb 20, 2013 4:25 AM   in reply to Jeff Schewe

    Hi,

    I would buy another video card. Something cheap. As far as I know, the Intel GPU and CPU share the I/O interface. This will drop the I/O rate of the CPU (= LR).

    I have only one small 64 GB SSD, but I use Intel SRT with very good results.

     
  • Feb 20, 2013 9:45 AM   in reply to RuisKukka

    I would NOT suggest buying a "cheap" graphics card for use in a high-end system. The bottom-line total system cost savings will be minimal.

     

    There are numerous applications that currently benefit from GPU acceleration provided by even a "mid-range" priced GPU.

     

    I would give serious consideration to purchasing an NVIDIA Quadro or ATI FirePro graphics card, both of which support a 30-bit display path using DisplayPort connectors. Here are two midrange choices you might want to consider:

     

    ATI FirePro V4900 ~$160

    (1-Dual-link DVI, 2 DisplayPort, DisplayPort to DVI adapter)

     

    NVIDIA Quadro 600 ~$150

    (1-Dual-link DVI, 1 DisplayPort, DisplayPort to DVI adapter, DVI to VGA adapter)

     

    You will need to spend about 2x the price for the next performance level GPU, such as the NVIDIA Quadro 2000 and ATI FirePro V5800 (both about $375).

     

    You can check both CPU and GPU performance at PassMark for comparisons:

     

    http://www.passmark.com/

     

    The i7-3770K quad-core processor may perform as well as the i7-3930K six-core processor with LR4, based on reports by other users in this forum. This seems to be due to LR's inability to efficiently use more than four cores with hyper-threading enabled.

     

    Message was edited by: trshaner Added processor comment.

     
  • Feb 21, 2013 1:36 AM   in reply to RuisKukka

    RuisKukka wrote:

     

    Looking at it purely from this point of view, do you think that a dedicated GPU would speed up LR4 compared to the integrated GPU of the 3770K? Based on previous replies I understood that LR4 calculates the frame buffer on the CPU and sends it as ready data to the GPU to drive the monitor, i.e. the performance of the GPU does not matter as long as it does the job.

    I think it may be a little bit faster, but not a lot. Why do I say that? Because a rip-snortin' graphics card will have faster memory than the rest of the system, and hardware to match. Thus, theoretically, graphics data can be pushed to the graphics subsystem faster by the CPU (fewer "wait states"...) and pulled from memory faster by the graphics card. Disclaimer: although I used to understand the hardware in considerable detail (decades ago), I have not kept current, so take this with salt... That said, my personal experience when removing a high-end graphics card from a system (to try onboard instead): not noticeably slower. The study was not scientific. Maybe trshaner or another engineer in the forum (ssprengel?) understands it better.

     

    PS - I never put the high-end graphics card back in, since high-end graphics cards tend to be more problematic than onboard, due to more complex drivers, etc. (and I don't do enough 3D to talk about, which is where high-end graphics cards really shine).

     

    Cheers,

    Rob

     
  • Feb 21, 2013 1:30 PM   in reply to RuisKukka

    RuisKukka wrote:

     

     

    Looking at it purely from this point of view, do you think that a dedicated GPU would speed up LR4 compared to the integrated GPU of the 3770K? Based on previous replies I understood that LR4 calculates the frame buffer on the CPU and sends it as ready data to the GPU to drive the monitor, i.e. the performance of the GPU does not matter as long as it does the job.

     

    No. For use with LR, the integrated Intel HD Graphics 4000 in the 3770K processor will perform the same as a high-end graphics card. It does use a small amount of system memory, but since you are installing 16 GB of system memory this is not an issue. For Photoshop CS4-CS6 the integrated HD 4000 GPU should also perform very well, but it will not support a 30-bit display path.

     

    If you're just going to use the system with LR then there is no benefit or need to purchase a PCI-e graphics card at this time. If at some future date LR adds GPU support, then it would make sense to purchase a dedicated PCI-e graphics card based on recommendations from Adobe (i.e. LR5 maybe?). It would also make sense to do so if you purchase PS CS6, which currently supports a 30-bit display data path.

     

    A 2560x1440 high-resolution display will slow down LR, especially in full-screen mode or as a second display with a large window size. The best solution for now is to pull in the right, left and bottom panels to make the display preview size smaller. This will help reduce the display rendering time. There should be no need to purchase a second, lower-resolution display for use with LR. Other LR forum members currently using a 2560x1440 display can perhaps provide other suggestions. Please let me know if you have any other questions.

     
  • Feb 21, 2013 2:14 PM   in reply to RuisKukka

    Hi,

     

    It seems like "I started" an interesting discussion...

     

    I use an ATI HD6450, which cost around 70 bucks when I bought it. It drives my 1920x1200 display perfectly and plays HD 1920@30 frames.

     

    I tried to find something on the web to confirm my feeling that the GPU and CPU sharing the same memory interface might be a performance problem, but I failed. So I tried to benchmark my GPU with LR. I used one of my 18 Mpx CR2 raw files and made a lot of different adjustments. I pegged my i5-2500K@4.3GHz at 100% for a long time (2 minutes) during my adjustments, but the GPU never hit more than 20% regardless of "fit to screen", "1:1", "4:1" or "8:1". So this just shows what we already knew: LR uses only the CPU, and even a cheap GPU cannot slow it down.

     

    Btw, I found out that LR in 1:1 or x:1 view only uses one core of my CPU in Develop mode. If you pan around the image a little, you can see this in Task Manager (or Process Explorer).
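
    If you want to reproduce that observation without watching Task Manager, here is a rough sketch (my own, assuming Python with the psutil package installed) that logs per-core CPU load while you pan around in 1:1 view:

        # Hypothetical helper, not part of Lightroom: samples per-core CPU load
        # so you can see whether LR keeps one core busy or all of them.
        # Requires: pip install psutil
        import time
        import psutil

        def log_per_core_load(seconds=30, interval=1.0):
            end = time.time() + seconds
            while time.time() < end:
                # one utilisation percentage per logical core, sampled over `interval`
                loads = psutil.cpu_percent(interval=interval, percpu=True)
                print(" ".join(f"{load:5.1f}" for load in loads))

        if __name__ == "__main__":
            log_per_core_load()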

     

    So, to come back to your question: I (me!) would buy a cheap card just for my "feeling". But you can try without one first, and if it "hurts", buy a card.

     
  • Feb 21, 2013 4:06 PM   in reply to trshaner

    trshaner wrote:

     

    ...the same...

    Are you sure there is no performance difference?

     

    Although my experience is that there is no noticeable difference (observed unscientifically), my theoretical understanding is that there should be a minor performance improvement from using the dedicated (faster-than-system) RAM of a graphics card, i.e. the purpose of graphics card memory is not just to avoid using system RAM (and it is not only of benefit for 3D).

     

    I am confessedly out on a limb here, still - I'm having some doubts...

     

    Cheers,

    Rob

     
  • Feb 21, 2013 10:11 PM   in reply to RuisKukka

    You're welcome for my part.

     

    I think it's fair to say, regardless of the actual performance difference in video updating using a card vs. onboard graphics, that that piece of the puzzle is not much of a bottleneck, and therefore has virtually negligible impact. Although just what the primary bottlenecks are remains somewhat of a mystery, since people with rip-snortin' CPUs don't get as much improvement as it seems they should, and when upgrading to SSDs or massively parallel RAID configurations, they are still often disappointed.

     

    Not sure 'zactly... anyway: my advice: don't get your hopes up too high for the new system - Lr can be kinda sluggish even on top-flight equipment.

     

    R

     
  • Feb 22, 2013 2:26 AM   in reply to Rob Cole

    Rob Cole wrote:

     

    Lr can be kinda sluggish even on top-flight equipment.

     

    Indeed. My new i7-3820 / SSD / 32 GB system, which I initially had such high hopes for, is beginning to show lag in some circumstances. Not bad, but...slightly annoying. I'm trying to track down what software I might have added lately.

     

    Another fun fact: the massive files from the Nikon D800 seem to make no difference whatsoever. Even my remaining "slow" machine, an i5, behaves identically whether D300, D700 or D800.

     
  • Feb 22, 2013 2:41 AM   in reply to twenty_one

    twenty_one wrote:

     

    an i5, behaves identically whether D300, D700 or D800.

    Amazing how varied the performance reports are. The D800 takes ~twice as long to process (render raw) as the D300 on my AMD machine.

     
  • Feb 22, 2013 2:54 AM   in reply to Rob Cole

    What a mess this is. I'm glad it's not my job to sort it out. There are probably about fifty different root causes for the perceived slowness.

     
  • Feb 22, 2013 2:58 AM   in reply to twenty_one

    twenty_one wrote:

     

    There are probably about fifty different root causes for the perceived slowness.

    Probably.

    ~R.

     
  • Feb 22, 2013 4:01 AM   in reply to Rob Cole

    Rob Cole wrote:

     

    trshaner wrote:

     

    ...the same...

    Are you sure there is no performance difference?

     

    Although my experience is that there is no noticeable difference (observed unscientifically), my theoretical understanding is that there should be a minor performance improvement from using the dedicated (faster-than-system) RAM of a graphics card, i.e. the purpose of graphics card memory is not just to avoid using system RAM (and it is not only of benefit for 3D).

     

    I am confessedly out on a limb here, still - I'm having some doubts...

     

    Cheers,

    Rob

     

    Rob, how many people in this forum with poor LR performance have posted, "LR runs much faster now that I installed a new high-end graphics card!"

     

    LR is currently CPU-bound concerning display rendering. A high-end graphics card with its own dedicated GDDR5 graphics memory will definitely outperform the Intel HD 4000 integrated graphics. But I seriously doubt you would be able to see any difference in LR's performance, let alone measure that difference. Also, the Intel HD 4000 GPU is about 55% faster than the previous HD 3000 GPU and is in fact good enough for some gaming applications:

     

    http://www.anandtech.com/show/5771/the-intel-ivy-bridge-core-i7-3770k-review/8

     

    The OP is better served keeping his system as simple as possible, since throwing huge amounts of resources ($s) at LR doesn't appear to help much. As mentioned by twenty_one, "There are probably about fifty different root causes for the perceived slowness." Our only hope is that Adobe eventually adds GPU acceleration to LR, which will then benefit from humongous dedicated graphics adapters.

     

    Peace Brothers

     
  • Feb 22, 2013 4:26 AM   in reply to trshaner

    This is one of the most useful discussions I've read for a long while! I've often wanted to know more about how LR uses the graphics card, and what grade of card is best. Now it seems that no card at all may be best, since it avoids the driver problems that seem so common. So I'm just about to take mine out and see what happens.

     

    Thanks everyone,

     

    Bob Frost

     
  • Feb 22, 2013 4:37 AM   in reply to trshaner

    trshaner wrote:

     

    how many people in this forum with poor LR performance have posted, "LR runs much faster now that I installed a new high-end graphics card!"

    None. But that wasn't the question.

     

    I know, you know, we all know (those who've read this thread anyway), Lr won't run much faster with a high-end graphics card.

     

    The question was: will it run any faster? My theory is that it will, but I can't back that claim with any certainty.

     

    From your previous response it sounded like you were claiming there would be *absolutely NO performance benefit whatsoever*. Maybe you were exaggerating, or maybe you meant that literally. If the latter, maybe you have some knowledge/experience to share? Thus the question.

     

    Granted, it's somewhat academic, so feel free to ignore.

     

    R

     
  • Feb 22, 2013 4:37 AM   in reply to bob frost

    Integrated GPUs require drivers and can have driver issues just like graphics cards.

     

    Before you take the graphics card out, make sure you have the latest driver for the integrated graphics on your system. In some cases you may need to use the driver supplied by the system or motherboard manufacturer rather than the driver downloaded from AMD or Intel. You will also need to uninstall the current driver(s) for your graphics card before removal.
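
    If it helps, here is a quick way to confirm which display adapter and driver version Windows is actually using before and after the swap (a rough sketch, assuming Python on a Windows system where the built-in wmic utility is available):

        # Sketch: list the display adapter(s) and driver versions Windows reports,
        # by shelling out to the wmic utility. Run it before removing the card and
        # again afterwards to confirm the integrated-GPU driver is the one in use.
        import subprocess

        result = subprocess.run(
            ["wmic", "path", "win32_videocontroller", "get", "name,driverversion"],
            capture_output=True, text=True, check=True,
        )
        print(result.stdout)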

     
  • Feb 22, 2013 4:47 AM   in reply to Rob Cole

    Rob Cole wrote:

     

    trshaner wrote:

     

    how many people in this forum with poor LR performance have posted, "LR runs much faster now that I installed a new high-end graphics card!"

    The question was: will it run any faster? My theory is that it will, but I can't back that claim with any certainty.

     

     

    Perhaps Bob Frost can provide feedback here after he removes his graphics card and tries LR using the integrated GPU?

     

    I will be installing a new NVIDIA Quadro 600 graphics card in my i7-860 quad-core Windows 7 system this morning. My current graphics card is a very low-end NVIDIA GeForce G210.

     

    PassMark - G3D Mark:

        Quadro 600      683    $144.99
        GeForce G210    178    $63.13*

     

    The Quadro 600 card has almost 4x the performance of my current GeForce G210 card. I'll report my results here as well.

     

    Message was edited by: trshaner Added info on card upgrade

     
  • Feb 22, 2013 8:25 AM   in reply to trshaner

    I have completed installation and testing of the NVIDIA Quadro 600 graphics card upgrade from my OEM NVIDIA GeForce G210 card.

     

    Initial testing in LR4.3 with the faster NVIDIA Quadro 600 graphics card exhibited slower overall performance. What the heck is going on? I downloaded the latest NVIDIA drivers (311.15) from the NVIDIA website and selected ‘Remove previous versions’ in the install options. I had also manually uninstalled the previous NVIDIA driver before removing the original card, just to be safe.

     

    My first thought was to check my Eye-One monitor calibration profile, since it probably was no longer associated with the display. Sure enough, the profile showing was a generic HP 2509 LCD Monitor profile (HP_2509.icm), but that shouldn’t slow down LR’s display output rendering, should it? I reassigned my Eye-One calibration profile and relaunched LR. The NVIDIA Quadro 600 benchmarks are now exactly the same as with the original NVIDIA GeForce G210 graphics card.

     

    System specs: LR4.3, Canon 5D MkII raw CR2 image files (21.1 Mp), Windows 7 64-bit, 12 GB memory, i7-860 quad-core, 1920x1080 display, two 7200 RPM SATA2 HDDs with more than 50% free space each.

     

    (All times were measured from mouse-click to when the onscreen image reached final sharpness.)

     

    Library Module (Browsing images)

     

    Fit View:     .25 sec.

    1:1 View:    .50 sec.

    Full-Screen: 1.0 sec. (1:1 View)

     

    Develop Module (Browsing images)

     

    Fit View:      2.0 sec.

    1:1 View:     5.0 sec.

    Full-Screen: 7.0 sec. (1:1 View)

     

     

    Develop Module (Adjusting images in Fit View)

     

    Temp Slider:   .50 sec. (LR defaults)

    Temp Slider: 1.0 sec. (Luminance = 20)

    Temp Slider: 1.5 sec. (Luminance = 20, Purple Hue Defringe = 5)

     

    EXECUTIVE SUMMARY:

    The takeaway from this test is that graphics adapter performance seems to have little effect on LR’s display rendering performance, but the display profile does.

     
  • Feb 22, 2013 10:54 AM   in reply to trshaner

    http://www.adobe.com/content/dam/Adobe/en/products/creativesuite/production/cs6/pdfs/adobe-hardware-performance-whitepaper.pdf

     

    Of course, nothing in this white paper concerns Lightroom. From the get-go Adobe has claimed that Lightroom uses the CPU, and they have not changed that.

     

    http://www.videoguys.com/Blog/E/Daves+Monster+Video+Editing+Computer+Build/0x57b98f4b5bdf4179804211fcc3a6de0c.aspx

     

    This is a Videoguys recommended build of a Windows-based video editing computer. The graphics card is a GeForce GTX 680, which they claim is the card of choice for DaVinci Resolve editors.

     
  • Feb 22, 2013 3:59 PM   in reply to kwdaves

    The OP made no mention of video editing or gaming as part of his system requirements. The GeForce GTX 680 is a great card for both purposes, but the GeForce cards only support a 24-bit display data path. The NVIDIA Quadro and ATI FirePro graphics cards also support a 30-bit display path, which the OP did mention. These two lines also have drivers that are tailored for graphics acceleration with AutoCAD, Photoshop and other high-end applications (and perhaps LR5?). The drivers and boards are designed for higher stability. Gaming cards are designed for maximum performance at the sacrifice of some reliability (i.e. the GPU is overclocked by design).

     

    I'm not saying you shouldn't use GeForce cards with LR, just pointing out the differences.

     
  • Feb 23, 2013 4:42 AM   in reply to trshaner

    trshaner wrote:

     

    Perhaps Bob Frost can provide feedback here after he removes his graphic card and tries LR using the integrated GPU?

     

     

    Just done it! It's an SZ77R5 Shuttle that I use for lectures, with an i7-3770K Ivy Bridge CPU, 16 GB RAM, and an old EVGA 8800GT graphics card.

     

    Took the 8800GT out - lots more room inside now - and am using the Intel HD4000 integrated GPU. LR worked fine immediately, if anything a tad faster, but CS6 did not like the Win8 built-in driver and kept 'not responding'. However, after installing the HD4000 driver that Shuttle supplies for Win8, CS6 now works fine as well. I haven't done any detailed comparison, but there are no obvious slowdowns.

     

    Can't try this on my main desktop, because that is an i7-3930K Sandy Bridge-E without an integrated GPU. I may try a Quadro on that for CS6; the 'old' Radeon 6870 has problems with the advanced GL.

     

    Bob Frost

     
  • Feb 23, 2013 8:27 AM   in reply to bob frost

    Thanks Bob! This confirms that even an integrated GPU that uses system memory shows no signs of slowing down LR. If there were some way to accurately measure the difference, it would probably be <1%. Here's my reasoning:

     

    A high-resolution monitor (2560x1440) has slightly less than 4 Mp and requires about 11 MB of LR-processed RGB (24-bit) display data in full-screen mode. Even the older Sandy Bridge platform has a 20 GB/sec system memory bandwidth, not including the effect of the L3 cache also used by the integrated GPU. The time required to write and read 11 MB is (11/20000) x 2 = 0.0011 sec, or about 1 millisecond. That's a tiny amount of memory overhead for "real-time" display updates when browsing images or making adjustments in the Develop module.
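
    For anyone who wants to check that arithmetic, here is the same back-of-envelope calculation written out (a rough sketch of the reasoning above, assuming a 24-bit frame buffer and the quoted 20 GB/sec memory bandwidth):

        # Back-of-envelope check of the memory-overhead estimate above.
        # Assumptions: 2560x1440 display, 3 bytes/pixel (24-bit RGB),
        # 20 GB/sec system memory bandwidth, frame data written once and read once.
        width, height = 2560, 1440
        bytes_per_pixel = 3                                   # 24-bit RGB
        frame_bytes = width * height * bytes_per_pixel
        print(f"frame buffer: {frame_bytes / 1e6:.1f} MB")    # ~11.1 MB

        bandwidth = 20e9                                      # bytes/sec
        transfer_time = (frame_bytes / bandwidth) * 2         # write + read
        print(f"write + read: {transfer_time * 1e3:.2f} ms")  # ~1.1 ms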

     

    The NVIDIA Quadro line provides good support and stability, but its primary advantages are the 30-bit display path support and AutoCAD support. If you don't need those features, one of the newer GeForce cards will provide a more cost-effective solution. Here are the Adobe-approved GeForce cards for PS CS6:

     

    nVidia GeForce 8000, 9000, 100, 200, 300, 400, 500, 600 series

    

     

    PassMark - G3D Mark:

        Quadro 600         684      $144.99
        GeForce GTX 650    1,804    $108.88

     

    The above GTX 650 has 2.6x the performance of the Quadro 600 at a lower price. But it also requires two card slots, uses 60% more power (64 W vs. 40 W) and has a maximum operating temperature of 98° C (ouch!).

     

    The svelte Quadro 600 is low-profile, requires only one card slot, and runs cooler. I was quite surprised at the size of this card, which is no larger than the GeForce G210 card it replaced. Using the NVIDIA Control Panel, the GPU operating temperature has never exceeded 50° C in my system. As a design engineer by profession, I like it! You're paying for better drivers and getting a more reliable board design.

     
  • Feb 23, 2013 8:40 AM   in reply to trshaner

    Thanks for those comments. I was thinking of getting the Quadro 2000D, mainly because of the twin DVI outputs; I use two Eizo CGs with 10-bit hardware and DVI inputs. But I am no expert on this stuff, and it is twice the price of the 600.

     

    Bob Frost

     
  • Feb 23, 2013 11:28 AM   in reply to bob frost

    ....and almost twice (1.9x) the performance:

     

    PassMark - G3D Mark:

        Quadro 600      684      $144.99
        Quadro 2000     1,301    $399.99

     

    Only the DisplayPort interface supports a 30-bit display data path, not DVI. The Quadro 600 will allow you to drive one display with 30-bit support on the DisplayPort and the other display with 24-bit color on the DVI port.

     

    If you want both displays to support 30-bit color then the Quadro 2000 is the right choice, since it has 2 DisplayPort and 1 DVI interfaces. DisplayPort will become the standard interface, and it allows interfacing with DVI, HDMI, and VGA using inexpensive adapters. Both the Quadro 600 and 2000 come with a DVI-to-VGA adapter and a DisplayPort-to-DVI adapter.

     

    This document provides more details on enabling 30-bit display support with NVIDIA Quadro cards:

     

    http://nvidia.custhelp.com/app/answers/detail/a_id/3049/~/how-to-enable-30-bit-color-on-windows-platforms

     

    

     

    Message was edited by: trshaner Added PDF link for 30 bit color info.

     
  • Feb 23, 2013 11:37 AM   in reply to trshaner

    Only DisplayPort interface supports a 30 bit display data path, not DVI.

     

    Are you sure about this? I read that dual-link DVI (which the 2000D is) supports up to 48 bits per pixel.

     

    "Up to 48 bits per pixel are supported in dual link DVI, and is optional. If

    a mode greater than 24 bits per pixel is desired, the least significant bits

    are sent on the second link."   From Wikipedia

     

    Bob Frost

     
    |
    Mark as:
  • Currently Being Moderated
    Feb 23, 2013 12:06 PM   in reply to bob frost

    Dual-link DVI uses two 24-bit data paths running in parallel to double the bandwidth. This only allows support for higher-resolution 24-bit displays.

     

    This is what is listed on page 5 of the NVIDIA 30-bit Technology PDF at the link I provided:

     

     

    DisplayPort: DisplayPort allows a mode with 10 bits per component or 30-bit color. This is the approach explained in this paper.

     

    Dual-link DVI: DVI will not natively support 30-bit deep color. It is possible to enable 30-bit color over DVI using device specific pixel packing method. In this approach, off screen buffers are used for rendering at a deep color value and a fragment shader is used to implement the monitor-specific pixel packing. Since this approach is display specific it will not be covered by this paper.

     

    I personally have never run across any application using higher than 24 bit color over DVI, and no commercial displays I am aware of support more than 24 bit color over DVI.

     

    I used to work for an Intel Embedded Alliance system manufacturer and display support was one of my pet peeves. I used to talk directly with Intel's Embedded Engineering team concerning display support issues (like this) that our customers would encounter.

     

    Don't your Eizo CG displays have DisplayPorts?

     

    Message was edited by: trshaner Added NVIDIA 30-bit PDF text.

     
  • Feb 23, 2013 12:13 PM   in reply to trshaner

    Dual-link DVI uses two 24-bit data paths running in parallel to double the bandwidth. This only allows support for higher-resolution 24-bit displays.

     

    OK I think I've understood that now!

     

    Don't your Eizo CG displays have DisplayPorts?

     

    No, they are getting old! Can't afford to keep replacing those. If I dig out the manuals, I'll probably find they are 24-bit with 30-bit internal calculations.

     

    Thanks for the info, nice to have an expert around when you need one.

     

    Bob Frost

     
  • Feb 23, 2013 12:32 PM   in reply to trshaner

    OK, I did some more research. NVIDIA Quadro only supports 30-bit color over DisplayPort, but AMD/ATI FirePro does support 30-bit color over DVI with some Eizo displays. One downside to using dual-link DVI with 30-bit color is the maximum screen resolution at 60 Hz: for a widescreen 16:10 ratio it is 2,098 × 1,311 pixels, for a 4:3 ratio 1,915 × 1,436 pixels, and for a 5:4 ratio 1,854 × 1,483 pixels.
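
    As a rough sanity check on those limits (my own assumption about how they are derived, not something stated in the AMD document): if the 30-bit pixel-packing scheme caps dual-link DVI at the single-link 165 MHz pixel clock, then at 60 Hz, ignoring blanking intervals, you get about 2.75 Mpixels per frame, which matches the figures above:

        # Sketch: reproduce the quoted 30-bit-over-dual-link-DVI resolution limits.
        # Assumption (mine): the pixel-packing scheme caps throughput at the
        # single-link 165 MHz pixel clock; blanking intervals are ignored.
        pixel_clock = 165e6                       # pixels/sec
        refresh = 60                              # Hz
        pixels_per_frame = pixel_clock / refresh  # ~2.75 Mpixels

        for name, aspect in [("16:10", 16 / 10), ("4:3", 4 / 3), ("5:4", 5 / 4)]:
            width = (pixels_per_frame * aspect) ** 0.5
            height = width / aspect
            print(f"{name}: {width:.0f} x {height:.0f}")
        # prints roughly 2098 x 1311, 1915 x 1436, 1854 x 1483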

     

    See these two links for more information. I would also contact Eizo Tech Support to see what they recommend for your specific display models.

     

    http://www.amd.com/la/Documents/ATI_FirePro_Adobe_10-Bit_FAQ_030910.pdf

    

     

    http://www.eizo.com/global/support/compatibility/photoshopcs6_nvidia_amd/index.html

     
  • Feb 23, 2013 10:32 PM   in reply to RuisKukka

    I run Lightroom 4 on a 2009 MacBook Pro, and have found that switching the GPU to discrete mode (so that it taps into the GeForce 9600 GT card) causes the fans to ramp up and the machine to get very hot, but doesn't speed up Lightroom in the slightest.

     

    Don't waste your money on an expensive video card.

     
  • Feb 24, 2013 12:39 AM   in reply to RuisKukka

    Bob,

     

    Thanks for the test, very interesting. What was the resolution of the monitor(s) you were using in this test? And how many simultaneously?

     

    Just one Eizo on the Shuttle when at home (a Canon projector on the road). The monitor is a CG222W, 1680x1050. Not sure what the limits of the Intel HD4000 are.

     

    Bob Frost

     