I don't think anyone can give you a definitive answer. It seems that every computer will react differently. It depends on what's in the computer. You'll just have to give it a try. Don't get more than four cores because Lightroom doesn't handle more than four very well. 16 GB of RAM will also help to speed things up.
Quite frankly, it's not... editing in Develop mode is as fast on my laptop's i7-7700HQ with its integrated Intel HD 630 GPU as it is on my i7-3930K / GTX 1080 Ti desktop. Lightroom doesn't seem to leverage CPU or GPU grunt in any real way.
Scrolling is laggy and slow no matter what the hardware specs are, which is annoying.
Multi-core export processing works well enough, but navigating and editing are now painfully slow, especially compared to two or three years back when the software seemed way more responsive.
Personally, I'm now moving back to Capture One Pro. Download the demo and try it for yourself. It's snappy and responsive right across the board, and its tethering is stupidly quick compared to Lightroom's. Clients won't be left waiting and getting frustrated while you shoot live models, which is how it is now with Lightroom... silly slow.
I'm not sure what's happened of late, but Lightroom, along with Photoshop, seems to have taken a wrong turn. Photoshop again has crazy bugs that slow it down, such as the rulers-on lag bug. I mean, how long have we dealt with that? Turn off the rulers, or suffer...
So, in answer to your question: basically any of your machines will absolutely fly with Capture One Pro, whereas Lightroom won't care what machine you use. I've tested across three machines now: a high-powered Mac desktop, a MacBook Pro and a high-powered Windows laptop, and they're all the same. Slow and laggy...
I wonder if it'd make a difference if you overclock your desktop CPU? I understand that Lightroom is very much a single-thread-bound application.
That's not correct. 4 cores/8 virtual cores are very well utilized during normal processing. Overclocking won't help.
There are performance problems, but anyone who looks closer into it will inevitably conclude that it's not CPU-limited. Low-spec CPUs will often perform better.
This is a six-year-old i5-750, which according to the conventional wisdom here should be severely underpowered for Lightroom CC. This is what it looks like when you go completely crazy with the sliders in Develop:
This is my other machine, an i7-3820 by comparison. I'm not at that machine right now, so it's an older Export screenshot:
There's no way you can look at this and say it's CPU-limited.
I just tried Lightroom on a PC with a Core i7-7700K CPU and a 1080 Ti GPU. It's not much different from my MacBook Pro with the i7-6920HQ (2.9 GHz, turbo to 3.8 GHz); performance is about the same.
I was hoping for a much bigger performance gain. I tried disabling GPU acceleration, but that only helps marginally. Isn't the PC supposed to be a lot faster than my MacBook Pro?
but anyone who looks closer into it will inevitably conclude that it's not CPU-limited.
And others see different types of editing (specifically brushing and spot healing), with large monitors and large original files, overtaxing the CPU.
dj is right. There is a big dependence on the type of editing you do. If you're somebody who likes to do lots of brushing and who does it with all lens corrections turned on, you will very quickly be taxing the system to its max. If all you generally do is the generic sliders, you won't get there. There is also a strong interaction with monitor size (4K and up really does a number) and the resolution of the raw file.
This is the i5-750 again. That's a mid-range CPU released in 2009, at 2.66 GHz.
The file is a D800 file at 7360 x 4912 pixels. It has lens correction turned on, heavy noise reduction (55), a couple of gradients and 4 adjustment brushes. It has a couple of spot removals.
What you see here is the result of moving as many sliders as possible, as fast as possible, over a 30-second period.
Do you still claim that this is CPU-limited?
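For anyone who wants to capture a similar trace on their own machine, here's a rough sketch of how you could sample per-core utilization while scrubbing sliders. This is just a toy (Linux-only, since it reads /proc/stat), not how the screenshots above were made:

```python
import time

def read_cpu_times():
    """Per-core (idle, total) jiffy counters from /proc/stat (Linux only)."""
    cores = []
    with open("/proc/stat") as f:
        for line in f:
            # Skip the aggregate "cpu " line; keep "cpu0", "cpu1", ...
            if line.startswith("cpu") and line[3].isdigit():
                fields = [int(x) for x in line.split()[1:]]
                idle = fields[3] + fields[4]  # idle + iowait
                cores.append((idle, sum(fields)))
    return cores

def average_utilization(duration=30.0):
    """Average per-core CPU utilization (%) over the sampling window."""
    start = read_cpu_times()
    time.sleep(duration)
    end = read_cpu_times()
    usage = []
    for (idle0, total0), (idle1, total1) in zip(start, end):
        delta = total1 - total0
        usage.append(100.0 * (1 - (idle1 - idle0) / delta) if delta else 0.0)
    return usage

# Example: start scrubbing sliders in Develop, then sample for 30 seconds:
#   for core, pct in enumerate(average_utilization(30.0)):
#       print(f"core {core}: {pct:5.1f}%")
```

If all cores sit near 100% for the whole window, the workload is CPU-bound; if they idle while the UI still lags, the bottleneck is elsewhere.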
From the looks of that CPU trace, I would say that it is indeed CPU-limited. The block shape and the maxing out close to 100% (there are always inefficiencies making it hard to actually hit 100%) indicate that it is limited by the CPU and not by other factors (disk I/O, memory, GPU, etc.). Now, you might not see an advantage from going to a faster CPU, as Lightroom is somewhat smart (arguably not smart enough, but I digress) about throttling the resolution and rate at which updates are done when scrubbing sliders, depending on available compute power.
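To illustrate the throttling idea in the abstract (this is purely hypothetical logic, not Adobe's actual code): an app can time each preview render and drop the working resolution when a frame blows its budget, so a slower CPU still feels "smooth" while quietly showing a coarser preview.

```python
import time

def render_preview(width, height):
    """Hypothetical stand-in for a Develop preview render; cost scales
    with pixel count, the way demosaic + corrections would."""
    acc = 0
    for i in range((width * height) // 1000):
        acc += i
    return acc

def scrub_frame(scale, target_ms=33.0):
    """Render one preview at `scale`, then adapt the scale so the next
    frame's render time stays near the budget (~one frame at 30 fps)."""
    w, h = int(7360 * scale), int(4912 * scale)  # D800-sized original
    t0 = time.perf_counter()
    render_preview(w, h)
    elapsed_ms = (time.perf_counter() - t0) * 1000.0
    if elapsed_ms > target_ms:
        scale = max(0.125, scale * 0.5)   # slow machine: coarser preview
    elif elapsed_ms < target_ms / 4:
        scale = min(1.0, scale * 1.5)     # fast machine: restore detail
    return scale, elapsed_ms

scale = 0.5
for _ in range(5):  # five slider updates in a row
    scale, ms = scrub_frame(scale)
```

Under a scheme like this, both a fast and a slow machine would peg their CPUs and feel similar; the difference shows up in preview quality rather than in raw responsiveness.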
Then why is this machine slightly faster than the i7-3820, two generations newer, at 3.6 GHz (nearly a GHz faster)?
It just doesn't make any sense to me. The explanation has to be somewhere else.
Why, for instance, is the biggest apparent slowdown a simple switch from one image to the next? That's what really takes time, and that's where the biggest frustration is. Could it be that ACR cache I/O is the real bottleneck here?
My guess would be that the newer machine actually tries to do the updating while scrubbing at a faster rate or higher resolution. It's also important to realize that different things can have different limiting factors: switching between images could be limited by I/O, while scrubbing sliders in Develop could be limited by the CPU. So this is quite a complex equation, and you have to define what you're talking about when looking into limitations.

Also, when you switch from Library to Develop, there are multiple I/O steps. First, the raw file is loaded from disk; if it's in the cache on a fast SSD, this should be quite fast. Then the raw data has to be loaded onto the GPU, which is apparently a major issue, leading to the noticeable difference between having graphics acceleration turned on or off when switching modules. Lastly, the Develop module code is loaded, the image's editing history is loaded, and the panels are drawn. It's pretty clear that Adobe hasn't optimized this Library-to-Develop step very well, as you can see everything getting drawn even on the fastest machines. This should be basically instantaneous on modern systems.
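One way to convince yourself that different steps have different limiting factors is to time them separately. Here's a toy sketch; the random file and the compression pass are just stand-ins for a raw load and a demosaic/noise-reduction pass, not anything Lightroom actually does:

```python
import os
import tempfile
import time
import zlib

def timed(fn, *args):
    """Run fn(*args), return (result, elapsed milliseconds)."""
    t0 = time.perf_counter()
    result = fn(*args)
    return result, (time.perf_counter() - t0) * 1000.0

def read_file(path):
    with open(path, "rb") as fh:
        return fh.read()

# Stand-in "raw file": 8 MB of incompressible random bytes.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(8 * 1024 * 1024))
    path = f.name

# Step 1: disk I/O -- loading the file (fast if cached on an SSD).
data, load_ms = timed(read_file, path)

# Step 2: CPU-bound work -- a compression pass standing in for
# demosaic / noise reduction / lens correction.
_, compute_ms = timed(zlib.compress, data, 6)

print(f"load: {load_ms:.1f} ms, compute: {compute_ms:.1f} ms")
os.unlink(path)
```

Whichever number dominates tells you which upgrade (faster storage vs. faster CPU) would actually help that particular step, which is exactly why "is Lightroom CPU-limited?" has no single answer.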