I am writing this email on my video editing laptop: a 2.4/3.4 GHz i7-4700HQ (four cores with Hyper-Threading), 24 GB of RAM, a Samsung 840 Pro SSD for the OS and applications, and a Samsung T1 USB 3.0 SSD for all the project files. It handles 4K XAVC-S at 100 Mbit/s from my FDR-AX100 camera with ease.
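A quick back-of-the-envelope check shows why the external SSD is not the bottleneck here. The 100 Mbit/s figure comes from the camera spec above; the T1's sustained throughput is my own ballpark assumption, not a measured number:

```python
# Can a USB 3.0 SSD keep up with 100 Mbit/s XAVC-S? (sketch, not a benchmark)
BITRATE_MBIT = 100              # XAVC-S 4K bitrate from the FDR-AX100
mb_per_sec = BITRATE_MBIT / 8   # megabits -> megabytes: 12.5 MB/s per stream

# Assumed sustained read speed for a Samsung T1 over USB 3.0 (ballpark figure).
T1_THROUGHPUT_MB = 400
streams = T1_THROUGHPUT_MB // mb_per_sec  # how many such streams the drive could feed

print(mb_per_sec)  # 12.5
print(streams)     # 32.0
```

One 4K stream uses only a few percent of the drive's bandwidth, which is why the heavy lifting falls on the CPU, not the disk.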
I think your path into the Xeon is admirable but extremely expensive.
- You are correct: it is CPU plus disk transfer rates, and maybe some tuning.
- Your "ideal" dual Xeon with 2 x 18 cores plus Hyper-Threading (2 x 18 x 2 = 72 threads) is not effective with Premiere; the second CPU will not be used. I would consider any Xeon system overkill hardware-wise.
- What is the model of your motherboard?
- Where are your project files located?
- If you have CC 2015.3 loaded, have you unchecked the Intel H.264 acceleration option?
- Your system is old, but going to a dual Xeon system is unnecessary.
- Have you tried a simple overclock of your CPU?
- Look at one of the newer X99 motherboards and stick with a single-CPU system (my personal opinion, and what my own hardware is).
Here is a frame grab of a 4K sequence with three cameras (one 4K, two 1080) where I played the timeline starting on camera 3, switching to camera 2, and then a longer run on the 4K camera 1. Notice that the green dot shows zero dropped frames (playback was 2 minutes, or 3600 frames), and it is set for full-resolution playback.
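The frame count above is consistent with standard 30 fps footage (the frame rate is my inference from the numbers, not stated in the screenshot):

```python
# 2 minutes of playback at 30 fps (typical for FDR-AX100 4K footage)
minutes = 2
fps = 30
frames = minutes * 60 * fps
print(frames)  # 3600, matching the dropped-frame counter window
```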
CPUs are not designed to decode or encode H.264, so it takes a powerful CPU to brute-force its way through the H.264 codec. Think square peg, round hole. Video cards or $5-10 H.264 decoding chips, by contrast, do this with ease. Premiere has a new hardware H.264 decoding option, but the web page also says it's only for Intel Iris graphics, which is usually found in laptop CPUs. Intel has been pushing this for years, as they are fully aware that the CPU is bad at encoding and decoding. Regular desktop i7 CPUs normally have Intel HD graphics, and I think it was only recently that Skylake added 4K H.264 decode. I haven't seen anyone test the Premiere H.264 decode option with Skylake, but if it's true that Premiere requires Iris graphics, then the Skylake HD graphics may not work. The 6-core and higher i7 CPUs and the Xeons have no Intel graphics at all, so even if Adobe updates Premiere to use Intel HD graphics in the future, those CPUs wouldn't work with that option.
So the current options are: a powerful CPU to overcome the inefficient decoding of H.264, or transcoding the footage to a CPU-friendly codec such as Cineform or DNxHR. Cineform has the advantage of being GPU-accelerated, so it should take some of the workload off the CPU and use the GPU for better performance. You could try this on your current computer if you want to avoid an upgrade. Premiere now has proxy and transcode-on-ingest options, so there are several ways to go about transcoding or using proxies.
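As a sketch of the transcode route: ffmpeg (my choice of tool, not mentioned in the thread) can convert an XAVC-S clip to DNxHR outside of Premiere. The command is built as a Python list so the flags are easy to inspect; the filenames are hypothetical placeholders:

```python
# Sketch: transcode an XAVC-S clip to DNxHR LB (a light, edit-friendly profile).
# Requires ffmpeg to be installed; filenames are placeholders.
src = "C0001.MP4"         # hypothetical XAVC-S source clip
dst = "C0001_dnxhr.mov"   # DNxHR output for editing

cmd = [
    "ffmpeg", "-i", src,
    "-c:v", "dnxhd", "-profile:v", "dnxhr_lb",  # DNxHR low-bandwidth profile
    "-pix_fmt", "yuv422p",                      # DNxHR expects 4:2:2 chroma
    "-c:a", "pcm_s16le",                        # uncompressed audio for editing
    dst,
]
print(" ".join(cmd))
# To actually run it: subprocess.run(cmd, check=True)
```

DNxHR LB is the lightest flavor, suited to proxies; a finishing-quality transcode would use `dnxhr_hq` instead at the cost of much larger files.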
If you want to avoid transcoding and go for CPU power, then you will want to prioritize high clock speeds first and core count second. That means an overclocked i7 Extreme Edition CPU running at 4 GHz+. The i7-6800K is a 6-core, the i7-6900K is an 8-core, and the i7-6950X is a 10-core. The 10-core is priced pretty high, probably to avoid undercutting Xeon sales; you could also think of it as a "bad software" tax for Adobe's programs. The faster Xeons are usually limited to around 3 GHz, so even with more cores they can be slower in many scenarios, since Premiere doesn't always make proper use of higher core counts. ECC memory was a big deal a long time ago, but with current desktop memory being more reliable, it's not as critical. The i7 Extreme Edition CPUs can also use faster memory, like DDR4-3000, to help Premiere work a bit better. Many forum members are using X99 i7 builds with fast desktop memory.
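The clocks-versus-cores tradeoff described above can be illustrated with an Amdahl's-law-style calculation. The 80% parallel fraction is purely an illustrative assumption, not a measured Premiere figure:

```python
# Amdahl's-law sketch: overclocked i7 vs a slower many-core Xeon.
# parallel_fraction is an illustrative assumption, not a Premiere benchmark.
def effective_speed(ghz, cores, parallel_fraction=0.80):
    """Rough throughput score: the serial part runs on one core, the rest scales."""
    serial = 1 - parallel_fraction
    return ghz / (serial + parallel_fraction / cores)

i7_8core = effective_speed(4.0, 8)    # overclocked i7-6900K-class chip
xeon_14c = effective_speed(3.0, 14)   # a typical faster Xeon's clock ceiling

print(round(i7_8core, 1), round(xeon_14c, 1))  # 13.3 11.7
```

Under this (assumed) scaling, the 4 GHz 8-core beats the 3 GHz 14-core despite having fewer cores, which is the reason the thread keeps recommending clock speed first.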
In this article from Puget Systems, they test various core counts and end up recommending 8-12 cores for 4K. https://www.pugetsystems.com/labs/articles/Adobe-Premiere-Pro-CC-2015-Multi-Core-Performance-Update1-806/
In another Puget article, they show the GTX 1070 outperforming all of the last-generation Maxwell cards and performing roughly on par with the GTX 1080. So perhaps the GPU is in a similar situation to the CPU core count: the GTX 1070 has fewer CUDA cores than the 980 Ti and Titan, but the GTX 1000 series runs at higher clock speeds than the last-gen cards. https://www.pugetsystems.com/labs/articles/GTX-1070-and-GTX-1080-Premiere-Pro-Performance-810/