- The i7-5820K, because of the core count, L3 cache size, and 28 PCIe lanes.
- On the 5820K system, the GTX 970; on the 6600, the GTX 770. The 970 is around 25-30% faster than the 770 because of its newer architecture. See the bottom of this page: Tweakers Page - What video card to use.
- A single 850 Pro is significantly slower than an M.2 drive, but for media cache & previews you could approach similar speeds by using 2 or 3 Samsung 850 Pro 256 GB disks in raid0.
- For precious material that you cannot afford to lose, like media and projects, raid0 is a very risky undertaking, best avoided, especially with failure rates of around 4% as you say. 4 conventional HDDs in raid0 give slightly lower performance than a single SSD: about the same transfer rates, but the SSD has a clear latency benefit. You may be better off using a single Samsung 850 Pro 1 or 2 TB disk for projects and media and using conventional HDDs for very meticulous and regular backups.
Eventually you may want to consider an Areca controller on the 2011-3 platform for a parity raid. In that case, you could opt for 5 Samsung 850 Pro 512GB disks in raid3 (around € 1150 in Europe) to give you 2 TB net space and even better performance than any m.2 drive currently available. That is more attractive than a single 2TB 850 @ € 950.
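The capacity math behind the 5-disk RAID 3 suggestion is easy to sanity-check. The helper below is a hypothetical sketch (the function and its level names are mine, not from any vendor tool); it only encodes the usual rule of thumb that a single-parity array gives up one drive's worth of space:

```python
# Rough net-capacity estimator for the RAID levels discussed above.
# Illustrative only: per-drive sizes are whatever you pass in.

def raid_net_capacity(level: str, drives: int, size_gb: int) -> int:
    """Net usable capacity in GB for a few common RAID levels."""
    if level == "raid0":
        return drives * size_gb          # pure striping, no redundancy
    if level in ("raid3", "raid5"):
        return (drives - 1) * size_gb    # one drive's worth of parity
    if level == "raid1":
        return size_gb                   # mirrored pair
    raise ValueError(f"unsupported level: {level}")

# 5 x 512 GB Samsung 850 Pro in RAID 3, as suggested above:
print(raid_net_capacity("raid3", 5, 512))  # -> 2048, i.e. about 2 TB net
```

Sequential throughput in RAID 0/3/5 also scales roughly with the number of data disks, which is why five SATA SSDs can outrun a single M.2 drive.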
to add to cc_merchants' comments: i would suggest the corsair h110 280mm rad over the h100's 240mm if you plan on going for a max overclock. if not, consider going with air like the noctua d14 or d15 to keep it quieter and avoid the risk of leaks. if you can stick with ssd's for raid instead of hdd's, you can start with 1 or 2 for hd material and when the 4k projects come along, just add more ssd's as needed for space and/or speed. you can use motherboard raid (which is software) to keep costs down for now and then go for an add-in raid card later if you want a larger raid or a better controller. i wouldn't use software raid such as windows storage pools; the benchmarks i've seen for it are very poor.
.....best to use a Samsung 850 Pro SSD for OS, programs, and the windows page file only. Then put ALL ELSE on a brand new Samsung 950 Pro PCIe SSD (media, project files, previews, media cache, cache, and exports). The performance will be as fast as can be achieved due to the 2 GB/sec read and 1.5 GB/sec write speeds. However, the capacity of the 950 Pro will be limited. This means using the 950 Pro as your "work drive" to achieve best performance with real-time playback, rendering previews, or exporting. When a project is completed, you would have to OFFLOAD your valuable files to a large and secure storage location. Of course, previews, media cache, and cache files are easily recreated by PPro, so there is no need to save them. Your valuable and precious original media can be saved along with your project files and exports onto either a single or a pair of large-capacity ENTERPRISE level 7200rpm hard drives. For extra security, you can have two of these drives in a RAID 1 arrangement off the motherboard for "mirroring" to protect your data.
Seagate and other companies make these large drives, which are more reliable than regular consumer drives. Seagate makes up to 6 TB HDDs like this at a good price, with good performance; the 128 MB cache allows the drive to surpass 200 MB/sec read and write speeds. Using this system, you can have fast performance and avoid a costly and complicated RAID solution, which requires an expensive Areca card and MANY hard drives AND spare drives. Only if you are planning to store a tremendous amount of data would you need a large RAID array solution.
Soon, these PCIe SSDs will get even LARGER, and may help eliminate the need for ANY mechanical, spinning hard drives. Currently, the Intel 750 series PCIe SSDs offer a larger capacity, but the performance is lower than the new Samsung 950 Pro, which will be NVMe and able to read AND write at the SAME TIME with its "bidirectional" nature.....
First off, thank you to all who replied. JFPhoton, so here is, I guess, what I am concerned about:
I'm used to working with things like an external SAS array and mechanical disks to get multistream 1080 playback - that's always been about getting around HDD bottlenecks, never really file size (I mean yes, a simple 30-40 minute project can take up a couple TB of space, but that's not the end of the world). The concern I have about putting everything on a 512 GB drive is that, when moving to 4K, I'm looking at dramatically larger file sizes - there's a good possibility this will be coming from a Ninja recorder, so we're looking at insanely large ProRes files. For workflow, would this mean moving those insanely large files onto the mass storage, and then working with small, offline versions of them on the M.2 drive until the edit's done, then reconnecting to the original files? I came into editing at a time when file sizes and storage capabilities meant there was no need to worry about offline vs. online, so this would be a new workflow for me, and I'm wondering / worried about complications (I've already run into the Lumetri memory leak - which is also caused by Red Giant's Denoiser II, if anyone hasn't seen that, and it's maddening beyond belief).
if you plan on making smaller proxy files to edit with, you wouldn't need the speed of the pcie m.2 drive. any single sata ssd should be good; depending on the proxy format, even a hdd may be plenty.
if you plan on working with the originals to avoid wasting time transcoding and avoid any problems with proxy swapping, then you need to figure out storage capacity and speed requirements. for pcie ssd the speed is insane, so that just leaves capacity. consumer pcie ssd currently tops out at 1.2tb with the intel 750; anything larger is enterprise level and extremely expensive. you could also run multiple intel 750's and manually spread the media across them to get higher capacity. if pcie ssd is ruled out by capacity and/or price, that leaves sata ssd in raid. the samsung 850 currently tops out at 2tb capacity, so with a raid card it's possible to build a pretty large capacity ssd raid that would also get fast speeds.
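Figuring those capacity and speed requirements is mostly arithmetic on the codec bitrate. A minimal sketch, assuming roughly 700 Mb/s for UHD ProRes 422 HQ (a ballpark figure; check your recorder's published data rates, as the real number varies with frame rate and flavor):

```python
# Ballpark storage and throughput needs for a 4K ProRes workflow.
# The 700 Mb/s bitrate below is an assumed ballpark, not a spec.

def hours_of_footage_gb(bitrate_mbps: float, hours: float) -> float:
    """Storage in GB for `hours` of footage at `bitrate_mbps` megabits/s."""
    return bitrate_mbps / 8 * 3600 * hours / 1000   # Mb/s -> MB/s -> GB

def streams_supported(drive_read_mb_s: float, bitrate_mbps: float) -> int:
    """How many simultaneous streams a drive's read speed can feed."""
    return int(drive_read_mb_s * 8 // bitrate_mbps)  # MB/s -> Mb/s

print(hours_of_footage_gb(700, 1))   # -> 315.0 GB per hour of footage
print(streams_supported(500, 700))   # -> 5 streams from one ~500 MB/s SATA SSD
```

So even a single hour of UHD ProRes HQ eats roughly a third of a 1 TB drive, which is why capacity, not speed, becomes the deciding factor between pcie ssd and a sata ssd raid.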
for complications you may be worried about: prores is a cpu/gpu edit-friendly codec, but the large file size can place demands on the drives. changing the playback resolution from full to 1/2 or 1/4 will lower the demands on the cpu/gpu if the computer is struggling. if you were using non-edit-friendly codecs or any codec premiere pro struggles with, you may want to transcode those files or, if possible, record to another codec/format. premiere pro has no built-in proxy file system; adobe has committed to online editing. offline to online/proxy files can still be done with the old folder-swap method or by using the offline/relink command in premiere. the relink tool has gotten better but sometimes has issues relinking all the files automatically, and doesn't work well with picture sequences. while adobe has gotten mainstream attention for its software issues (premiere and other programs), the reality is that any software application will have bugs and issues (perhaps just not as many). it's just a matter of finding the workarounds for any program when bugs interrupt your usage. if you did run into a show-stopper issue, you could try using an older version of premiere or switch to another nle. neither option would be able to carry over the premiere project file, only an fcp xml export.
Once again, I thank you for the continuing responses and discussions. I'm going to continue to be "stupid" and ask what may seem like basic questions, but I simply wish to try and synthesize as much information as possible:
1. RAID 3 - I know this is only possible with the Areca controller (at least I haven't seen any other cards with it). Now, if I'm reading the Fuji paper on different RAID levels correctly, RAID 3 offers the speed benefits of RAID 0 and the parity of RAID 5 combined. That being said, it seems that with RAID 3, if the parity disk dies you're toast, whereas RAID 5 is disk-agnostic - any disk can go kaput, and you're fine. Of course, an SSD can do 200-300 TB or so of writing / rewriting (according to TechReport's torture tests on the Samsung series) before one worries about death (which matters when thinking about 4K footage), and with mechanical drives the worry is mechanical failure (and as I stated above, Hitachi enterprise HDDs have a 4% failure rate over three years). So it just doesn't seem sensible to me to have a RAID level where, if your parity disk dies, you're toast?
2. Using the H110 - makes sense. 5-10 degree difference, nothing to configure. Install it and forget about it, whereas the H100i requires configuration. What doesn't make sense to me: it fits the Corsair 500R but not the 400R, yet both have the same top ventilation (the top on either can accommodate 2x 120mm OR 2x 140mm fans, but Corsair's page says the H110 isn't compatible with the 400R - and the only real difference between the two is that the 500R has the massive 200mm side fan).
3. Motherboard RAID - everywhere I have read about it on other forums, it is called "FakeRAID" and generally considered far worse than software RAID. Is this not true in the case of X99 boards?
4. I am totally aware that every major release of an editing program has bugs and problems, and those who code these complicated programs always have my thanks. I was an FCP fanboy all the way through 7, swore by it - and then came FCPX, and I was among the chorus of screamers who wanted Apple's head on a pike. They've done a good job improving it since its original release, and I give them a lot of credit for listening to users, but I still hate it - hence the switch to Adobe. (I'll admit I gave Avid a try and wanted to throw it out a window - maybe I'm a cranky old man in my 30's already?) But not being able to use Lumetri scopes or the Denoiser filter - man, that just freaking hurts.
5. Thank you for clearing up the online / offline issue for me. A holdover from the old HDV days, I guess. You know, I do miss tape and timecode, but god do I love quick transcodes. I do always use 1/2 playback until I get to grading, and that's when I turn on the TV and output full resolution to it (I spent some bucks on a very nice LCD TV so I can get as close as possible to what the final product might look like - a really well-calibrated LCD / LED TV is just indispensable, so much more so than a monitor).
6. And finally, a feature request - add a Kodachrome look to the Lumetri presets, pretty please?
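On the RAID 3 parity question in point 1 above: a dedicated-parity array survives the loss of any one disk, parity disk included, because the parity stripe is just the XOR of the data stripes. A toy sketch (byte strings standing in for disk stripes; purely illustrative, not how a controller is implemented):

```python
# Toy demonstration of XOR parity reconstruction (RAID 3/5 principle).
# Three small byte strings play the role of data-disk stripes.

from functools import reduce

data_disks = [b"\x01\x02", b"\x10\x20", b"\xaa\x55"]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

parity = reduce(xor, data_disks)      # the dedicated parity stripe

# Lose data disk 1: rebuild it from the survivors plus parity.
survivors = [data_disks[0], data_disks[2], parity]
rebuilt = reduce(xor, survivors)
print(rebuilt == data_disks[1])       # True - the lost stripe is recovered
```

If the parity disk itself dies, the data disks are untouched; only the redundancy is lost until the parity is rewritten. The real distinction from RAID 5 is that distributed parity spreads the parity write load across all disks instead of concentrating it on one.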
Have a look at To Raid or not to Raid. It will give you a better understanding of parity raids and distributed versus dedicated parity.
2. I suggest you change the Corsair water cooling for a much quieter, less costly and equally performing air cooler from Noctua, the NH-D15.
3. Again, read the link given above. Software parity raids are slow, excruciatingly so on rebuilds; mirrored and striped arrays are decent. All carry a noticeable CPU load. Motherboard raids are software raids.
the h110i gt or gtx, being newer, may be better; i haven't compared all the various h110 models. the software control may be handy to configure or to use a silent profile while editing. both the 400R and 500R cases say they support 280mm in the top; corsair may not have updated the pages for those older cases when the h110i's came out... so you may have to do some extra checking online for your case. Carbide 500r + H110i GT ? - The Corsair User Forums: this thread suggests it will fit, but also mentions a recall a while back on the h110i gt. i do prefer big air coolers like the noctuas, but if you prefer liquid or want a max overclock the h110i should perform better. you will also want a high quality psu closer to 1k watts for cleaner and more stable power for a max overclock.
motherboard raid has been frowned upon for a long time. it was unreliable and taxed the cpu heavily, but that has been changing. cpus now are multiple times faster and the demand placed on them for raid 0 is very little, and intel has improved the reliability and performance of its raid. it is software "fakeraid" and a real raid card would always be preferred, but on a tight budget it is an option, and it is better than windows raid. if it turns out the motherboard raid isn't reliable, the raid could be dropped and the disks used as single disks. if the speed of raid is still required, then a raid card would have to be purchased and installed.
seems like unix/bsd raid such as zfs is the only really good software raid, but we do not have zfs on windows. microsoft has been trying to copy zfs for well over a decade, and keeps changing its code name as it's gone through many developments and failures. last i checked it's called ReFS and was in windows 8.1 with storage spaces, but the benchmarks i've seen were very poor and not exactly reliable. this may have changed and improved in windows 10, but i haven't looked/heard. there are several companies that made software to replace windows storage spaces/drive extender, to take advantage of its failure - programs like drivepool and drive bender and more. there might be a good solution out there but i don't know which one is best. seems like most of them want to just pool drives like a spanned volume instead of striping to gain performance, and they have varying levels of protection/correction for raid parity. parity raid tasks add a performance hit to the cpu, making software raid with parity more suited for a file server, and not desirable for a workstation in use, such as an editing machine. for raid with parity in a workstation, a dedicated hardware raid card is desirable to handle the extra workload and not tax the cpu.
kodachrome, you can try this free log lut: Everything looks better on KODACHROME – K-Tone LUT | Frank Glencairn
Raid 0 performs the same whether on the onboard controller or a hardware SAS controller. Only Raid 5, 6, and 10 perform better on a raid controller. So it just depends on the type of raid you need or want.