1. BOOT/OS/APPS : 4 x SSD 256GB in RAID 10 for speed and redundancy (giving me 512GB, and read speeds up to 1GB/sec)
You may be hard pressed to find a mobo with 4 SATA 6G ports on the same controller, which is required for a raid10. More common is 2 SATA 6G ports on the Intel controller, and 2 SATA 6G ports on a Marvell controller, which precludes the use of a raid10.
2. RENDER/SCRATCH/CACHE/MEDIA : 4 x SSD (256GB enough?) or 4 x 15K RPM HD (256GB?) in RAID 0 (for speed, speed and speed, giving me 1TB storage, with up to 1.5GB/sec needed?)
Be aware that SSD's have a limited life span when writing to them. After a certain number of writes, they just fail and it is EOS. What kind of controller do you have in mind?
3. STORAGE: 6 x 3TB SAS disks in RAID6 (for read speed and redundancy, giving me 12TB) (or RAID5, giving me slightly less redundancy and 15TB)
Why specifically raid6 or even raid5? At most, depending on the controller and the cache you can expect around 500 MB/s transfer rate and that may be a bottleneck with 4K material, especially if you use multicam, chroma keying and multiple tracks.
thank you for your reply.
3. Last first: very few controllers offer RAID3, which I have read you are fond of. I have also read there is a debate about RAID3 vs RAID5/6 on cost/performance. I am getting a good deal on DELL systems with their controllers, so I am unlikely to follow the RAID3 path, with its need for a special controller. Also, the storage system (RAID5/6) will primarily not be used to test playback of 4K footage; the 2. RENDER/SCRATCH system will be used for this, and/or the 1. BOOT/OS system. The storage system (RAID5/6) will still be on a dedicated, very fast DELL H810P controller. So I still hope to be able to utilize the speed of the disks 100% and play back most footage without problems.
2. Yes, I know about SSD lifespans and the loss of performance over time.
1. Yes, I know about those limitations on most motherboards with embedded controllers. That's why the DELL H710P will handle the internal disks:
The H710P is an "Eight-port internal SATA+SAS solution supporting up to 32 SAS or SATA hard-disk drives (HDDs) or solid-state drives (SSDs)" with "Two internal mini-SAS SFF8088 connectors". It supports: "RAID levels 0, 1, 5, 6, RAID spans 10, 50, 60"
Hence the H710P should be more than sufficient to handle my described scenarios with disks!?
My uncertainty comes in about the needed read/write disk-speeds/setups to be able to playback 4K footage.
Another scenario is of course to have a RAID 10 of 8 x 256GB SSDs. The disk speed here would probably push the controller/PC to the limits, but I am hoping 4 disks in RAID 0 will be enough for 4K editing/playback. And since SSD speed deteriorates over time, I am thinking maybe 2-4 x SSD for boot/OS, and 4 x 15K (or 10K RPM) disks for RENDER/SCRATCH/CACHE; this system does not have to be that large, or have redundancy. You agree??
What codec or format do you plan on dealing with the 4K media in? That ultimately will decide the drive configuration. A 2-drive RAID 0 with SSDs can normally handle most of the 4K playback options fine. However, you would have to wipe the arrays periodically as maintenance to maintain ideal performance with them. An 8-drive RAID 5 would handle all 4K playback options, whether Red or DPX, if you have the right SAS controller. That obviously will be the more expensive solution.
Absolutely no need to create any raid for the OS; that's what an imaging program is for..
and certainly not with multiple SSDs.. no benefit to Adobe, period.
4K is best served with 8 drives in a raid 5, but you had better have some serious horsepower to push that 4K..
To be honest, I am not entirely sure. But most likely Cineform 4K for most of the 4K videos (3840x2160). BUT I also have people asking for QuickTime 4K, which I think must be ProRes 4K. I would also like to be able to do DPX, and am also looking into the new x264pro codec: http://www.x264pro.com/?page_id=10
Also, I will be outputting HQ stills at 4Kx4K 24p for circular-dome-cinema usage, so no codec is needed for rendering, but I will need to be able to preview footage playback after all effects and corrections are applied, preferably in full resolution. And since I do not know of any codec allowing 4K vertical, I am thinking I must be able to play back lossless, which will probably be impossible? But I guess I should be able to work with/edit/preview 2K x 2K before rendering out 4K x 4K stills as a second choice.
Well, the 2-SSD RAID 0 would handle those. However, the player processing may not. I also don't think you're going to be able to export those resolutions outside of a compositor either. I believe Eyeon Fusion can handle that, and you may try Blender to see. However, that is going to be a toss-up.
Ole, what's the bit depth of your 4K frames?
4Kx4K x 24-bit x 24p is already at over 1GB/s if I am not mistaken. Adding an alpha channel or increasing color depth will bring nearly any RAID to its knees. You may need one of those PCIe SSD speed demons and a well tuned system to get sufficient speed for realtime playback - and then there is a question of how you'll be able to view / monitor the full 4K image - I am not aware of any full 4Kx4K monitors short of NHK 8K projection ($$$). And then, as Eric said, there may be issues with the software (player) ability to push that kind of bandwidth out.
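As a quick sanity check on those numbers, the raw data rate of an uncompressed stream follows directly from the frame geometry (a sketch; the figures are payload only and ignore container/filesystem overhead):

```python
# Raw bandwidth of an uncompressed video stream (payload only; no
# container/filesystem overhead). Figures are decimal GB/s.

def stream_bandwidth(width, height, bytes_per_pixel, fps):
    return width * height * bytes_per_pixel * fps / 1e9

# 24 bits per pixel total = 3 bytes (8-bit 4:4:4 RGB or YUV)
print(stream_bandwidth(4096, 4096, 3, 24))  # ~1.21 GB/s
# Adding an 8-bit alpha channel (32 bits per pixel)
print(stream_bandwidth(4096, 4096, 4, 24))  # ~1.61 GB/s
# 16 bits per RGB component (48 bits per pixel)
print(stream_bandwidth(4096, 4096, 6, 24))  # ~2.42 GB/s
```

So even at 8 bits per component the stream is already past 1.2GB/s, and any increase in depth or channels pushes it well beyond most RAID arrays.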
Hence the H710P should be more than sufficient to handle my described scenarios with disks!?
Probably an LSI controller with Dell-flavored firmware (anyone knows?) tuned for everything (IOPS, low latencies, error correction, etc.) except bandwidth. You may want to see if anyone managed to squeeze decent bandwidth out of it, before committing to it - or just get an ATTO R600-series iron. (Don't recommend Areca 1882 series because of this.)
A 2-drive RAID 0 with SSDs will handle the playback of those 4K files, since the latency on the SSD drives is so low. In RAID you get well over 1GB/s with SSDs. The player, though, is really going to be the question. I know Cineform's player had issues before with playback performance and 4K files. This might be one you have to run trials on to see which will handle those files.
Alex, no need to calculate with 24-bit. 8-bit will be more than enough for this project, where the image is displayed through projectors. Not even Blu-ray players or HD displays are capable of anything more than 8-bit. I will have the chance to play back this specific project in a dome theatre where I live, where their proprietary PC system plays back stills. I realize there are no monitors capable of this, and I realize it will be hard to preview 4K x 4K on my computer, but 2K x 2K should be possible.
Yes the DELL H710P is based on the LSI 2208 and powered by two PowerPC cpus (6Gb/s)
And I have not read any reviews of it yet. Will try to google.
I am not sure about the H810, which will handle my RAID 5 in an external case (storage/playback) through SAS/JBOD, but I think it is LSI based as well.
I realize I am getting close to a point now where I can not eliminate every risk, and I just have to order the machine and see how things turn out.
I am thinking this combination now:
1. 4 x SSD for boot/OS (RAID 10)
2. 2 x SSD for render/playback/scratch (RAID 0) / (or PCI SSD card eg. OCZ Revodrive 3 X2)
3. 8 x SAS 3TB for storage/playback (RAID 5)
Or possibly replacing no. 2 (render/playback/cache) with an SSD PCI speed demon card like Alex mentions. I have already looked at the iOFX and the OCZ RevoDrive:
Why do you keep insisting on the OS in a raid array? Again, this offers no performance advantage.
Secondly, a raid 10 is not real redundancy; it is no better than raid 1. Raid 5 would be.
Lastly, there is no point in having redundancy on the OS; better to make frequent images to an external drive/DVDs. This is best for any potential disasters (lightning).
Alex, no need to calculate with 24-bit. 8-bit will be more than enough for this project, where the image is displayed through projectors.
24-bit is not per color component, it's a total number of bits for each pixel. (4:4:4 with 8 bits per YUV or RGB component.)
So if my calculations are right, your full 4Kx4K 24p stream will be in the neighborhood of 1.1GB/s, achievable with 12-16 mechanical drives and a decent controller.
My idea with RAID on the OS/APP disk system is boot speed, startup of applications, general system responsiveness and redundancy; hence the choice of RAID10.
But I understand this will not give me any particular performance gain in my work with Adobe After Effects and/or Premiere Pro.
From my earlier days I am used to using Acronis Trueimage making regular backups on DVDs and external hard-drives of the boot/OS-drive, and I was hoping not having to do this anymore, and to gain some speed and convenience in case of a failure.
1. 4 x SSD for boot/OS (RAID 10)
2. 2 x SSD for render/playback/scratch (RAID 0) / (or PCI SSD card e.g. OCZ Revodrive 3 X2)
3. 8 x SAS 3TB for storage/playback (RAID 5)
You realize that
Solution 1 requires either 4 SATA ports on the motherboard on the same controller or 4 ports on the LSI/Dell controller.
Solution 2 requires two identical SATA ports or 2 ports on the raid controller, and
Solution 3 requires 8 ports on the LSI/Dell controller.
I rather doubt the mobo has that many SATA 6G ports on one controller, and am pretty convinced that the LSI/Dell controller has no more than 8 ports. This makes your solutions rather unachievable, apart from being costly with 4 SSDs in raid10.
The DELL H710P (based on the dual-core LSI 2208 ROC chip) has 8 ports of SAS/SATA 6Gbps. This is the controller which will be used for the internal devices, solutions 1 and 2. http://www.dell.com/downloads/global/products/pvaul/en/dell-perc-h710p-spec-sheet.pdf
Solution 3, the storage/playback disk system (SAS), will be stored externally in another case, connected to and controlled by another separate 8-port SAS/SATA6Gbps RAID controller, the DELL H810, with JBOD, RAID5. http://www.dell.com/downloads/global/products/pvaul/en/dell-perc-h810-spec-sheet.pdf
From what I can tell, both controllers are PCIe 3.0 x8 (8GBps max on the bus). (Edit: they are PCIe 2.0 x8, meaning a theoretical max bandwidth of 32Gbps, or 4GBps.)
So If I am not mistaken, I think the controllers should be able to do my job as described in my last reply, and I am assuming doing them well.
However, I am now strongly considering the RAID/SSD PCIe card, the OCZ RevoDrive 3 X2, for solution 2 as a render/playback/cache disk; it should be capable of 1500MBps read / 1250MBps write. I realize these are theoretical values, but from what I can tell, this solution really seems to deliver.
If I choose this disk/RAID card, I do not have to use the controller, as it connects directly to the mobo.
I am a little unsure though how 1.5GBps can be achieved on a PCIe 2.0 x4 bus, which this card uses.
(Update: from what I can tell, PCIe 2.0 is 500MBps per lane/link, hence x4 is 2GBps max capacity on the bus, which should be enough for the Revo 3 X2)
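Those per-lane figures can be derived from the PCIe signaling rates (PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding, so 80% of the raw bit rate is usable):

```python
# Usable PCIe bandwidth: transfer rate (GT/s) x encoding efficiency x lanes.
# PCIe 1.x/2.0 use 8b/10b encoding (80% efficient); 3.0 uses 128b/130b.

def pcie_bandwidth_mb_s(gt_per_s, lanes, enc_eff):
    return gt_per_s * 1e9 * enc_eff * lanes / 8 / 1e6  # bits -> bytes -> MB

print(round(pcie_bandwidth_mb_s(5.0, 1, 8 / 10), 1))     # PCIe 2.0 x1: 500.0
print(round(pcie_bandwidth_mb_s(5.0, 4, 8 / 10), 1))     # PCIe 2.0 x4: 2000.0
print(round(pcie_bandwidth_mb_s(5.0, 8, 8 / 10), 1))     # PCIe 2.0 x8: 4000.0
print(round(pcie_bandwidth_mb_s(8.0, 8, 128 / 130), 1))  # PCIe 3.0 x8: 7876.9
```

So a PCIe 2.0 x4 slot tops out around 2GBps usable, comfortably above the RevoDrive's rated 1.5GBps.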
Harm do you agree with using 4 SSD in RAID10 for boot/OS (keeping the OS on a separate system)? (even if it is expensive), or should I just run both the OS/APPS on the OCZ RevoDrive if I go for it?
I am not sure, but according to others here, a RAID5 with 8 drives will give you a very high performance. Are you sure 8 drives with a speed of 150-160MBps will give you "only" max 500MBps in RAID5 on a good controller? Is there like a max 5x drive speed limitation in RAID5?
I am still considering RAID6 for the extra safety of two parity drives; how much worse do you think performance would be on a RAID6 system compared to RAID5? If read speed is about the same as RAID5, and write speed comparable with a good controller, I am happily sacrificing two drives for added redundancy.
From other articles I've read, it seem like RAID6 should be equal to RAID5 in performance, on a good hardware controller.
From what I can tell about RAID 30, it looks both expensive and complicated. I will probably run 8 x 3TB SAS drives (24TB), so it should be possible for me to run RAID 30?, but then I would have to buy another controller than the H810, which I would rather not.
I have a good deal on the DELL T7600 with the DELL H810 controller, which supports RAID 0,1,5,6,10,50,60, it is 8 port, capable of up to 3-4GBytes/s.
But, if I were to consider RAID30, how much of the 24TB would be left usable (18TB?), and how many drives would be used for parity (2?). How would the read/write speed compare to RAID5?
I cannot seem to find a RAID 30, 50 or 60 calculator, so it is hard for me to do the maths.
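For lack of a ready-made calculator, usable capacity can be worked out from the parity counts (a minimal sketch of my own; `groups` is the number of sub-arrays in the nested levels, and the 8-drive RAID30 case assumes two 4-drive RAID3 sets):

```python
# Minimal usable-capacity calculator for plain and nested RAID levels.
# n = total drives, size = capacity per drive, groups = number of
# sub-arrays for RAID 30/50/60 (assumed 2 for an 8-drive set).

def raid_capacity(level, n, size, groups=2):
    if level == 0:
        return n * size
    if level in (1, 10):               # mirrored: half the drives hold copies
        return n * size // 2
    if level in (3, 5):                # one parity drive's worth
        return (n - 1) * size
    if level == 6:                     # two parity drives' worth
        return (n - 2) * size
    if level in (30, 50):              # one parity drive per sub-array
        return (n - groups) * size
    if level == 60:                    # two parity drives per sub-array
        return (n - 2 * groups) * size
    raise ValueError(f"unsupported RAID level: {level}")

# 8 x 3TB SAS drives (24TB raw):
for lvl in (5, 6, 30, 50, 60):
    print(f"RAID{lvl}: {raid_capacity(lvl, 8, 3)} TB usable")
```

By this count, RAID30 over 8 drives leaves 18TB usable with 2 parity drives, the same capacity cost as RAID6.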
For me, RAID 50 seems too vulnerable, and RAID 60 too costly (+ losing too much storage).
Edit: I am thinking that if I can achieve 600-700MBps on an 8-disk RAID6 system, it will be good enough for storage/read. In particular since I have a dedicated disk system (4 x SSD in RAID0) for playback of 4K/uncompressed, which I am hoping will give me about 2GBps read speeds.
I think this article summed it up pretty well:
yes now I have, and I see that some questions could have been found by searching more, sorry.
Anyway, assuming your table is correct and indicative of speeds etc., I get the following results, based on my plan to use 8 x Hitachi Ultrastar 7K3000 SAS (152MB/sec) in an external case/JBOD:
LEVEL    SUST.SPEED   CAPACITY
RAID0    1155.2 MB/s  24 TB
RAID1     577.6 MB/s  12 TB
RAID3     904.4 MB/s  21 TB
RAID5     851.2 MB/s  21 TB
RAID6     729.6 MB/s  18 TB
RAID30    775.2 MB/s  18 TB
RAID50    729.6 MB/s  18 TB
RAID60    486.4 MB/s  12 TB
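For what it's worth, a simple model reproduces every one of those figures (this is my reverse-engineered guess at the underlying formula, not a published one): sustained speed = data disks x per-disk speed x a per-level efficiency factor.

```python
# Reverse-engineered model for the speed/capacity figures above (an
# assumption, not a published formula): each level stripes across its
# data disks with a per-level efficiency factor.

PER_DISK_MB_S = 152  # Hitachi Ultrastar 7K3000 SAS, sustained
DRIVE_TB = 3

LEVELS = {           # (data disks out of 8, efficiency factor)
    "RAID0":  (8, 0.95),
    "RAID1":  (4, 0.95),   # 8 drives mirrored -> 4 data disks
    "RAID3":  (7, 0.85),
    "RAID5":  (7, 0.80),
    "RAID6":  (6, 0.80),
    "RAID30": (6, 0.85),
    "RAID50": (6, 0.80),
    "RAID60": (4, 0.80),
}

for level, (disks, eff) in LEVELS.items():
    speed = disks * PER_DISK_MB_S * eff
    print(f"{level:7s} {speed:7.1f} MB/s  {disks * DRIVE_TB} TB")
```

If that model is right, RAID6 trades about 14% of RAID5's throughput (and one more drive) for the second parity drive.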
And, now you got me looking at the Areca 1882
I am guessing this would be the card I would need to replace the DELL H810 with, if I wanted RAID3 or 30?:
But it will still be a matter of cost for me, and I am on the very edge as it is; as I've said, I am getting a very good deal on the complete DELL package, which of course is decisive. Also, I am thinking I want the extra protection of an extra parity drive in RAID6 vs 5, meaning I should think the same on RAID30 vs 3.
Also, in RAID3 I assume you would survive a crash on either the parity disk or on one of the data disks, but not both a data disk and the parity disk.
In RAID30 I guess I could survive a crash on up to any 3 disks? In a RAID6 system, however, in theory, you could survive a crash on any 2 disks, if the remaining disks survive the rebuild process!? So the protection in RAID6 is better than RAID3 (slightly), and protection in RAID30 is better than RAID6, but I am thinking protection in RAID6 is good enough (2 arbitrary drives). RAID30 is faster than RAID6, but the speed difference, from your table, is not big. (Please excuse and correct me if I am thinking/assuming wrongly here; I am a newbie on RAID)
So decisive then could be rebuild times. I find it a bit difficult to calculate rebuild times, so maybe you could comment on that (RAID3 vs RAID5) (RAID30 vs RAID6).
Also, I will be using a separate disk system for render/scratch/playback/edit. One reason for this is that I want to be able, if possible, to play back 4K x 4K footage, but at least low-compression 4K (3840x2160). And from answers given by others here, this will be very demanding. This system will now most likely be 4-6 x OCZ Vertex 3 120GB IOPS SSDs in RAID0 (480-720GB), controlled by the DELL H710P, which should in theory give me read speeds of about 2-3GByte/s.
Meaning, when I am working on a project, I will make temporary copies of files on the render/edit/scratch disk, if needed.
I will even probably go for 128GB of RAM to allow me to make a RAM-DISK of 20-30GB for editing/temp/import.
So taking my other aspects into consideration, I am still a little unsure whether to pursue RAID30, and am still leaning towards RAID6.
Although I will admit you have now got my attention on the Areca/RAID3/30!
One last aspect: it will be difficult for me to choose other hard drives for the JBOD, so that route is already decided.
yes now I have, and I see that some questions could have been found by searching more, sorry
No need to apologize, I have posted some more here in the course of time and cannot expect everyone to have read them all or even a decent portion of them. I pointed you to them, thinking they may give you further info to help you decide.
You have some strong arguments for the two Dell/LSI raid controllers, cost being the major one. The only drawback with these controllers, in comparison to - mainly - the Areca 1882, is that the Areca has expandable cache memory. For me that was one of the specific reasons to get the Areca, in addition to the support of Raid3/30 and having 24/28 ports.
With your intended use of two Dell/LSI raid controllers, I wonder if there are models with more ports. I know LSI has models with more ports, and the price difference between the various brands with a larger number of ports is pretty small, whether you opt for LSI, 3Ware or Areca; all the models with more ports give you more expansion options for the future. If you intend to edit a lot of 4K x 4K material, you will likely need all the bandwidth you can get, and maybe 8 disks is still too limited.
For that reason - and disregarding the choice of raid 3/5/6/30/50/60 for the moment - I think it may be worthwhile to look at controllers with even more than 8 ports (each). I know I started with 2 ports, then progressed to 4, then to 12 and now to 24/28, supplemented with 4 GB cache memory and a BBM (battery backup module).
You have read my articles, so you know that you can migrate your raid arrays upwards, so from 3 to 5 to 6 or from 30 to 50 or 60, but not downwards. Once you have a raid 5 or 50, you cannot migrate back to 3 or 30, only upwards to 6 or 60. This migration path is something to keep in mind when making your choice.
You specifically mention the OCZ Vertex 3. If you already have these SSDs, OK. If you don't have them yet and have to order them, I suggest you have a close look at the Corsair Performance Pro, Plextor M5 Pro or Samsung 840 SSDs as alternatives. The OCZ Vertex 3 does not have a good reputation for quality, reliability and speed, and its 'stable state' degradation is quite sizable, way more than the alternatives. Remember, however, that 'trim' does not work with SSDs in raid arrays with current drivers, and they all have a finite number of writes before dying.
My suggestion would be to start out with the best raid controller you can afford with a large number of ports to start with, attach the 8 intended disks in an array of your choice, whether 3 or 5 or 6, and if you find the bandwidth is still lacking, to add more disks and expand your array.
In my own setup with 3 x 7-disk raid3 striped to a 21 disk Raid30 volume of only 18 TB, because each R3 has one parity disk, I can have each R3 array lose a single disk without problems and for security I have added 3 global hot-spares to take over from any failed disk. In total it gives me 6 disks safety for failures out of 24 and the loss of net capacity of course.
Hope this helps.