Your starting point is quite good, despite it being a 2.5 year old system. But the fact that you mention Caviar Blue shows me you need to do some more reading. If you click on the Overview tab at the top of the forum, you will find a number of FAQ articles that may get you started on topics like what disks to use, whether raids will help you, and all kinds of things like that.
Considering that you want to use material from a 5D and from an Epic-X, you better be prepared for a lot more than $ 800 - 1000. Your CPU will be taxed to the extreme, your RAM memory is not enough, your Quadro 3800 is better exchanged for a more capable card and you will need a much better disk I/O system.
Reading the material you want to edit and what you currently have, I would start looking at a system along these lines:
GTX 570 video card
32 GB DDR3-1600+ memory
Dedicated raid controller
As many disks as you can afford (not easy with current day prices).
Once you have read more about these issues, we are glad to help when you have specific questions.
Thanks Harm for your advice, your comments in previous forums have definitely helped.
I actually have researched my new system build core; the problem was when I got to storage, I didn't know what was the best route to go, hence my call for advice. My core is below:
i7-2600 (assuming I can wait a month or two for the 3930 to be available)
Asus P8Z68 (if processor stays the same)
32 GB Ram DDR3-1333 (assuming I can get 8gb sticks)
So my question is based on a $1000 budget, what would be a decent (in terms of reliability and brand) raid controller AND would any HDD's work or should I go for enterprise drives??
Why not select a GTX 570? Almost the same kick, but $200 less.
If you have some room in your case, consider buying 2 TB or 3 TB drives, and you can purchase an inexpensive RAID card for eSATA and build your own RAID internally. RAID cards don't have to be hundreds of dollars. You can buy them from about $40, and most of the time they will do just fine.
If your case is full, I have had good luck with a GTECH 4TB external, connected via eSATA; once again the card is relatively inexpensive, under $50 for sure.
So, you have a few choices. If you have lots of cash, following Harm's recommendations is fine, but if you want to stick to your budget, you can try either of my solutions.
I completely disagree with your advice about a $ 40 raid card for eSATA. That is a complete waste of money. If you want to have a software aid0, use the on-board capabilities and accept the load on the CPU. A software raid card for eSATA is even slower than on-board but costs $$, even a meager $ 40, without any benefit. On the contrary, it has only disadvantages. If you want a parity raid, a software controller can easily bring the system to its knees because of the significant CPU load. In addition, these software cards always lack sufficient ports for a serious parity raid.
A GTech external over eSATA is dead slow and no faster than a single disk over eSATA, because it is not a full duplex connection. It may even be slower than a single disk when used with a raid5 array with distributed parity. The only reason to use such a device is when a 3 TB single disk is not enough storage.
With file based ingest, the need for parity raids has increased significantly and the use of aid0 is very hazardous, unless you have a very good backup policy and live by it, but that requires investing in additional disks for those backups, either a number of single backup disks or a NAS. Even if you have a good daily backup schedule, it still means you can't use your memory card from your camera until you have made a backup and that may entail investing in additional memory cards as well.
Harm, I owe heaps and heaps to the advice Bill and yourself provide on this forum, but I am conscious that your advice includes the importance of building a balanced system. When I first read the OP's list of components, I wondered whether a GTX 570 would make more sense than a GTX 580 with the rest of the components. I also wondered if a dedicated raid controller would be overkill. They are not cheap!!!! Looking at the PPBM5 results, the current fourth through eighth placed systems are using on-board raid0, and while their disk I/O times are a wee bit behind, I am not convinced that the difference is worth >$1000. OK, it is different for other than raid0, but raid0 represents by far the most common configuration in the PPBM5 results table.
Wow, I definitely have been missing out, I just checked out the PPBM5 page for the first time and am quite enamored at such a resource specifically geared toward PPro users.
Anyway, I too noticed that a good number of the top 20 systems used AID0/on board configurations.
Having said that, I was wondering if the following is an efficient method of backing up if I went with an AID0. My project drive (basically all my files for my current project, including source footage, music, graphics, etc.) would be two 1TB 7200 drives in AID0. Then I would have a 2TB SATA 7200 as a backup drive. Before I edit, I would copy my project drive onto the backup drive. Once completed, all I would do (assuming I didn't add any source files to the project drive) is just replace the project .ppj file every 20 minutes???
If not, what would be the most efficient method to back up an AID0, assuming I didn't want to go the RAID10/RAID 0+1 route??
And for Harm as well, how much more performance will a dedicated controller be over the onboard controller, to justify the $450+ price tag?? I am willing to shell out the money, especially if it will dramatically increase the performance of my AID0's or whatever RAID i end up doing. It's just I was surprised (and oddly comforted) to see how common the AID0/on board configs were on the PPBM5 link.
Finally, can anyone clear up whether most people (in regards to setting up RAIDs for video editing) use decent HDDs or enterprise HDs??
Thanks in advance.
Regarding decent or enterprise HDs for RAID:
- enterprise drives do not support RAID 0, sometimes called "non-RAID" or "AID" (without the R which equals redundancy), unless you switch off the typical enterprise feature that drops out the drive from service if it is non-responsive after a configured time, say 7 seconds
- hardware RAID with parity (RAID 5, 6, etc) is best done with a high-quality controller card (generally $500 and up) and while many manufacturers only certify them with enterprise drives, many users (myself included) are running them very successfully with proven, high-quality 7200 SATA 2 or 3 drives
Regarding where you sit right now:
- any money spent on rotating drives right now has a VERY high penalty due to the floods in Thailand and possibly other places in the world that impacted manufacturing
- Epic-X (5K!?) is pretty intense stuff and will be pushing your hardware pretty hard
- as Harm pointed out, your current system is not exactly weak
If I were in your shoes now, I would grow your current system in ways that will really increase speed now, and in ways that will minimize "throw-away" in the future:
1) case - if you already have a case with good air flow, lots of drive capacity, and quiet enough for your taste, great; if not replace it now
2) power supply - like the case, if what you have is powerful enough for a high-end editing system (850 watts or more), and quiet enough for your taste (more watts = less noise at a given duty level), then keep yours, if not replace; note I use an AX1200 power supply which is more watts than required, but it does have a large fan and large heat sinks that keep it quiet when the system is working hard
3) video card - GTX 580 may be more than other choices, but as RED media is so taxing, I would go with that card or better (I don't know what nVidia is up to, but it does seem to be about time for newer, faster GPUs to be out soon, if they are not already)
4) RAID controller - choose a good 8 or more channel card (Areca, Intel) that will serve your needs for several years and add the battery backup option
5) Get a 120 or 128GB Intel or Crucial current generation SSD boot/programs drive
6) Put all 3 1TB drives in a RAID 0 configuration (assuming that they match) on the new controller card, or find another matching new or used 1TB drive and make it RAID 5
7) Buy a 3TB 7200 drive for backups (price of drives seems to have hurt 1TB and 2TB choices more, so I think 3TB's are better value, and they also tend to be faster); I like using a hot-swap case mounted drive bay for backups to motherboard or RAID card ports (I cold boot when I add or remove the backup drive)
8) Replace your RAM with 24GB of 1600MHz RAM that the RAM vendor says will work in your motherboard (what is the vendor/model BTW?); this is very inexpensive to do now; I just paid $129 for a matched 24GB set of 1600 sticks for X58 from G-Skill for a new Gigabyte board I'll be building out
9) Get a good CPU cooler and overclock your 920 to about 3.7 GHz, or more if you want to spend a lot of time tweaking; while it would start becoming "throw-away" expense, a 6-core X58 CPU would also be a reasonable choice for you to make now to get another year or 1-1/2 years out of X58 before a totally new CPU/motherboard that would likely blow away the currently available X79 CPU options (i.e. 8-core die with only 6 cores available for use!)
Regarding the justification for the big outlay for a dedicated controller:
- allow for the speed of RAID 0 and the data loss protection provided by the ability for a drive to fail and work (and data backups) to continue
- very portable from one PC to another (and PC bios settings cannot mess up your RAID or your data)
- hardware RAID controllers allow access for drive SMART (internal drive level diagnostics) to be read from each drive without breaking the array
- backup battery prevents data loss in additional ways, and even when you already have a system UPS
- generally allow for larger RAID arrays, although some motherboards are coming with massive SATA port counts now
You are absolutely correct that a large proportion of the top-performers use (large) Aid0 arrays. I wonder if the reported disk setups are indeed used during everyday editing, because using 6 or 8 disks in an Aid0 entails huge risks of losing all your data. I think the question is justified whether these configurations reflect everyday editing, or were only used for bragging rights. And this may also include the 3 disk configurations. Let me reiterate that the purpose of the PPBM5 benchmark is not about bragging rights, but about stable, reliable, everyday editing configurations that work.
To go back in history, talking about the Seagate 7200.11, just the other day the last out of 7 of these disks died, so I have had a failure rate of 100% within three years and now notice that Seagate has reduced the warranty period to one year only for new disks.
I'll come back to your raid question, but first look at #1, where I suggested the 570, just like you thought. The 580 seems overkill.
Raids are a difficult topic. For many it is unknown territory and yes, the cost is huge for a good raid controller. Why would people still consider it? IMO there are two major reasons for investing in an expensive card and the necessary disks and two minor ones.
Major advantages are the speed and the protection against data loss.
Speed is only relevant for those editors that use multi-cam, or multiple tracks, or high quality, high resolution source material. If one only uses medium quality, like AVCHD, material (4:2:0) or a few tracks, speed is not the overriding factor. But if you use 4:2:2 material @ 50 Mbps or more or use Red/Epic 4K or 5K material and use multiple tracks and multi-cam, then speed becomes of the essence.
The second aspect is protection against data loss. Protection can be bought by diligent backup policies and adhering to those policies, but that costs production time. You can't afford to make a backup tonight when working with tape-less workflows; you have to create a backup immediately after ingest, and that costs time, and time is $$$. A parity raid array protects you from disk failure, so you can continue working even if one or more drives fail and you can make your backup during the night, without losing production time.
A minor advantage is the availability of extra SATA connections when you run out of those on the standard motherboard and the second one is that you do not experience the performance degradation when a disk fills up, like single disks have.
Whether these arguments are worth the cost of a dedicated raid controller, everybody has to decide, but in my case the clear answer is YES.
Let's say that each backup will take 15 minutes after you ingested new material and your hourly rate is $ 60; that means that each backup costs you at least $ 15. Not talking about restore times, you will have earned back your $ 900 raid controller in 60 days, you will have better response times during your editing sessions and you have peace of mind that nothing can go wrong, apart from a bolt-from-heaven, terrorist attacks, hurricanes, flooding and other circumstances that are excluded from your insurance anyway.
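The payback arithmetic above can be sketched in a few lines; the rates and times are the example figures from this post, so adjust them to your own situation:

```python
# Back-of-envelope payback estimate for a hardware raid controller,
# using the example figures from the post above (not measured data).
hourly_rate = 60.0          # $ per hour of editing time
backup_minutes = 15.0       # production time lost per post-ingest backup
controller_cost = 900.0     # price of the dedicated raid controller

cost_per_backup = hourly_rate * (backup_minutes / 60.0)    # $15 per backup
backups_to_break_even = controller_cost / cost_per_backup  # 60 backups

print(f"Each backup costs ${cost_per_backup:.0f} of production time")
print(f"Controller pays for itself after {backups_to_break_even:.0f} backups")
```

With one ingest per working day, 60 backups is roughly the 60-day break-even mentioned above.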
Have a look at my reply # 8 to Trevor. It may help you decide.
I also like to draw your attention to a new article I wrote (was about time after nearly two years!) on Adobe Forums: What PC to build? An update...
Jim, thanks for your insightful response, it really got me to re-think my current spec setup. While it does make more sense to max out my current system, one of the reasons I'm building another workstation is because a good portion of my work involves tight deadline situations, and I've had plenty of instances where my computer was tied up doing some intense AfterFX renders and I would just sit there and do nothing but wish I had a second workstation to edit on while I waited. Plus, after discovering this forum and the PPBM site, I'm actually looking forward to tweaking my old system and comparing it with my new one.
Based on yours and Harm's advice, when you have time, please let me know what you think of my new approach to budget storage. One of the reasons I'm trying to keep the storage side of my system to $1000 is the fact that I'm saving up for the Epic-X. Having said that, most if not all of my source footage is 5D .h264 files converted to Cineform .avi's. Even in AfterFX I render out to Cineform files.
The following storage setup assumes I'm using the i7 3930k/x79 core setup. And while I originally wanted storage for under $800, I think you guys are right about using a discrete RAID controller, so my budget is about $1000.
LSI MegaRAID SATA $329.00
Crucial C300 64GB x 2 AID0 $248.00 ($124 ea) ---OS/Programs
Seagate Barracuda 7200 SATA3 1TB x 2 AID0 $258.00 ($129 ea) --Project/scratch (none of my projects have ever gone past 1TB; my thinking was just to load my current projects onto this drive, and when I'm done move it off to a backup)
Seagate Barracuda 7200 SATA3 2TB $239.00 --Backup files/media files
A couple of questions based on your answer and this current setup.
- 1.) Can you list some "proven, high-quality" drives?? Would the Barracudas apply?? The WD Blacks??
- 2.) When you mention "choose a good 8 or more channel card," you mean PCI x8, right?? (sorry, I'm still learning my way through this)
- 3.) I'm assuming SSDs in a RAID configuration still isn't optimal with video, right??
Thanks again, marc
Hi Harm, very informative article, thanks for the link, I had a couple more questions for you.
1.) You mentioned in another thread that the 1155 socket boards won't be able to utilize both a 580gtx and a dedicated controller without the video card going back down to PCIe x8. Would the ASUS Maximus IV Extreme-Z solve this? Under the details there's this: "4 (x16 or dual x8 or x8, x16, x16)". Am I interpreting that line correctly??? If I went the RAID controller route, and if an LGA 1155 socket can't handle the video card being at x16 while the RAID controller is at x8, then it's best to wait till the 3930k comes out, right???
2.) I tried researching this today. But what exactly happens if a RAID 5 or a RAID 10 fails (1 of the drives fails) during an edit?? Can I still continue editing??? Or do I have to wait for the array to rebuild itself with a replaced drive?? Exactly how long does this usually take (I know it's probably correlated to how much data is in it), but are we talking hours, minutes??
3.) Oh, and when you have time, can you look at my spec storage setup (trying to be within my $1000 budget) and share some advice if needed.
- The 1155 platform is limited to 16 available PCIe lanes. No more, no less. Manufacturers add various chips to expand that number, but conveniently forget to tell you that those extra PCIe lanes are shared. It boils down to the fact that even with extra chips on the mobo to artificially increase the number of PCIe lanes, it does no good, because all the info over the PCIe bus is ultimately compressed to 16 lanes. Like a 6 lane highway that at some point gets reduced to 3 lanes. You know, the spot where traffic jams occur each day. It is a simple architectural limitation by Intel. The 2011 platform OTOH has 40 PCIe lanes available, so is much better suited for raid controller use and other PCIe cards.
- You have to make a distinction between mirrored arrays (raid1) and parity arrays (raid3/5/6), even when they are striped (raid10 or raid30/50/60). When a disk fails in a mirrored array, it is a matter of simply copying the data from the mirror to the failed disk. That is pretty fast, since there is nothing else to do than a simple copy of data. With parity raids it is a different matter altogether. With a distributed parity raid, like raid 5 or 6, rebuilding the array entails reading and writing to all (n) disks, not just to copy data, but also to regenerate the parity info on all disks. This can take up to several hours, during which time you will notice sluggish disk performance, but you can continue working. On an array with dedicated parity, in essence the work is more or less the same, unless you are lucky enough that it is the parity disk that failed. In that case only one disk needs to be rewritten based on reading (n-1) data disks and that is pretty quick, although slower than the copy of a mirrored raid, because of the parity calculations. If one of the data disks failed, it means reading from all disks in the array and writing to (n-1) disks, so the rebuild of raid3 is faster than raid5/6. Still you will notice sluggishness on a degraded array.
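The mirrored-versus-parity rebuild difference above can be put into a rough model. This is only an illustrative sketch: the 50% efficiency factor for parity rebuilds is an assumption standing in for the parity calculations and shared I/O described in the post, not a measured number.

```python
# Rough rebuild-time model for the cases described above: a mirrored
# rebuild is a straight copy of one member, while a distributed-parity
# rebuild must read every surviving member and regenerate parity, so it
# runs slower per byte. The 0.5 efficiency factor is an assumption for
# illustration only.
def rebuild_hours(capacity_gb, disk_mbps, kind):
    """Lower-bound rebuild time in hours, ignoring controller overhead."""
    seconds = capacity_gb * 1024 / disk_mbps   # time to stream one member
    if kind == "mirror":                       # raid1/10: plain copy
        return seconds / 3600
    if kind == "parity":                       # raid5/6: read all, recompute
        return seconds / 0.5 / 3600            # assumed 50% efficiency
    raise ValueError(kind)

# 1 TB members at ~130 MB/s sustained throughput:
print(f"mirror: {rebuild_hours(1000, 130, 'mirror'):.1f} h")
print(f"parity: {rebuild_hours(1000, 130, 'parity'):.1f} h")
```

Real rebuilds on a loaded array can take considerably longer, as the timings reported later in this thread show.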
- WD Black is not advised in parity raids because of the WD TLER limitation. Personally I'm not fond of Seagate, because in the past years I had 7 out of 7 Seagates fail. A 100% failure rate. Admittedly, they were from the notorious 7200.11 batch, but still, 5 had already been exchanged under warranty, so one could say I had 12 failures out of 7 drives; that is not very good. The Hitachi 7K3000 series on the other hand is often on the approved list of disks from raid controller manufacturers, and lastly I have had great results with the Samsung Spinpoint F1 and F3 series in parity raids. Good raid controllers all use PCIe 8x slots, which makes them not really suitable for the 1155 platform, as noted under 1. Good brands are LSI and Areca, especially the incrementally improved Areca ARC-1882iX-16/24, while we are still waiting for the new PCIe 3.0 Areca line, expected around Q2/2012.
Areca has one huge disadvantage: it is costly. But it also has several benefits: it is the Rolls-Royce among raid controllers, the only one to offer raid3 capabilities, and the only one that allows expanding the cache memory up to 4 GB, where others are limited to 512 MB.
Short history: When I first started using raids, back in the dark ages, I started out with a two channel Promise card. It was cheap, but as I soon realized, also crap. So I upgraded to a four channel 3Ware card, but that proved to be too limiting, because of only 4 channels. So I upgraded to an Areca ARC-1680iX-12, thinking 12 channels would suffice. Now I'm awaiting the arrival of the new PCIe 3.0 line of controllers, but will opt for the 24 port model (plus 4/8 ports over SFF-8088).
Regarding your questions:
1.) Can you list some "proven, high-quality" drives?? Would the barracudas apply?? the WD blacks??
Sorry, but not really. I can only say stay far away from Seagates that end in .11; they are definitely flawed. I personally use: WD's Blacks and RE3s and Hitachi 7k1000.c and 7k3000.
2.) When you mention "choose a good 8 or more channel card" you mean PCI x8 right?? (sorry, im still learning my way though this)
No, I meant connections for 8 or more drives.
3.) Im assuming SSD's in RAID configuration still isnt optimal with video right??
Harm, you probably get this plenty, but again thanks for your time in giving us (not so tech savvy) filmmakers an education that we couldn't get anywhere else.
I've read your RAID post in the other thread, and am definitely looking forward to your RAID article (btw, was it a typo under RAID 10 storage? Instead of n/4, shouldn't it be n/2??)
When you get a chance can you answer the following questions.
1.) Based on your chart and other research here and on the net, would the following be accurate in regards to spec RAID storage solutions as relates to my budget? Right now I can only afford 4 WD 500GB Enterprise drives ($209.99), unless anyone can recommend cheaper but just as effective drives (in a RAID), so the list is based on that. Just for clarification, the % is performance gain, and the () are assuming just 1 drive failed... also it's based on a 3930k/x79/32GB RAM/GTX580 system with a dedicated RAID controller w/512 cache... and my footage is primarily 5D .h264 converted to Cineform Neo.
2 aid0 = 190%, 1 TB (disk copy; data lost)
4 aid0 = 380%, 2 TB (disk copy; data lost)
4 raid3 = 255%, 1.5 TB (wait until rebuild, unless parity drive fails)
4 raid5 = 240%, 1.5 TB (slower rebuild than RAID3 because parity is on all drives)
4 aid10 = 190%, 1 TB (no rebuilding, quick)
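The capacity side of the list above is easy to check with a quick sketch (the speed percentages are workload-dependent estimates, so only usable capacity is computed here, assuming the 500 GB drives mentioned):

```python
# Usable-capacity check for the raid configurations listed above,
# with 500 GB members. Speed gains are not modeled here.
def usable_tb(n, level, disk_tb=0.5):
    if level == "raid0":
        return n * disk_tb           # striping only, no redundancy
    if level in ("raid3", "raid5"):
        return (n - 1) * disk_tb     # one disk's worth of parity
    if level == "raid6":
        return (n - 2) * disk_tb     # two disks' worth of parity
    if level == "raid10":
        return n * disk_tb / 2       # striped mirrors, half capacity
    raise ValueError(level)

for n, level in [(2, "raid0"), (4, "raid0"), (4, "raid3"),
                 (4, "raid5"), (4, "raid10")]:
    print(f"{n} x 500 GB {level:6s} -> {usable_tb(n, level):.1f} TB usable")
```

The output matches the capacities in the list: 1 TB, 2 TB, 1.5 TB, 1.5 TB and 1 TB respectively.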
2.) In regards to RAID 3 and 5, what would be a VAGUE estimation of rebuilding times? Since I have not worked with RAIDs that failed and were immediately replaced, I would like to get an idea for when/if a situation arises during a tight deadline. For example, a 500GB project with many layers, fx, etc.: are we talking 1 hour, 3 hours, 8 hours???
3.) And as you mentioned earlier, provided just 1 drive failed, I would be able to edit with sluggish performance, would this be accurate with both RAID 3 and 5? I'm assuming, if I need the array to rebuild, I would have to wait and not do anything on the computer right?
4.) Are my assumptions about AID0/10 correct?? Say 1 drive failed in the main array, I could just continue editing on the mirrors, with very little downtime (less than 30 minutes)??
5.) If I just went with an AID0 with either 2 or 4 HDD, would it make a difference if I went w/ a discrete controller vs on board?
6.) Final question (sorry), is the Areca ARC-1213-4I decent for my setup??? I'm trying to keep the storage as close to $1000.00 as possible.
Again, Thanks Harm in advance for any guidance on this, you ROCK!!!!
I can give some times for RAID initializations and rebuilds.
I have an Areca 1880ix-12 with standard 1GB memory on the card, running an 8-bay tower of 2TB WD RE-4 disks. (WD2003FYYS)
It took just under 40 hours to build a 7-disk RAID3 (12TB) with one set as hot-spare. I never tested a rebuild on that array, and I also never built an 8-disk RAID3, because I was afraid it would take even longer than the 40 hours it took to build it and I didn't have that much free time. Sustained data throughput speeds were 759MB/sec write, 696MB/sec read. This is with the cache disabled so as to represent actual disk speed.
It took 4 hours 54 minutes to build an 8-disk RAID6 (12TB). Sustained data throughput speeds are 816MB/sec write, 714MB/sec read, cache disabled.
After loading all my media (5.75TB, almost half capacity) onto the RAID, I pulled a disk to simulate a failure, waited a half hour, then reinserted it to force a complete rebuild, which took 7 hours 58 minutes.
During the rebuild, I continued to edit a movie project I'm working on (shot on 5DMkII, 7D and D7000, so same H.264 stuff you're using) with no problems. I also ran another speed test while it was rebuilding to see how much speed I was losing. The results were 488MB/sec write, 191MB/sec read with cache off, and since I was editing with the cache on, I ran that test, too: 482MB/sec write, 3535MB/sec read using a 16GB file to perform the test.
I was surprised that the RAID6 was faster for me than the RAID3, given the data/parity ratio. 7 disks in RAID3 means 6 disks are striping data and one is writing parity. Likewise, 8 disks in RAID6 means the same 12TB data size, but the stripe is going to all 8 disks along with parity data, and allowing 2 disks to fail, similar to my RAID3 with hot-spare. (The hot-spare would take over for the failed disk and rebuild, giving the same 2-disk failure protection before data is lost, provided another disk didn't fail during the rebuild on RAID3.) I still wonder how it would have compared if I'd used all 8 disks in RAID3 instead of saving one for hot-spare.
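The data/parity accounting in the paragraph above works out like this, with the 2 TB members described earlier (capacities only; the measured speeds depend on the controller and workload):

```python
# Data/parity accounting for the two arrays compared above, 2 TB members.
disk_tb = 2

raid3_members = 7   # 6 data + 1 dedicated parity (hot-spare sits outside)
raid6_members = 8   # stripe across all 8 with double distributed parity

raid3_usable = (raid3_members - 1) * disk_tb   # 12 TB
raid6_usable = (raid6_members - 2) * disk_tb   # 12 TB

# Both setups survive two failed disks before data loss: RAID6 natively,
# the RAID3 only if the hot-spare finishes rebuilding before a second
# disk dies.
print(raid3_usable, raid6_usable)  # 12 12
```

So the two layouts trade eight spinning members against seven plus a cold-standing spare for the same usable space, which is why the speed comparison above is the interesting part.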
All this says is that there are different ways to skin a cat. After I finish my current projects and I have a week or two to spare, I'll do more tests with RAID3, but for now I feel very happy with a RAID6 that gives awesome performance and redundancy. Hopefully, this gives you something to work with as you plan and implement your RAID. Good luck and happy editing!
Those are impressive times. So does that one array handle everything outside of the OS, or how do you arrange things with such a system?
12TB RAID6 - Media, Exports, Previews
2TB AID0 - Cache scratch
Externals - backups and clone
RAID6 is inside a Sans Digital TR8X, connected via 2x mini-SAS cables.
I have been tinkering with the four remaining internal drives. They are the original 7200rpm 640GB drive with the OS and programs, and three 7200rpm 1TB drives.
For a long time, I had the 3x1TB in AID0 for scratch, which got 330MB/sec read and write. Recently, I wanted one more disk for backups (too many projects!) and decided to try out using two in AID0 for cache scratch, and putting preview files onto the RAID6 in another folder. The 2x1TB AID0 scratch gets 215MB/sec, and performance seems the same when editing.
For backups, I have a bunch of single disks and external enclosures that I attach via cable or a Voyager Q for the bare drives. Well, and that one 1TB internal that I'm temporarily using for one active project. I got a couple little WD My Passport 1TB drives for third backups of very important projects, and I also have a clone of my OS.
I could probably put all scratch, previews and media on the RAID6. I may test that out next, since it *is* pretty fast. I chose the internal disks for cache scratch since I have smaller write blocks there. On the big array, the block size is the max, which I think is 128k, whereas the smaller internal array is a default size like 32k or something. Since many of those media cache files are only 4k, and the rest are mostly under 200MB with two 1.4GB and seven 3.3GB files, it seems logical to keep them on a different array that takes advantage of smaller files. The preview files are all over the place in size, from 4k to over 3GB as well, but I tend to delete them quite often, and not even create previews most of the time, since my work has been playing real-time without any rendering. Most of my work is DSLR footage right now, and only one or two effects laid over them, like color correction and such.
I also have 32GB of RAM, which may help. The end result is smooth and painless!
Very impressive and interesting results wonderspark, real-world-workstation results are definitely gold when it comes to research. Thanks for taking the time to explain/list them for me(and all of us).
1.) Will your results scale down in terms of giving a vague estimate on RAID building/rebuilding times (e.g. based on your estimates, a 4-disk RAID6 (500GB each) would take 1.5-ish hours)? I know there are a ton of different variables at work, but that 40 hours to build that 7-disk RAID was crazy (then again, I'm wet behind the ears when it comes to RAID, so go figure). What is comforting, however, is the fact that you said there was no problem editing while your drives were rebuilding from a failed disk. So I guess my dilemma now is just figuring out the wisest configuration for my spec system. What would you do with five 500GB RE4s?? My initial thought is this (this is all w/ a raid controller 512 cache): 128 SSD (os/progs), 2TB WD Black (internal backup/random nonproject data), 4-disk RAID3 + 1 hot-spare.
Then again, I've been editing on a 2TB LaCie (which scares me now that I've been reading how bad LaCie's rep is) connected via eSATA this past year, so anything would be heaven in comparison. lol.
2.) Right now Amazon is selling the 500GB WD RE4's about $50 cheaper than Newegg, and they both are SATA 2.0's. Considering that any combination of a 4-disk RAID could approach/exceed the SATA 2.0 limit, would I essentially be limited to the SATA 2 ceiling????
3.) You mentioned putting your scratch/previews on the same media drive (which is basically what I'm going to do). Have you done this?? Any noticeable loss in performance??
Thanks in advance for any help/advice you can send my way!
Let me be sure you're on the right path.
You have indicated getting an ARC-1213-4i, which includes one SFF-8087 connector. That is good for four disks. You mentioned building a 4-member RAID with an extra disk for a hot-spare. That card won't help you there, because 4+1=5 disks, so you'd at least want the ARC-1223-8i to run more than 4 disks. You get one disk per port, thus a 4-port card runs four disks, 8-port runs eight, and so on.
If you're going to put those disks in an EXTERNAL enclosure, you'd be better off getting the ARC-1223-8x instead. (x=external, i=internal connectors for Areca cards, and ix=both connectors are on the card.) The reason for this is many vendors include the SFF-8088 cables with the packaging of their enclosures, which are the connector type used by those external enclosures. (SFF-8088 is the external connector, SFF-8087 is the internal connector.) If you're planning to buy an internal port card, you'll need special cables that cost $50-60, and then run them out an empty slot or other hole in your computer to get out to the RAID box. The only reason I can think of to do this is to take advantage of a card with more than 8 ports, like the ARC-1880/1882-ix-12/16/24 cards. (I have an 1880, but now Areca has a newer card, the 1882 series.) Furthermore, you don't want those mini-SAS cables to be longer than one meter, or you could have issues with data (so I've heard.)
You can get the 8-port card all external, but beyond that, you get internal ports and would then run them out via those special cables. I did this with my setup because I wanted to be able to take advantage of the larger memory (up to 4GB, though I still have the standard 1GB) and be able to attach up to 16 disks with my 1880ix-12. (This is a bit confusing, but I emailed back and forth with the techs at Areca in Taiwan, who assured me that despite the card being labeled a 12-port card, it runs with all three internal and one external connections used as 16 discrete ports. I'm not 100% convinced yet, as I haven't actually hooked up and tested speeds with 16 disks.)
Hope I haven't lost you, yet.
To answer your latest questions:
1. The results should scale somewhat with the disk size, but also consider the sustained data throughput of the disks you buy. I have 2TB WD RE-4 which have a max throughput of 138MB/sec. The 500GB version is lower, at 128MB/sec, so add some time to the build/rebuild there, and also reduce the total speed you'll get from the RAID by 10MB/sec per disk as well. Considering you want five of those 500GB disks, you need at least an 8-port card, regardless of how you set up any RAID3/5/6 with or without hot-spares. What *I* would do with 5x500GB disks is try a RAID3 with all five if you have time, and test it. Then try the same with RAID6, and test it as well. If you have a *lot* of time, you'll also want to test the rebuild times by pulling one disk out of the array to force a failure/rebuild and see how long that takes, as well as test speeds during the rebuilds.
2. You'll be limited not by SATA, but by disk transfer rates and things like where you put the RAID card. The Areca card you want is an x8-lane PCIe 2.0 card, so you'll want to put it in a slot that has x8 lanes (or more) at PCIe 2.0. If you stick it in a PCIe 1.0 slot, or say an x1- or x4-lane slot, you'll potentially cripple your speeds that way. Put it this way... I tested all eight of my disks in AID0 and got sustained 1100MB/sec read and write speeds in those same disk tests I listed earlier. If you do the math, multiplying 8 disks by their max speed of 138MB/sec, you can see that's as fast as it can possibly be in that configuration: 138x8=1104. So forget about SATA 2 and all that. This is different, because you're combining speeds by writing to many disks at once. Think more in terms of PCIe lanes and their speed capabilities, combined with disk transfer rates multiplied together.
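The lane math can be put in code as well (a rough sketch; the ~500 MB/sec per PCIe 2.0 lane and ~250 MB/sec per PCIe 1.0 lane are approximate usable rates, not exact spec numbers):

```python
# Check whether a PCIe slot can carry the combined throughput of the disks
# behind a RAID card. Per-lane figures are approximate usable bandwidth.

PCIE_MB_PER_LANE = {1.0: 250, 2.0: 500}

def slot_bandwidth(gen, lanes):
    """Approximate usable MB/sec for a PCIe slot of a given generation."""
    return PCIE_MB_PER_LANE[gen] * lanes

disks = 8
per_disk = 138               # MB/sec, max sustained per RE4 disk
needed = disks * per_disk    # 1104 MB/sec combined

print(slot_bandwidth(2.0, 8) >= needed)  # x8 PCIe 2.0 slot: plenty of headroom
print(slot_bandwidth(1.0, 4) >= needed)  # x4 PCIe 1.0 slot: a bottleneck
```

This is why the slot you choose matters as much as the card: the same controller in an x4 PCIe 1.0 slot could cap the whole array below what the disks can deliver.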
3. I have done that in the past, and not *noticed* any difference. There could be a difference, but I wasn't able to quantify or identify it. Right now, I have my previews on my RAID6 and cache files on another internal AID0. This relates to my theory about data block sizes. My RAID6 has the biggest block size it will do, which I think is 128k. The internal one is smaller, maybe 32k. I figured that since so many cache files are only 4k, it would make more sense to send those smaller files to that array.
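The block-size reasoning can be illustrated with a quick calculation (a simplified sketch; real controllers and filesystems complicate this, and the 4k/32k/128k figures are just the sizes mentioned above):

```python
# Simplified illustration of why small cache files suit a smaller block
# (stripe) size: a 4KB file still occupies a whole block, so with a 128KB
# block far less of each I/O carries useful data than with a 32KB block.

def io_efficiency(file_kb, block_kb):
    """Fraction of the allocated blocks actually carrying file data."""
    blocks = -(-file_kb // block_kb)  # ceiling division
    return file_kb / (blocks * block_kb)

print(io_efficiency(4, 128))  # 0.03125 -> ~3% of a 128KB block used
print(io_efficiency(4, 32))   # 0.125   -> ~12.5% of a 32KB block used
```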
Again, thanks a bunch for the detailed response. I'm sure a lot of future people who google "RAID" + "Storage" + "Premiere" will find your answer(s), and this thread in general, very helpful.
Regarding "hot-spare", I definitely showed my lack of knowledge and assumed it meant a disk kept on the side in case of a failure, sorta like a spare tire. But now, in this context, I understand a hot-spare to be a disk in the ACTUAL RAID, on standby in case of a failure, right? In that case, is the disk totally on standby, meaning there is absolutely no performance gain from it? So, for example, a 4-disk RAID with one of those disks being a hot-spare is actually a 3-disk RAID?
With my original (albeit inaccurate) definition of "hot-spare", my plan (with 5 disks) was to use a 4-disk RAID in a 3/5 or 10 setup, with 1 disk on the side as a failure replacement. In your experience, assuming a hot-spare is just a drive that connects to the RAID as a backup and yields zero performance gain, would a hot-spare vs. manually swapping out a failed disk be worth it in my context? I'll be limited to the ARC-1213-4i's 4-disk limit (I will likely get that card to stay near my $1k budget), so a hot-spare would essentially give me a 3-disk RAID as opposed to a 4-disk RAID with no hot-spare. I hope that made sense. BTW, when a drive fails in a RAID, will the computer tell me which one failed? Or would I have to listen for which one is malfunctioning?
One more question, if you don't mind, sparked by that insane 1100MB/sec figure you pulled off. In regards to video editing, especially native 5D files, can you see/experience a noticeable difference between 200+MB, 300+MB and 500+MB/sec read/write speeds? Depending on how I combine my 4-disk RAID, especially if I just gamble with a 4-disk AID0, all those speeds could be attained. I myself have been editing Cineform-converted 5D H.264 files through an eSATA LaCie for the past 2 years, so I really have no context for how different it would be to work with storage that transfers data 4x faster than what I'm used to.
In addition, when editing footage that has already been captured (as opposed to capturing, which I'm assuming utilizes the RAID's write speeds), I'm guessing the most important aspect would be the RAID's read speed, right? (I was tempted to type it as "write" to add even more confusion to this reply, lol.)
Thanks again for all your help on this. Appreciate it!
A disk set aside to be used as a spare might be called a "spare," whereas if it's in the RAID and assigned by the RAID card as a "hot-spare," it will not read or write any data until a disk in the array fails. You are correct on that now. I've seen it referred to as "7+1" in the example of 8 disks in a box, with 7 in a RAID and one assigned as a hot-spare.
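The "7+1" idea boils down to simple arithmetic (a sketch; the disk sizes and parity counts here are illustrative):

```python
# Usable capacity of a parity RAID with a hot-spare assigned.
# A hot-spare holds no data until a disk fails, so it drops out of both
# the capacity and the stripe width: a 4-disk box with one hot-spare
# behaves like a 3-disk array.

def usable_tb(disk_tb, total_disks, parity_disks, hot_spares=0):
    """Usable TB after subtracting hot-spares and parity disks."""
    active = total_disks - hot_spares
    return disk_tb * (active - parity_disks)

# "7+1": eight 2TB disks, one assigned as hot-spare, RAID3 (1 parity disk):
print(usable_tb(2, 8, 1, 1))    # 12 TB usable
# Four 500GB disks in RAID5 with one hot-spare:
print(usable_tb(0.5, 4, 1, 1))  # 1.0 TB usable
```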
If you really can't push another hundred or two hundred dollars for an 8-port card, then you'd be better off using four disks in your RAID and setting aside that fifth disk for a spare. You'll most likely be right there if/when the alarm sounds for a failed disk, and you can swap it quick and easy. Any decent RAID will tell you which disk has failed. It will be fairly obvious based on the lights on the front, and the Areca management software tells you as well. You'll want to get to know your RAID box, and which slots are represented by which lights on the front as well as which slots are represented by the management software.
The thing about real-world experiences and benchmarks is that they go together. I was having some stuttery playback prior to building my system up, and I did my best to pinpoint where the issues were and what solved them as I went along. I upgraded my RAM from 16GB to 32GB, and that finally stopped my page out / swap file issues. I also ran disk speed tests at every point to see how they reflected what I was seeing. It's hard to quantify exactly which changes are making real-world improvements, and I feel that those tests and benchmarks help identify what is working and what is making no difference. I changed the RAM and the RAID pretty close together in time, and I forget which I did first now, but I recall that going from 3-disk AID0 to 7-disk RAID3 made my DSLR playback nice and smooth. I also know that I had a couple of system hangs, which I related to the page outs / swap file usage, and which were solved by 32GB of RAM. Finally, I moved from a quad-core CPU to a 6-core, and to be honest, it was the least noticeable real-world improvement, although it is noticeable when rendering in After Effects. I have the CPU allocated to use 8 threads for Pr/AE/En and the remaining 4 threads for other processes. I have had no system hangs or issues since these three major changes, so all in all, they fixed me up, and I feel pretty confident in what helped where.
In my case, I needed the larger RAID anyway. I have multiple projects that run concurrently, and can't be swapping data around all the time. I currently edit two feature-length movies and several smaller commercial projects day-to-day, and have only half my 12TB RAID filled, which is most useful. It also keeps me from spreading myself *too* thin, as one guy can only do so much work at a time! I find myself tempted to get another box to throw more disks into, but I really don't need it just yet. I'm glad I have the ability, however. This is why I hope you're not limiting yourself too much with a 4-port card.
I suggest you run tests and take note of your issues with your current setup, and document each upgrade and how it changed both tests/benchmarks and observable results.
OK, cool. Again, thanks for your advice/insight. So as not to drag this thread on any longer: I've pretty much found the answers I was looking for here, but I have one final issue regarding editing with Premiere. I will need a target to shoot for in terms of sustained transfer speeds (I'm assuming this is the stat that correlates to a better editing experience).
I just posted this same question on the new article that Harm wrote, but I wanted your opinion as well, since you mentioned that you post 5D (or you might've said HDSLR) footage, the main source footage that I edit these days.
Basically, what sustained transfer speed should I look to reach when planning out my RAID strategy? In addition, at what speed threshold would I not even see a noticeable difference? For example, if all I edit in the foreseeable future is 5D files, and if 350 MB/sec is more than enough to edit native 5D H.264 files, then building an 8-disk RAID3 would be overkill for my specific current projects.
BTW, I will take your advice and spend the extra dough on an 8-, probably 12-port card. While my current projects revolve mainly around 5D footage (which is why I want a specific sustained transfer rate to shoot for), I do have a deposit on a stage 4 Epic-X and want to build a setup now that is flexible enough to handle those R3D files.
I don't know that I could figure out a target speed for sustained transfer. I think there are too many variables at play to make it anything but a mess to calculate. I just know that when I was editing a 104-minute movie shot on P2, a simple 3-disk AID0 was fine. Then, when I started this similar-length movie shot on 5D, it started to choke. I seem to recall some slight hiccups when I pulled a disk from the RAID to force a rebuild during tests. My sustained speeds took a hit, to 488MB/sec writes and 191MB/sec reads with cache off, and I was still able to edit just fine, but I want to say there was a time or two where playback would stutter slightly. It would make sense, so perhaps a good target is 500MB/sec sustained reads and writes, if your timeline is like mine currently: 5 video & 10 audio tracks with effects and such, not rendered. (I like not having to render things!)
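For what it's worth, raw stream bandwidth can be estimated from track counts, but the exercise mostly shows why a hard number is elusive: the sum of the stream bitrates comes out far below what comfortable editing demands once scrubbing, effects, caches, and degraded-rebuild performance enter the picture. The bitrates and headroom factor below are rough assumptions, not measured values:

```python
# Back-of-envelope bandwidth estimate for an unrendered timeline, assuming
# every track streams simultaneously. Rough assumed figures: 5D H.264 is
# ~5.5 MB/sec per video stream; uncompressed 48kHz 16-bit stereo audio is
# ~0.2 MB/sec per track; 3x headroom for scrubbing and overlapping reads.

def timeline_mb_s(video_tracks, audio_tracks,
                  video_mb_s=5.5, audio_mb_s=0.2, headroom=3.0):
    """Estimated sustained MB/sec needed for simultaneous playback."""
    raw = video_tracks * video_mb_s + audio_tracks * audio_mb_s
    return raw * headroom

# 5 video + 10 audio tracks, as in the timeline described above:
print(timeline_mb_s(5, 10))  # ~88.5 MB/sec even with headroom
```

The estimate lands well under the ~500MB/sec comfort point observed above, which is exactly the point: measured experience on a real timeline is a better guide than bitrate arithmetic.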
I needed more space anyway, so a large and fast array was an excellent choice for me. I looked closely at my RAM stats and saw the page outs and swaps going on, so I knew that 32GB was another great move in my case. I tested renders using various core/thread settings in After Effects, and realized that a 6-core would speed things along as well. All these things combined not only made editing this 5D movie a joy again, but allow for expansion when I finally get some RED footage to work with. I hope and believe it's good enough for that.
I know you want a clearer answer with a hard number, but I don't think I can provide it. For that, I apologize. Based on your similar need to edit RED in the future, I can't help but think you'll need a robust RAID to make that experience smooth and confidence-instilling. I saw someone say you don't need that much power to edit RED, but maybe they only edit straight footage without many effects or layers. I don't know. I throw effects and layers around like mad, and I don't have to render anything at all. I don't even have a CUDA card in my system right now either, because my 5870 seems to work better than my GTX285 did. (I'm on a Mac and have limited choices.)
You will be happy with an 8 or more port card over only 4, this I can assure you.