It is no secret that Eric is an advocate of RAID5 and has had very positive experiences with the Intel controller, which is a lot more affordable than an Areca.
My favorite is RAID3 and hence Areca. Areca is the only vendor to support RAID3, just as Atto is the only one to support RAID4. The new 1880 is the successor to the 1680, has a much faster IOP and supports SATA 6G. Given that the price differential between the 1680 and the 1880 is small, it is a no-brainer: the 1880ix-12+. Initial test results by Areca show a performance increase of around 70-100%, but be aware that this may be marketing hype, since I have no independent comparisons.
The big advantage of Areca over Intel is the ability to expand the cache memory to 4 GB on the 12+ models, whereas with Intel you are limited to 512 MB. Depending on your workflow and editing requirements, 2 GB may be enough (it is for me), but in some cases 4 GB may be better, albeit at 5+ times the price of 2 GB.
I just purchased the Areca 1880ix-16 as part of the build I am finishing and hope to be able to generate some benchmarks for the discussion when I have it up and running. Although I am not connecting SATA 6G components to the Areca card (unless I decide to connect my SSD O/S drive), I purchased the newer version because the cost differential between the SATA 3G and SATA 6G versions wasn't that great, and I felt that the investment in the newer RAID controller technology would likely be "portable" into the next systems that I build - hence, some forward compatibility.
I decided to get a minimum of the 12-port version because of the capability to increase memory - whether I purchased it now or later.
In looking at current pricing when I decided to purchase, I found that the 16 port version was only nominally more expensive than the 12 port version, so I opted for the more expandable version although I will likely only connect 5 - 7 devices to the controller at this time.
I agree with Harm on the cost of the memory upgrade - it really is proportionately more $$$ and one really pays for that! I figured that I was "all in" on the whole thing anyway - so I went with the 4 GB memory. I can't wait to benchmark it, and the different RAID levels available!
I too am waiting on my ARC-1880. I also went with the 1880ix-16 version because of the small differential in price. One interesting feature of this board is that while it appears to be a 16-port board, it actually has 16 internal ports PLUS 4 external ports. This was verified by Areca tech support. Some cards with both internal and external connectors have only an either/or arrangement for four of the ports. All the higher-end 1880 cards that have the additional 4 external ports actually have four more ports than the part number indicates.
Also it pays to shop for a better price for these boards if you are not in a hurry for delivery. I will also report on performance once I get my card.
It appears that RAID3 will better serve my objectives than RAID5, in spite of the $383 cost differential for the components I am considering (Areca 1880ix-12, Areca 6120 BBU, Chenbro SK32303 and Spinpoint F3s). What I need help understanding before I spend the estimated $1,451 for the array is how much more speed I will gain from the array, in addition to its other benefits.
My plan is to replace my four current Western Digital WD1002FZEX 1TB HDDs with the Samsung Spinpoint F3 HD103SJ 1TB HDDs. Five of the F3s will be in hot swap bays.
I assume that the array should be set up as a seven-disk RAID3 with one disk as a hot spare. Please let me know if another configuration might be considered.
I understand that I can expect a performance increase of around 85% per additional disk (X-1) on reads and 60% per additional disk (X-1) on writes over a single disk. What I don't understand is how those speed increases will translate to the benchmark test results.
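One way to turn those rules of thumb into numbers is sketched below. This is only an estimate: the 85%/60% per-disk factors come from the post above, while the ~130 MB/s single-disk figure for a Spinpoint F3 and the assumption that a seven-disk RAID3 leaves six data disks are mine.

```python
def raid3_estimate(single_disk_mb_s, data_disks, read_factor=0.85, write_factor=0.60):
    """Estimate RAID3 array throughput from per-disk scaling rules.

    A RAID3 array dedicates one disk to parity, so a seven-disk array
    has six data disks. Rule of thumb from this thread: each data disk
    beyond the first adds ~85% of a single disk's speed on reads and
    ~60% on writes.
    """
    extra = data_disks - 1
    read = single_disk_mb_s * (1 + read_factor * extra)
    write = single_disk_mb_s * (1 + write_factor * extra)
    return read, write

# Hypothetical numbers: seven-disk RAID3 -> six data disks,
# ~130 MB/s assumed for a single Spinpoint F3.
read, write = raid3_estimate(130, 6)
print(f"estimated read ~{read:.0f} MB/s, write ~{write:.0f} MB/s")
```

Note that PPBM5 reports seconds (lower is better), so a throughput estimate like this only bounds how far the Disk I/O time could drop; it does not map directly onto the other test scores.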
My current four-WD-HDD setup yields the following PPBM5 test results (DJM_5.0):
MPE on: 130, 45, 95, 12 for a total of 282
MPE off: 130, 45, 95, 126 for a total of 396
What might I expect to see as benchmark test results as a return on the $1,451 invested in the RAID3 configuration under consideration?
Your concern should never be benchmark results!!! Only day-to-day practical performance.
The only area of visible improvement would be the Disk I/O test; all the other test results would likely be only marginally impacted. My guess is that your disk result would probably decrease to below the 100-second mark.
I quite agree with you about test results and practical performance.
That said, and as you are obviously aware, the benchmark testing is all about giving one an "idea" of the comparative performance of a given combination of key components.
Thanks to you and the others who guide computer-system fools such as I, the benchmark tests which you and Bill have developed are an important means by which we can estimate the impact of a given dollar expenditure on the development of a video workstation.
I will be forever grateful for the guidance which you and others have given me as I have stumbled through the process of developing a very good workstation at a reasonable price to meet my current needs and objectives.
Thank you once again, your "guess" gives me the answer I have sought.
Harm and other RAID fanatics: I am making a small contribution, as I have faced the demise of a 1TB WD in my software RAID 0 setup. Luckily I didn't have much data on it. The loss was caused by a power outage during the encoding of a project. I am now looking for a hardware solution and may be leaning towards RAID 10 after reading this article: http://miracleas.com/BAARF/RAID5_versus_RAID10.txt I am hoping that I can find a good RAID 10 external unit directly attached via eSATA.
Thanks for all the info!!
What you really need is a good UPS.
I love it when people skew their arguments to support their belief. The same corruption the presenter talks about is also the main problem with RAID 1: if the primary disk is corrupting over time and the mirror disk fails, then the entire corruption will be rebuilt onto the new disk. There is no parity to help verify the data, and no spreading of data across disks to minimize the possible corruption loss. The entire corruption is transferred to the new mirror and you have no idea the corruption even exists. With the parity RAIDs, I can run or schedule consistency checks and verify that the data is not corrupting across the volume. If it is, the RAID controller will immediately attempt to repair it from the parity. This is why consistency/integrity checks are part of routine RAID maintenance. The presenter even states the reason this is more important now than in the past: drives fail over time much more often than they used to. The performance of a RAID 5 with the same number of disks as a RAID 10 array is far better during optimal status. Granted, rebuild status takes longer, but nowadays that is down to 1 to 3 hours with a decent controller. If you require better security, then RAID 6 is as good as it gets. Oh, and then there are RAID 50 and 60 for those who need even better performance and have that many disks.
The other part of this is the skewed view of background rebuilding and the performance loss during rebuilds. Listen folks, background rebuilding is not as bad as it was 3 years ago, and you can control the percentage of resources applied to operations versus rebuilding. If you require greater security during rebuilds, then RAID 6 with a global hot spare is more than enough, and the performance loss is not that bad. I have yet to lose a RAID 6 array to corruption. I can't say that about any other RAID level I have used.
My final note concerns the parity ordering the presenter gives at the end. How are RAID 3 and RAID 4 any different from RAID 5 in terms of parity corruption? The answer is: they're not. They have the same odds, other than using one dedicated parity drive. So the only reason to place them in that order is the rebuild load, which is way overblown. Be careful when you read these arguments, because many are skewed, as you see in this one. There is a reason that RAID 5/6 are the most common RAID levels used in the IT world, and it's not because the controllers are the cheapest. There are times when RAID 10 makes sense, but more often than not that is not the case.
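As a rough way to see the trade-off being argued here, the sketch below compares usable-capacity fractions and tolerated disk failures for the common levels. These are the standard textbook definitions, not figures from this thread, and they ignore controller- and workload-specific effects entirely.

```python
def raid_tradeoff(level, n):
    """Return (usable capacity fraction, guaranteed disk failures survived)
    for n equal disks at a given RAID level, using standard definitions.

    RAID3/4/5 spend one disk's worth of capacity on parity, RAID6 two,
    and RAID10 mirrors everything (it can survive more than one failure
    if the failures land in different mirror pairs, but only one is
    guaranteed).
    """
    if level in ("RAID3", "RAID4", "RAID5"):
        return (n - 1) / n, 1
    if level == "RAID6":
        return (n - 2) / n, 2
    if level == "RAID10":
        return 0.5, 1
    raise ValueError(f"unknown level: {level}")

for level in ("RAID5", "RAID6", "RAID10"):
    frac, failures = raid_tradeoff(level, 8)
    print(f"{level}: {frac:.0%} of 8 disks usable, survives {failures} failure(s)")
```

For eight disks this makes the capacity argument concrete: RAID5 yields 87.5% usable space, RAID6 75%, and RAID10 only 50%.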
I have a UPS with auto shutdown, but it did not work while PPro was encoding.
I have been following Harm's and your postings on the Adobe Forums and I am still confused as to the correct solution for my situation. I would value your advice on this. I currently shoot with the 5D Mk II. I rarely run more than 2-3 streams of HD footage at any one time. My business is brand new, so the volume does not yet justify a large capital investment, but it is projected to grow quickly. I am confused as to what sustained throughput I should look for in my storage options. None of my renders or conversions took that long with my past 3-drive setup: 1 OS drive and 2x RAID 0. I did have a RAID failure, though, and lost a small amount of information. If my throughput requirements are 3 Gb/s, then couldn't I use an external RAID setup with an eSATA interface? It is unlikely that I will have requirements to store more than 750 GB of raw project footage on a fast RAID at any one time. After each project I could move the files to an inexpensive NAS. This setup is assuming that I only need 3 Gb/s of throughput.
First let me explain something,
- The 3 Gb/second and 6 Gb/second figures have no bearing on the real performance of a drive; they are just the rating of the interface.
- A vast majority of the submitted PPBM5 results list disk drives with an interface rated at 3 Gb/second. If you look at the best "Bang-for-the-Buck" chart, the three top systems all have 3 Gb/s interfaced drives.
The difference between these 3 Gb/second drives and the newer 6 Gb/second interface drives, for a single individual drive, is almost insignificant. The only time you will really gain from changing from 3 Gb/second to 6 Gb/second would be on a much larger array. That said, some of the newer drives will generally have the latest, larger cache memory, which might be performance-enhancing for some applications.
If you are happy with the performance of your current drive configuration and only want to improve reliability, add another drive or two and switch from RAID 0 to one of the other RAID levels that really has redundancy.
Message was edited by: Bill Gehrke
- The 3 Gb/second and 6 Gb/second figures have no bearing on the real performance of a drive; they are just the rating of the interface.
Thanks for your reply, Bill, but my question is still partially unanswered. My last RAID 0 setup with two WD Black 1TB drives gave me OK performance. Tom's Hardware says the minimum read/write for these drives hovers around 54 MB/s and the average around 108 MB/s. So with two in RAID 0 I should be at roughly 100 MB/s minimum to 200 MB/s maximum, no? If this assumption is true, then I know that I need a redundant RAID setup that meets or beats this benchmark to maintain the workflow that I had with a software RAID 0. I am trying to determine the most cost-effective (not cheapest) route. I was looking at RAID 5 solutions as well as RAID 3, independent of and external to my machine. My question is: will an eSATA connection (3/6 Gb/s) become a bottleneck for a small (4-8) drive array? If I had 4 WD drives, as an example, they should be putting out 280-350 MB/s total from what I have seen of Areca and Intel controllers, but this does not come close to the 3 Gb/s speed of the SATA ports. Am I correct in this assumption?
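A back-of-the-envelope check of that bottleneck question can be sketched as follows. The 108 MB/s per-drive figure is the Tom's Hardware average quoted above; the 85% per-extra-drive scaling factor is my assumption, and the ~300 MB/s usable figure reflects the 8b/10b encoding that SATA links use (the link carries 10 bits per data byte).

```python
# SATA uses 8b/10b encoding, so a 3 Gb/s link carries ~300 MB/s of payload.
ESATA_3G_MB_S = 300

def array_read_estimate(per_drive_mb_s, drives, scaling=0.85):
    """Rough striped-array read estimate: each extra drive is assumed
    to add ~85% of a single drive's throughput."""
    return per_drive_mb_s * (1 + scaling * (drives - 1))

for n in (2, 4, 8):
    t = array_read_estimate(108, n)
    capped = min(t, ESATA_3G_MB_S)
    flag = "  <- link-limited" if t > ESATA_3G_MB_S else ""
    print(f"{n} drives: array ~{t:.0f} MB/s, over eSATA 3G ~{capped:.0f} MB/s{flag}")
```

Under these assumptions, two drives fit comfortably within a single eSATA 3G link, but a four-drive array already exceeds it, which matches the intuition that a small external array wants a Multi-Lane (SFF-8088) connection instead.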
Well, you are mixing apples and oranges: you are talking about an Areca or other controller and about external eSATA connections, and these really have nothing in common. If you get an Areca controller with an external connector, it is not eSATA; it is four lanes of SATA data (called Multi-Lane) per SFF-8088 connector. With this you would need at least a four-disk enclosure with a power supply and a Mini SAS 4x (SFF-8088) connector. I do not know of any commercial units sold without disk drives, but maybe other forum members can help with that; generally, the people that sell these units install the disks. You might contact Tekram, as they are an Areca distributor and provide RAID systems, and see if they supply just the enclosures.
There are units with built-in RAID controllers that have eSATA interfaces, but these generally range from poor to fairly good. Most of the time they are also sold with disks installed. I would insist on repeatable test data before buying one of these.
And yes, a single eSATA connector would most likely become a bottleneck somewhere in the process; that is precisely why any good RAID controller for external use does not use eSATA.
3 Gb/s stands for gigabits per second, not gigabytes. You have to divide by 8 to get the equivalent in bytes: 3 Gb/s is 375 MB/s, which is the maximum transfer rate per channel. There are several options you can select for external RAIDs, both SAS and eSATA. The eSATA RAIDs will be slower because they use port multipliers, which basically put multiple eSATA drives on one channel. That allows greater external volumes and cheaper solutions, but you lose speed. The best eSATA port-multiplier controller that supports RAID gives you a transfer rate of about 230 MB/s. That sounds like more than enough for what you are doing. SAS arrays give you far greater speed, depending on how many drives you use. The cost is also greater, mainly due to the controller required. Please take a look at the following link: this unit is totally hands-off; you just flip a switch to RAID 5 and it does everything else. If you want the best speed, then buy the 4-port eSATA controller as well. The controller that comes with it will only give you about 110 MB/s.
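The divide-by-8 arithmetic above can be sketched in a couple of lines. The 8b/10b encoding overhead is an extra detail of SATA links that the post does not mention: the line carries 10 bits per data byte, so the usable payload is a bit lower than the raw divide-by-8 figure.

```python
def gbit_to_mbyte(gbits_per_s, encoding_efficiency=1.0):
    """Convert a link rate in Gb/s (gigabits) to MB/s (megabytes).

    Divide by 8 bits per byte. SATA links use 8b/10b encoding
    (10 line bits per data byte), so pass encoding_efficiency=0.8
    to get the usable payload rate.
    """
    return gbits_per_s * 1000 / 8 * encoding_efficiency

print(gbit_to_mbyte(3))              # 375.0 MB/s: the raw divide-by-8 figure
print(round(gbit_to_mbyte(3, 0.8)))  # ~300 MB/s usable after 8b/10b encoding
```

Either way, a single 3 Gb/s channel tops out in the 300-375 MB/s range, well below what a larger striped array can deliver.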