
Advice on Storage under $800

Jan 10, 2012 1:27 PM

Tags: #cs5 #storage #premiere #advice #raid

Happy New Year, everyone.  While I am new to this forum, I've been using Premiere for over a decade (since Premiere 5.0); back then, people (Avid editors, then Final Cut) used to laugh at me, lol.

 

ANYWAY, like a lot of discussion starters here, I'm planning on building a new system soon.  During the past few days of research, this forum has single-handedly changed my original setup, most notably switching from the Quadro 5000 to the GTX 580, and I am heavily considering a RAID setup.  So special thanks to everyone here for your time in educating us all!  Despite having been a Premiere user for years, I had never consulted this forum, which in hindsight was a tremendous mistake.

 

Case in point is my current system that I built 2.5 years ago: an i7 920, 12 GB RAM, a Quadro 3800 and (don't laugh, but if you do, I'm used to it, see above) three 1 TB 7200 rpm SATA drives, one for programs/OS and the other two holding ALL my projects, music, stock footage, etc.  I edit a lot of 5D footage and After Effects work and have been quite successful with the system under tight deadlines, even winning a Canon contest (every time I'm under these time constraints I get nervous that the system won't hold up, sort of like Han Solo with the Millennium Falcon).

 

So, after that long-winded setup, I am asking the great minds here for advice.  I currently have about $3000 budgeted for a new system.  While I have researched a ton over the past few weeks, much of my knowledge came from books and sites geared more toward general-purpose computing and/or gaming.  This forum is different, and so I am hoping to get some great advice here.

 

After all the other hardware and software costs (Win 7, Production CS5), I have about $800-$1000 I can dedicate to storage.  Any advice on which way I should go?  I know anything would be better than what I have now.  I originally thought I would buy a 120 GB SSD for my OS/programs, a 256 GB SSD for my current projects and a 2 TB drive for backup, etc.  But of course, this forum has taught me better, especially that SSDs aren't optimal for editing HD files AND that RAID is the way to go.

 

The storage system will have to be expandable, because I have already placed a deposit on the Epic-X, so I know I will need tons of storage for those 4K files.

 

Also, I know I may be advised to get a RAID setup; however, I have never done this before.  Can I use any decent drive (i.e. Seagate Barracudas or WD Caviar Blues), or do I need to go for drives that are geared for RAID?  Are there any good online resources for setting up RAIDs?

 

MANY, MANY, MANY THANKS in advance for your time and patience with my (for lack of a better term) case.

 
Replies
  • Jan 10, 2012 2:23 PM   in reply to StarMarc

    Your starting point is quite good, despite it being a 2.5-year-old system. But the fact that you mention the Caviar Blue shows me you need to do some more reading. If you click on the Overview tab at the top of the forum, you will find a number of FAQ articles that may get you started on topics like what disks to use, whether RAIDs will help you, and all kinds of related questions.

     

    Considering that you want to use material from a 5D and from an Epic-X, you had better be prepared for a lot more than $800-1000. Your CPU will be taxed to the extreme, your RAM is not enough, your Quadro 3800 is better exchanged for a more capable card, and you will need a much better disk I/O system.

     

    Given the material you want to edit and what you currently have, I would start looking at a system along these lines:

     

    i7-3930 CPU

    X79 mobo

    GTX 570 video card

    32 GB DDR3-1600+ memory

    Dedicated raid controller

    As many disks as you can afford (not easy with current day prices).

     

    Once you have read more about these issues, we are glad to help when you have specific questions.

     
  • Jan 10, 2012 5:31 PM   in reply to StarMarc

    Why not select a GTX 570? Almost the same kick, but $200 less.

     

    If you have some room in your case, consider buying 2 TB or 3 TB drives; you can purchase an inexpensive eSATA RAID card and build your own RAID internally. RAID cards don't have to be hundreds of dollars. You can buy them from about $40, and most of the time they will do just fine.

     

    If your case is full, I have had good luck with a G-Tech 4 TB external connected via eSATA; once again the card is relatively inexpensive, under $50 for sure.

     

    So you have a few choices. If you have lots of cash, following Harm's recommendations is fine, but if you want to stick to your budget, you can try either of my solutions.

     

    Good Luck

     

    D

     
  • Jan 11, 2012 1:20 AM   in reply to Darren Kelly

    Darren,

     

    I completely disagree with your advice about a $40 RAID card for eSATA. That is a complete waste of money. If you want a software AID0, use the on-board capabilities and accept the load on the CPU. A software RAID card for eSATA is even slower than on-board but costs money, even a meager $40, without any benefit. On the contrary, it has only disadvantages. If you want a parity RAID, a software controller can easily bring the system to its knees because of the significant CPU load. In addition, these software cards always lack sufficient ports for a serious parity RAID.

     

    A G-Tech external over eSATA is dead slow and no faster than a single disk over eSATA, because it is not a full-duplex connection. It may even be slower than a single disk when configured as a RAID5 array with distributed parity. The only reason to use such a device is when a 3 TB single disk is not enough storage.

     

    With file-based ingest, the need for parity RAIDs has increased significantly, and the use of AID0 is very hazardous unless you have a very good backup policy and live by it; but that requires investing in additional disks for those backups, either a number of single backup disks or a NAS. Even with a good daily backup schedule, it still means you can't reuse the memory card from your camera until you have made a backup, and that may entail investing in additional memory cards as well.

     
  • Trevor Dennis
    Jan 11, 2012 2:23 AM   in reply to Harm Millaard

    Harm, I owe heaps and heaps to the advice Bill and yourself provide on this forum, but I am conscious that your advice includes the importance of building a balanced system.  When I first read the OP's list of components, I wondered whether a GTX 570 would make more sense than a GTX 580 with the rest of the components.  I also wondered if a dedicated RAID controller would be overkill. They are not cheap!  Looking at the PPBM5 results, the current fourth- through eighth-placed systems are using on-board RAID0, and while their disk I/O times are a wee bit behind, I am not convinced that the difference is worth >$1000. OK, it is different for configurations other than RAID0, but RAID0 represents by far the most common configuration in the PPBM5 results table.

     
  • Jan 11, 2012 4:09 AM   in reply to StarMarc

    StarMarc,

     

    Regarding decent or enterprise HDs for RAID:

    - Enterprise drives do not support RAID 0, sometimes called "non-RAID" or "AID" (without the R, which stands for redundancy), unless you switch off the typical enterprise feature that drops a drive out of service if it is non-responsive after a configured time, say 7 seconds.

    - Hardware RAID with parity (RAID 5, 6, etc.) is best done with a high-quality controller card (generally $500 and up), and while many manufacturers only certify them with enterprise drives, many users (myself included) are running them very successfully with proven, high-quality 7200 rpm SATA 2 or 3 drives.

     

    Regarding where you sit right now:

    - Any money spent on rotating drives right now carries a VERY high penalty due to the floods in Thailand (and possibly other places in the world) that impacted manufacturing.

    - Epic-X (5K!?) is pretty intense stuff and will push your hardware pretty hard.

    - as Harm pointed out, your current system is not exactly weak

     

    If I were in your shoes, I would grow your current system in ways that will really increase speed now, and in ways that will minimize "throw-away" spending in the future:

    1) Case - if you already have a case with good airflow, lots of drive capacity, and one that's quiet enough for your taste, great; if not, replace it now.

    2) Power supply - like the case, if what you have is powerful enough for a high-end editing system (850 watts or more) and quiet enough for your taste (more watts = less noise at a given duty level), then keep yours; if not, replace it. Note that I use an AX1200 power supply, which is more wattage than required, but it has a large fan and large heat sinks that keep it quiet when the system is working hard.

    3) Video card - a GTX 580 may cost more than other choices, but as RED media is so taxing, I would go with that card or better (I don't know what nVidia is up to, but it does seem to be about time for newer, faster GPUs to come out soon, if they are not already).

    4) RAID controller - choose a good card with 8 or more channels (Areca, Intel) that will serve your needs for several years, and add the battery backup option.

    5) Get a 120 or 128 GB Intel or Crucial current-generation SSD as a boot/programs drive.

    6) Put all three 1 TB drives in a RAID 0 configuration (assuming they match) on the new controller card, or find another matching new or used 1 TB drive and make it RAID 5 (see the sketch after this list).

    7) Buy a 3 TB 7200 rpm drive for backups (drive prices seem to have hurt 1 TB and 2 TB choices more, so I think 3 TB drives are better value, and they also tend to be faster); I like using a hot-swap case-mounted drive bay for backups on motherboard or RAID card ports (I cold boot when adding or removing the backup drive).

    8) Replace your RAM with 24 GB of 1600 MHz RAM that the RAM vendor says will work in your motherboard (what is the vendor/model, BTW?); this is very inexpensive to do now; I just paid $129 for a matched 24 GB set of 1600 sticks for X58 from G.Skill for a new Gigabyte board I'll be building out.

    9) Get a good CPU cooler and overclock your 920 to about 3.7 GHz, or more if you want to spend a lot of time tweaking; while it would start becoming a "throw-away" expense, a 6-core X58 CPU would also be a reasonable choice to make now to get another year or year and a half out of X58 before a totally new CPU/motherboard that would likely blow away the currently available X79 CPU options (i.e. an 8-core die with only 6 cores available for use!).
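
    On point 6, here is a minimal Python sketch (my own illustration, not from any vendor documentation) of the capacity/redundancy trade-off between those two layouts, assuming identical 1 TB drives:

```python
# Rough comparison of the two layouts suggested in point 6.
# Assumes identical 1 TB drives; real formatted capacity will be a bit lower.

def raid0(n_disks, disk_tb):
    """Striping only: all capacity usable, no redundancy."""
    return {"usable_tb": n_disks * disk_tb, "failures_survived": 0}

def raid5(n_disks, disk_tb):
    """Distributed parity: one disk's worth of capacity goes to parity."""
    return {"usable_tb": (n_disks - 1) * disk_tb, "failures_survived": 1}

print("3 x 1TB RAID 0:", raid0(3, 1.0))  # 3.0 TB usable, any single failure loses the array
print("4 x 1TB RAID 5:", raid5(4, 1.0))  # 3.0 TB usable, survives one failed drive
```

    Same usable space either way; the fourth drive buys the ability to keep working through a single drive failure.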

     

    Regarding the justification for the big outlay for a dedicated controller:

    - It gives you the speed of RAID 0 plus data-loss protection: a drive can fail and your work (and data backups) can continue.

    - It is very portable from one PC to another (and PC BIOS settings cannot mess up your RAID or your data).

    - Hardware RAID controllers allow drive SMART data (internal drive-level diagnostics) to be read from each drive without breaking the array.

    - The backup battery prevents data loss in additional ways, even when you already have a system UPS.

    - They generally allow for larger RAID arrays, although some motherboards are coming with massive SATA port counts now.

     

    Jim

     
  • Jan 11, 2012 7:42 AM   in reply to Trevor Dennis

    Trevor,

     

    You are absolutely correct that a large proportion of the top performers use (large) AID0 arrays. I wonder if the reported disk setups are indeed used during everyday editing, because using 6 or 8 disks in an AID0 entails huge risks of losing all your data. I think the question is justified whether these configurations reflect everyday editing or were only used for bragging rights, and this may also include the 3-disk configurations. Let me reiterate that the purpose of the PPBM5 benchmark is not bragging rights, but stable, reliable, everyday editing configurations that work.

     

    To go back in history, talking about the Seagate 7200.11: just the other day the last of my 7 of these disks died, so I have had a failure rate of 100% within three years, and I now notice that Seagate has reduced the warranty period to only one year for new disks.

     

    I'll come back to your RAID question, but first look at reply #1, where I suggested the 570, just as you thought. The 580 seems overkill.

     

    Raids are a difficult topic. For many it is unknown territory and yes, the cost is huge for a good raid controller. Why would people still consider it? IMO there are two major reasons for investing in an expensive card and the necessary disks and two minor ones.

     

    Major advantages are the speed and the protection against data loss.

     

    Speed is only relevant for editors who use multi-cam, multiple tracks, or high-quality, high-resolution source material. If one only uses medium-quality (4:2:0) material like AVCHD, or a few tracks, speed is not the overriding factor. But if you use 4:2:2 material at 50 Mbps or more, or Red/Epic 4K or 5K material, with multiple tracks and multi-cam, then speed becomes of the essence.
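
    As a rough illustration of when raw bitrate starts to matter, here is a small Python sketch converting per-track bitrates into the sustained read rate a timeline demands. The bitrates are my own illustrative assumptions, and this is only a lower bound; scrubbing, conforming, previews and headroom all add to it:

```python
# Lower-bound sustained read rate for playing several streams at once.
# Per-track bitrates below are illustrative assumptions, not measured figures.

def required_mb_per_sec(tracks, mbps_per_track):
    """Convert total megabits per second into megabytes per second."""
    return tracks * mbps_per_track / 8

print(required_mb_per_sec(2, 24))    # ~6 MB/s  : a couple of AVCHD 4:2:0 tracks
print(required_mb_per_sec(8, 50))    # ~50 MB/s : 8-camera multicam of 50 Mbps 4:2:2
print(required_mb_per_sec(4, 300))   # ~150 MB/s: four streams of high-bitrate 4K/5K material
```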

     

    The second aspect is protection against data loss. Protection can be bought with diligent backup policies and adherence to them, but that costs production time. You can't afford to postpone a backup until tonight when working with tape-less workflows; you have to create a backup immediately after ingest, and that costs time, and time is $$$. A parity RAID array protects you from disk failure, so you can continue working even if one or more drives fail, and you can make your backup during the night without losing production time.

     

    A minor advantage is the availability of extra SATA connections when you run out of them on the standard motherboard; another is that you do not experience the performance degradation that a single disk shows as it fills up.

     

    Whether these arguments are worth the cost of a dedicated raid controller, everybody has to decide, but in my case the clear answer is YES.

     

    Let's say that each backup takes 15 minutes after you ingest new material and your hourly rate is $60; each backup then costs you at least $15. Not even counting restore times, you will have earned back your $900 RAID controller in 60 days, you will have better response times during your editing sessions, and you will have peace of mind that nothing can go wrong, apart from a bolt from heaven, terrorist attacks, hurricanes, flooding and other circumstances that are excluded from your insurance anyway.
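
    For anyone who wants to plug in their own numbers, here is a minimal Python sketch of that payback arithmetic; the one-ingest-per-working-day figure is my own assumption:

```python
# Payback period for a dedicated raid controller, using the figures above.

hourly_rate = 60.0        # $ per hour
backup_minutes = 15       # editing time an immediate post-ingest backup would cost
controller_cost = 900.0   # $ for the raid controller
backups_per_day = 1       # assumption: one ingest/backup cycle per working day

cost_per_backup = hourly_rate * backup_minutes / 60            # $15.00
days_to_payback = controller_cost / (cost_per_backup * backups_per_day)
print(cost_per_backup, days_to_payback)                        # 15.0 60.0
```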

     
  • Jan 11, 2012 7:05 AM   in reply to StarMarc

    Marc,

     

    Have a look at my reply # 8 to Trevor. It may help you decide.

     

    I would also like to draw your attention to a new article I wrote (it was about time, after nearly two years!) on the Adobe Forums: What PC to build? An update...

     
  • Jan 12, 2012 7:32 AM   in reply to StarMarc

    Marc,

     

    1. The 1155 platform is limited to 16 available PCIe lanes. No more, no less. Manufacturers add various chips to expand that number, but conveniently forget to tell you that those extra PCIe lanes are shared. It boils down to this: even with extra chips on the mobo to artificially increase the number of PCIe lanes, it does no good, because all the traffic over the PCIe bus is ultimately squeezed through 16 lanes, like a 6-lane highway that at some point is reduced to 3 lanes; you know, the spot where traffic jams occur every day. It is a simple architectural limitation by Intel. The 2011 platform, OTOH, has 40 PCIe lanes available, so it is much better suited to RAID controllers and other PCIe cards.
    2. You have to distinguish between mirrored arrays (RAID1) and parity arrays (RAID3/5/6), even when they are striped (RAID10 or RAID30/50/60). When a disk fails in a mirrored array, it is simply a matter of copying the data from the mirror to the replacement disk. That is pretty fast, since there is nothing to do other than a straight copy. With parity RAIDs it is a different matter altogether. With a distributed-parity RAID, like RAID 5 or 6, rebuilding the array entails reading from and writing to all (n) disks, not just to copy data but also to regenerate the parity info. This can take up to several hours, during which time you will notice sluggish disk performance, but you can continue working. On an array with dedicated parity the work is more or less the same, unless you are lucky enough that it was the parity disk that failed; in that case only one disk needs to be rewritten, based on reading the (n-1) data disks, and that is pretty quick, although slower than the copy of a mirrored RAID because of the parity calculations. If one of the data disks failed, it means reading from the surviving disks and rewriting the replacement, so the rebuild of RAID3 is still faster than RAID5/6. Either way, you will notice sluggishness on a degraded array. (A rough rebuild-time sketch follows this list.)
    3. WD Black is not advised in parity RAIDs because of the WD TLER limitation. Personally I'm not fond of Seagate, because in the past years I had 7 out of 7 Seagates fail. A 100% success rate. Admittedly they were from the notorious 7200.11 batch, but still, 5 had already been exchanged under warranty, so one could say I had 12 failures out of 7 drives; that is not very good. The Hitachi 7K3000 series, on the other hand, is often on the approved disk lists from RAID controller manufacturers, and lastly I have had great results with the Samsung Spinpoint F1 and F3 series in parity RAIDs. Good RAID controllers all use PCIe 8x slots, which makes them not really suitable for the 1155 platform, as noted under 1. Good brands are LSI and Areca, especially the incrementally improved Areca ARC-1882iX-16/24, while we are still waiting for the new PCIe 3.0 Areca line, expected around Q2/2012.
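
    Point 2 can be put into rough numbers. A minimal Python sketch; the 2 TB capacity, the ~120 MB/s sustained per-disk rate and the parity overhead factor are all my own assumptions, and real rebuild times vary a lot with controller load and ongoing editing:

```python
# Very rough rebuild-time estimates for the cases described in point 2.
# A rebuild has to rewrite (at least) one disk's worth of data, so the floor
# is capacity / sustained write rate; parity math and concurrent I/O add overhead.

def rebuild_hours(disk_tb, mb_per_sec, overhead=1.0):
    """Hours to rewrite one failed disk, scaled by an overhead factor."""
    seconds = disk_tb * 1_000_000 / mb_per_sec   # 1 TB ~ 1,000,000 MB
    return round(seconds * overhead / 3600, 1)

print(rebuild_hours(2, 120))                 # ~4.6 h : mirror (RAID1), straight copy, idle array
print(rebuild_hours(2, 120, overhead=1.8))   # ~8.3 h : parity rebuild with an assumed 1.8x overhead
```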

     

    Areca has one huge disadvantage, cost, but several benefits: it is the Rolls-Royce among RAID controllers, the only one to offer RAID3 capability, and the only one that allows expanding the cache memory up to 4 GB, where others are limited to 512 MB.

     

    A short history: when I first started using RAIDs, back in the dark ages, I started out with a two-channel Promise card. It was cheap but, as I soon realized, also crap. So I upgraded to a four-channel 3Ware card, but that proved too limiting, with only 4 channels. So I upgraded to an Areca ARC-1680iX-12, thinking 12 channels would suffice. Now I'm awaiting the arrival of the new PCIe 3.0 line of controllers, but will opt for the 24-port model (plus 4/8 ports over SFF-8088).

     
  • Jan 12, 2012 7:36 AM   in reply to StarMarc

    Marc,

     

    Regarding your questions:

    1.) Can you list some "proven, high-quality" drives? Would the Barracudas apply? The WD Blacks?

     

    Sorry, but not really. I can only say stay far away from Seagates that end in .11; they are definitely flawed. I personally use WD Blacks and RE3s, and Hitachi 7K1000.C and 7K3000 drives.

     

    2.) When you mention "choose a good 8 or more channel card," do you mean PCIe x8? (Sorry, I'm still learning my way through this.)

     

    No, I meant connections for 8 or more drives.

     

    3.) I'm assuming SSDs in a RAID configuration still aren't optimal for video, right?

     

    Correct

     

    Jim

     
  • Jan 13, 2012 10:22 PM   in reply to StarMarc

    I can give some times for RAID initializations and rebuilds.

    I have an Areca 1880ix-12 with standard 1GB memory on the card, running an 8-bay tower of 2TB WD RE-4 disks. (WD2003FYYS)

     

    It took just under 40 hours to build a 7-disk RAID3 (12TB) with one disk set as a hot-spare. I never tested a rebuild on that array, and I also never built an 8-disk RAID3, because I was afraid it would take even longer than the 40 hours the 7-disk build took and I didn't have that much free time. Sustained data throughput speeds were 759MB/sec write, 696MB/sec read. This is with the cache disabled so as to represent actual disk speed.

     

    It took 4 hours 54 minutes to build an 8-disk RAID6 (12TB). Sustained data throughput speeds are 816MB/sec write, 714MB/sec read, cache disabled.

    After loading all my media (5.75TB, almost half capacity) onto the RAID, I pulled a disk to simulate a failure, waited a half hour, then reinserted it to force a complete rebuild, which took 7 hours 58 minutes.

    During the rebuild, I continued to edit a movie project I'm working on (shot on 5DMkII, 7D and D7000, so same H.264 stuff you're using) with no problems. I also ran another speed test while it was rebuilding to see how much speed I was losing. The results were 488MB/sec write, 191MB/sec read with cache off, and since I was editing with the cache on, I ran that test, too: 482MB/sec write, 3535MB/sec read using a 16GB file to perform the test.

     

    I was surprised that the RAID6 was faster for me than the RAID3, given the data/parity ratio. Seven disks in RAID3 means six disks stripe data and one writes parity. Eight disks in RAID6 gives the same 12TB data size, but the stripe goes across all eight disks along with the parity data, and allows two disks to fail, similar to my RAID3 with a hot-spare. (The hot-spare would take over for the failed disk and rebuild, giving the same two-disk failure protection before data is lost, provided another disk didn't fail during the rebuild on RAID3.) I still wonder how it would have compared if I'd used all eight disks in RAID3 instead of saving one for a hot-spare.
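
    For what it's worth, here is a minimal Python sketch of the theoretical ceilings behind that comparison, assuming ~138 MB/s sustained per RE-4 class disk; controller overhead, stripe size and caching explain why measured figures land a bit below (or, with cache on, far above) these numbers:

```python
# Theoretical usable capacity and streaming ceiling for the two layouts above.
# Assumes 2 TB disks at ~138 MB/s sustained each; real arrays land below this.

DISK_TB = 2
DISK_MB_S = 138

def layout(n_disks, parity_disks, hot_spares=0):
    data_disks = n_disks - parity_disks - hot_spares
    return {
        "usable_tb": data_disks * DISK_TB,
        "streaming_ceiling_mb_s": data_disks * DISK_MB_S,
        # a hot-spare only covers a second failure once its rebuild completes
        "failures_tolerated": parity_disks + hot_spares,
    }

print("7-disk RAID3 + hot-spare:", layout(8, 1, hot_spares=1))  # 12 TB, ~828 MB/s ceiling
print("8-disk RAID6:            ", layout(8, 2))                # 12 TB, ~828 MB/s ceiling
```

    On paper both layouts have six data disks' worth of capacity and streaming bandwidth, which is why the measured 759/696 and 816/714 MB/s figures come out so close; the difference is down to the controller, not the geometry.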

     

    All this says is that there are different ways to skin a cat. After I finish my current projects and I have a week or two to spare, I'll do more tests with RAID3, but for now I feel very happy with a RAID6 that gives awesome performance and redundancy. Hopefully, this gives you something to work with as you plan and implement your RAID. Good luck and happy editing!

     
  • Trevor Dennis
    Jan 14, 2012 1:59 AM   in reply to wonderspark

    Those are impressive times. So does that one array handle everything outside of the OS, or how do you arrange things with such a system?

     
  • Jan 14, 2012 1:49 PM   in reply to Trevor Dennis

    Currently:

    12TB RAID6 - Media, Exports, Previews

    2TB AID0 - Cache scratch

    Externals - backups and clone

     

    RAID6 is inside a Sans Digital TR8X, connected via 2x mini-SAS cables.

     

    I have been tinkering with the four remaining internal drives. They are the original 7200rpm 640GB drive with the OS and programs, and three 7200rpm 1TB drives.

     

    For a long time, I had the 3x1TB in AID0 for scratch, which got 330MB/sec read and write. Recently, I wanted one more disk for backups (too many projects!) and decided to try out using two in AID0 for cache scratch, and putting preview files onto the RAID6 in another folder. The 2x1TB AID0 scratch gets 215MB/sec, and performance seems the same when editing.
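
    Those scratch numbers track simple striping arithmetic. A tiny Python sketch; the ~110 MB/s per-disk figure is my own inference from the speeds quoted above, not a spec:

```python
# Striping N similar disks gives roughly N times one disk's sustained rate.
# ~110 MB/s per 1TB disk is inferred from the 330 and 215 MB/s figures above.

per_disk_mb_s = 110
for n in (2, 3):
    print(f"{n}-disk AID0 ~ {n * per_disk_mb_s} MB/s")   # ~220 and ~330 MB/s
```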

     

    For backups, I have a bunch of single disks and external enclosures that I attach via cable or a Voyager Q for the bare drives. Well, and that one 1TB internal that I'm temporarily using for one active project. I got a couple little WD My Passport 1TB drives for third backups of very important projects, and I also have a clone of my OS.

     

    I could probably put all scratch, previews and media on the RAID6. I may test that out next, since it *is* pretty fast. I chose the internal disks for cache scratch since I have smaller write blocks there. On the big array, the block size is the max, which I think is 128k, whereas the smaller internal array is a default size like 32k or something. Since many of those media cache files are only 4k, and the rest are mostly under 200MB with two 1.4GB and seven 3.3GB files, it seems logical to keep them on a different array that takes advantage of smaller files. The preview files are all over the place in size, from 4k to over 3GB as well, but I tend to delete them quite often, and not even create previews most of the time, since my work has been playing real-time without any rendering. Most of my work is DSLR footage right now, and only one or two effects laid over them, like color correction and such.

     

    I also have 32GB of RAM, which may help. The end result is smooth and painless!

     
  • Jan 15, 2012 1:19 AM   in reply to StarMarc

    Let me be sure you're on the right path.

     

    You have indicated getting an ARC-1213-4i, which includes one SFF-8087 connector. That is good for four disks. You mentioned building a 4-member RAID with an extra disk for a hot-spare. That card won't help you there, because 4+1=5 disks, so you'd at least want the ARC-1223-8i to run more than 4 disks. You get one disk per port, thus a 4-port card runs four disks, 8-port runs eight, and so on.

     

    If you're going to put those disks in an EXTERNAL enclosure, you'd be better off getting the ARC-1223-8x instead. (x = external, i = internal connectors for Areca cards, and ix = both connector types on the card.) The reason is that many vendors include SFF-8088 cables in the packaging of their enclosures, and SFF-8088 is the connector type used by those external enclosures. (SFF-8088 is the external connector, SFF-8087 the internal one.) If you're planning to buy an internal-port card, you'll need special cables that cost $50-60, and then you have to run them out an empty slot or other hole in your computer to get out to the RAID box. The only reason I can think of to do this is to take advantage of a card with more than 8 ports, like the ARC-1880/1882-ix-12/16/24 cards. (I have an 1880, but Areca now has a newer card, the 1882 series.) Furthermore, you don't want those mini-SAS cables to be longer than one meter, or you could have issues with data (so I've heard).

     

    You can get the 8-port card all external, but beyond that you get internal ports and would then run them out via those special cables. I did this with my setup because I wanted to be able to take advantage of the larger memory (up to 4GB, though I still have the standard 1GB) and to be able to attach up to 16 disks with my 1880ix-12. (This is a bit confusing, but I emailed back and forth with the techs at Areca in Taiwan, who assured me that despite the card being labeled a 12-port card, it runs with all three internal and one external connections used as 16 discrete ports. I'm not 100% convinced yet, as I haven't actually hooked up and tested speeds with 16 disks.)

     

    Hope I haven't lost you, yet.

     

    To answer your latest questions:

    1. The results should scale somewhat with the disk size, but also consider the sustained data throughput of the disks you buy. I have 2TB WD RE-4 which have a max throughput of 138MB/sec. The 500GB version is lower, at 128MB/sec, so add some time to the build/rebuild there, and also reduce the total speed you'll get from the RAID by 10MB/sec per disk as well. Considering you want five of those 500GB disks, you need at least an 8-port card, regardless of how you set up any RAID3/5/6 with or without hot-spares. What *I* would do with 5x500GB disks is try a RAID3 with all five if you have time, and test it. Then try the same with RAID6, and test it as well. If you have a *lot* of time, you'll also want to test the rebuild times by pulling one disk out of the array to force a failure/rebuild and see how long that takes, as well as test speeds during the rebuilds.

    2. You'll be limited not by SATA, but by disk transfer rates and things like where you put the RAID card. The Areca card you want is an x8-lane PCIe 2.0 card, so you'll want to put it in a slot that has x8 lanes (or more) at PCIe 2.0. If you stick it in a PCIe 1.0 slot, or say an x1 or x4 lane slot, you'll potentially cripple your speeds that way. Put it this way: I tested all eight of my disks in AID0 and got a sustained 1100MB/second in both read and write in those same disk tests I listed earlier. If you do the math, multiplying 8 disks by their max speed of 138MB/second, you can see that's as fast as it can possibly be in that configuration: 138 x 8 = 1104. So forget about SATA 2 and all that. This is different, because you're combining speeds by writing to many disks at once. Think in terms of PCIe lanes and their speed capabilities, combined with disk transfer rates multiplied together (see the sketch after this list).

    3. I have done that in the past and not *noticed* any difference. There could be a difference, but I wasn't able to quantify or identify it. Right now, I have my previews on my RAID6 and my cache files on another internal AID0. This relates to my theory about data block sizes. My RAID6 has the biggest block size it will do, which I think is 128k. The internal one is smaller, maybe 32k. I figured that since so many cache files are only 4k, it would make more sense to send those smaller files to that array.
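
    On point 2, here is a small Python sketch of that arithmetic; the per-slot PCIe figures are nominal per-direction numbers, not measurements:

```python
# Aggregate throughput of the striped disks vs. the slot the controller sits in.
# Slot bandwidths are nominal: PCIe 1.0 ~250 MB/s per lane, PCIe 2.0 ~500 MB/s per lane.

disks, per_disk_mb_s = 8, 138
array_mb_s = disks * per_disk_mb_s
print(array_mb_s)                       # 1104, matching the ~1100 MB/s measured

slots = {"PCIe 1.0 x4": 4 * 250, "PCIe 2.0 x4": 4 * 500, "PCIe 2.0 x8": 8 * 500}
for slot, slot_mb_s in slots.items():
    verdict = "ok" if slot_mb_s > array_mb_s else "bottleneck"
    print(slot, slot_mb_s, verdict)
```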

     
  • Jan 16, 2012 1:57 PM   in reply to StarMarc

    A disk set aside to be used as a spare might be called a "spare," whereas if it's in the RAID enclosure and assigned by the RAID card as a "hot-spare," it will not read or write any data until a disk in the array fails. You are correct on that now. I've seen it referred to as "7+1" in the example of 8 disks in a box, with 7 in a RAID and one assigned as a hot-spare.

     

    If you really can't stretch another one or two hundred dollars for an 8-port card, then you'd be better off using four disks in your RAID and setting aside that fifth disk as a spare. You'll most likely be right there if/when the alarm sounds for a failed disk, and you can swap it quickly and easily. Any decent RAID will tell you which disk has failed. It will be fairly obvious from the lights on the front, and the Areca management software tells you as well. You'll want to get to know your RAID box, and which slots are represented by which lights on the front as well as by the management software.

     

    The thing about real-world experiences and benchmarks is that they go together. I was having some stuttery playback prior to building up my system, and I did my best to pinpoint where the issues were and what solved them as I went along. I upgraded my RAM from 16GB to 32GB, and that finally stopped my page-out / swap-file issues. I also ran disk speed tests at every point to see how they reflected what I was seeing. It's hard to quantify exactly which changes make real-world improvements, and I feel that those tests and benchmarks help identify what is working and what is making no difference. I changed the RAM and the RAID pretty close together in time, and I forget which I did first now, but I recall that going from 3-disk AID0 to 7-disk RAID3 made my DSLR playback nice and smooth. I also know that I had a couple of system hangs, which I related to the page-outs / swap-file usage, and that was solved by 32GB of RAM. Finally, I moved from a quad-core CPU to a 6-core, and to be honest, it was the least noticeable real-world improvement, although it is noticeable when rendering in After Effects. I have the CPU allocated to use 8 threads for Pr/AE/En and the remaining 4 threads for other processes. I have had no system hangs or issues since these three major changes, so all in all, they fixed me up, and I feel pretty confident about what helped where.

     

    In my case, I needed the larger RAID anyway. I have multiple projects that run concurrently, and can't be swapping data around all the time. I currently edit two feature-length movies and several smaller commercial projects day-to-day, and have only half my 12TB RAID filled, which is most useful. It also keeps me from spreading myself *too* thin, as one guy can only do so much work at a time! I find myself tempted to get another box to throw more disks into, but I really don't need it just yet. I'm glad I have the ability, however. This is why I hope you're not limiting yourself too much with a 4-port card.

     

    I suggest you run tests and take note of your issues with your current setup, and document each upgrade and how it changed both tests/benchmarks and observable results.

     
  • Jan 18, 2012 9:25 PM   in reply to StarMarc

    I don't know that I could figure out a target speed for sustained transfer. I think there are too many variables at play to make it a clean calculation. I just know that when I was editing a 104-minute movie shot on P2, a simple 3-disk AID0 was fine. Then, when I started this similar-length movie shot on 5D, it started to choke. I seem to recall some slight hiccups when I pulled a disk from the RAID to force a rebuild during tests. My sustained speeds took a hit, down to 488MB/sec write and 191MB/sec read with the cache off, and I was still able to edit just fine, but I want to say there were a time or two when playback stuttered slightly. That would make sense, so perhaps a good target is 500MB/sec sustained reads and writes, if your timeline is like mine currently: 5 video and 10 audio tracks with effects and such, none of it rendered. (I like not having to render things!)

     

    I needed more space anyway, so a large and fast array was an excellent choice for me. I looked closely at my RAM stats and saw the page outs and swaps going on, so I knew that 32GB was another great move in my case. I tested renders using various core/thread settings in After Effects, and realized that a 6-core would speed things along as well. All these things combined not only made editing this 5D movie a joy again, but allow for expansion when I finally get some RED footage to work with. I hope and believe it's good enough for that.

     

    I know you want a clearer answer with a hard number, but I don't think I can provide it. For that, I apologize. Based on your similar need to edit RED in the future, I can't help but think you'll need a robust RAID to make that experience smooth and confidence-instilling. I saw someone say you don't need that much power to edit RED, but maybe they only edit straight footage without many effects or layers. I don't know. I throw effects and layers around like mad, and I don't have to render anything at all. I don't even have a CUDA card in my system right now either, because my 5870 seems to work better than my GTX285 did. (I'm on a Mac and have limited choices.)

     

    You will be happy with an 8 or more port card over only 4, this I can assure you.

     
