
    RAID 0 array of SSDs: not a bad idea, but not easy

    hpmoon

      I thought you all might find this interesting, and in any case I'd enjoy hearing feedback; feel free to audit these conclusions against your own expertise with PC systems.


      I'm pushing this technology to its limits at all times:  grinding through highly compressed 28 Mbps AVCHD 1080p footage while applying numerous layers of GPU-accelerated effects across multiple tracks and camera angles inside the Adobe Premiere Pro CS6 workflow.  One of the critical ingredients to surviving in that context is disk throughput (complementing CPU power, which for me is an Ivy Bridge 3770K overclocked to 4.4 GHz), and this is one area where a RAID 0 array of SSDs actually delivers meaningful performance gains.  For people playing games, not so much.
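
      To put rough numbers on why disk throughput matters here:  the raw playback bitrate is modest, but scrubbing several angles at once (plus preview and cache traffic) multiplies it quickly.  Here's a back-of-envelope sketch in Python; the stream count and scrub multiplier are illustrative assumptions, not measurements:

        # Back-of-envelope bandwidth math for the workload above.  The 28 Mbps
        # bitrate is from this post; the stream count and scrub multiplier are
        # assumed for illustration.
        BITRATE_MBPS = 28      # one AVCHD 1080p stream
        STREAMS = 4            # assumed: a four-angle multicam edit
        SCRUB_FACTOR = 16      # assumed: shuttling/conforming at many-x real time

        per_stream = BITRATE_MBPS / 8    # Mbps -> MB/s
        realtime = per_stream * STREAMS  # straight multicam playback
        burst = realtime * SCRUB_FACTOR  # scrubbing worst case
        print(f"real-time playback: {realtime:.0f} MB/s")
        print(f"heavy scrubbing:    {burst:.0f} MB/s")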


      The dilemma, taking my Gigabyte GA-Z77X-UD5H motherboard as a typical example, was this:  I wanted my RAID 0 SSD array (two PNY XLR8s totaling half a terabyte) to coexist with my Vertex 3 SSD boot disk.  The typical Z77 motherboard has only two Intel SATA-III ports, and those are the only ones (to my knowledge) that support TRIM; the supplementary Marvell SATA-III ports do not.  Since TRIM-driven garbage collection (the kind that supplements any modern SSD's own internal garbage collection) is most critical on a boot drive and nearly useless on a mostly read-only drive, moving the Vertex 3 boot disk over to the Marvell ports wasn't an option.  In the meantime, I did try the RAID 0 array on those Marvell SATA-III ports, which are supposed to provide theoretical headroom of 1.2 GB/s (i.e., 600 MB/s x 2), and got a laughable read rate of roughly 350 MB/s, from a RAID 0 SSD array!  Each PNY XLR8 is rated at around 500 MB/s reads on its own!
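
      As an aside, if you want to confirm that Windows is at least issuing TRIM at the OS level, the standard check is fsutil's DisableDeleteNotify flag.  A minimal Python wrapper is below; note that this only confirms the OS side, and whether a given controller driver (such as the Marvell one) actually forwards TRIM to the drive is a separate question:

        import subprocess

        # Query Windows' OS-level TRIM switch (may require an elevated prompt).
        # "DisableDeleteNotify = 0" means Windows issues TRIM on file deletes.
        # This does NOT prove the controller driver passes TRIM through to the SSD.
        out = subprocess.check_output(
            ["fsutil", "behavior", "query", "DisableDeleteNotify"],
            text=True,
        )
        print(out.strip())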


      Bottom line, the optimal compromise was this:  move the boot SSD onto an Intel SATA-II port, accepting its ~300 MB/s ceiling (the drive normally managed around 500 MB/s on a SATA-III port), and assign the RAID 0 array to the two native Intel SATA-III ports.  No joke:  in that configuration, ATTO Disk Benchmark reported read speeds over 1.2 GB/s.  That's a godsend when you're constantly pulling multiple compressed HD video source files (many of them over a gigabyte each) into a complex video editing workflow.
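
      If you want a sanity check outside of ATTO, a crude sequential-read timer is easy to sketch.  This is an assumption-laden stand-in rather than a proper benchmark:  the path is a placeholder, and the test file must be much larger than RAM or the OS cache will inflate the result:

        import os
        import time

        PATH = r"D:\testfile.bin"  # placeholder: a large file on the RAID 0 volume
        CHUNK = 8 * 1024 * 1024    # 8 MB reads, in the ballpark of ATTO's larger sizes

        size = os.path.getsize(PATH)
        start = time.time()
        with open(PATH, "rb") as f:
            while f.read(CHUNK):   # read until EOF
                pass
        elapsed = time.time() - start
        print(f"{size / elapsed / 1e6:.0f} MB/s over {size / 1e9:.1f} GB")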


      I haven't found conclusive proof, but I suspect that the motherboard's Marvell SATA-III controller is permanently attached to a single PCI-E x1 lane.  A PCIe 2.0 x1 lane tops out around 500 MB/s raw, and less after protocol overhead, which is nowhere near enough to satisfy even one 600 MB/s SATA-III port, let alone the aggregate of several, and it lines up suspiciously well with the ~350 MB/s I measured.  Does that sound right?  And who would engineer things this way, calling a port SATA-III when the controller behind it can't even sustain SATA-II speeds across two drives?
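
      The arithmetic behind that suspicion, assuming the Marvell controller really does sit on a PCIe 2.0 x1 link (the link width and generation are my assumption, not something I've confirmed from the board's block diagram):

        # PCIe 2.0 signals at 5 GT/s with 8b/10b encoding (8 data bits per
        # 10 bits on the wire), so one lane in one direction caps out at:
        line_rate = 5e9                       # transfers per second
        raw = line_rate * (8 / 10) / 8 / 1e6  # -> 500 MB/s of raw bytes
        usable = raw * 0.8                    # minus ~20% protocol overhead (rough)
        print(f"raw x1 ceiling:  {raw:.0f} MB/s")
        print(f"usable payload: ~{usable:.0f} MB/s")  # close to the ~350 MB/s observed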


      I also suspect that this is fairly typical of today's best Ivy Bridge motherboards.  Thoughts?  We're all obviously trying to avoid the clutter of a dedicated RAID card, which would probably underperform the native Intel controller anyway.