Have a look at these guides:
The picture above is linked to in the PC Buying Guide. The storage guide is the other one you referred to.
So far there is nothing I would do differently. My only doubt is whether it would have been wise to install hot-swappable drive cages instead of fixed ones. My current solution has push-pull fans for each drive cage, and I would lose that push fan with hot-swappable cages. For now my feeling is that I'd rather use full cooling capability and forfeit hot-swappable bays, since with my raid30 I don't want to exchange disks anyway.
I couldn't get all the front side disks on the picture, but this should give you an idea.
The rather large Noctua cooler has push-pull fans on both sides. The fan at the bottom is mounted on the PSU to create extra airflow over the graphics card, because the Areca card right under the graphics card disrupts the airflow. The red SATA cable visible at the top of the picture leads to three other disks, right above the mobo.
Hope this helps, Glenn
And just to the right of frame (out of shot) you can see my neighbour's house which houses the 10K three-phase power supply.
I definitely no longer feel worthy...
Like I said, "not the most common setup", but why should you feel unworthy? Not everybody has an extravagant system like this, and my system is only, as I said before, "middle of the road" to start with. There are many more powerful configurations conceivable, but it ultimately boils down to the basic question:
What will be, given your workflow, the best system for your money? That determines what is best for you, not the fact that there are even more powerful systems out there and will be in the future. OK, I admit I did not like having only the third-ranked system on the PassMark test, which means there are two systems even faster than mine on that specific benchmark, but what does that contribute to the bottom line?
What I hope to achieve by sharing my experiences and showing what I did is that you can make better-founded decisions on your configuration and learn from my mistakes, so in the end you will have a better performing system.
There is one thing I have been thinking about to change in my system, but realistically, it is not worth the money at the moment for me.
That is exchanging some drive cages, so I can keep my 14 disks in front and 3 in the back, but free up another 5.25" slot. I don't want to give up either of the two BR burners. That extra free slot could be used to house 4 x 2.5" SSDs in an Addonics cage, making them hot-swappable and giving me an additional 4 x 256 GB of storage space. However, at current SSD prices this is merely dreaming, although I have the connections for it on my Areca controller.
You asked what I would change, well, this is maybe for the future.
The really neat thing about Harm's equipment is that most of the components have little LED's. He no longer has to string all those holiday lights on his house - he just opens the curtains!
An outstanding rig without doubt. But there again I guess you operate a fairly heavy workflow, HD output and the rest... I think this has to be seen in context to the work you do and results you demand. Yes, it is about pushing the limits of your resources, simply defined in terms of cash and technology.
Personally, I got what I could afford, or rather what I felt was an amount justified by the use I would get out of the system. My budget was about £2,000, which included hardware AND software, in order to start me off producing videos (again) on a second-hand Sony DSR-170P.
So there should be no feeling of inadequacy if you have the basic stuff, as long as it does the job and you are getting results. But you do get sucked in... I know I will be moving on to HD, and then the fun starts. Better has always demanded "bigger" or more expensive resources.
The ASUS P6T with the i7 920 is (currently) a good place to start. Even now I am upgrading my 3 GB of 1600 MHz RAM to 9 GB (in anticipation of HD to come). When I can get around to it, I will change the HDD setup from the precarious RAID 0 I currently use to RAID 10, which means another pair of 1 TB HDDs. And being in the UK means paying at least 20-30% more for my stuff compared to the US.
I have to disappoint you: this is not a LAN-party machine, much too heavy for that, so there are no LEDs installed apart from the power LED and the disk activity LED.
The major advantage of the internet is that it allows you to shop worldwide. You are no longer limited to the UK, where prices are often extravagant in comparison to the US or parts of Europe. For instance, digital still cameras are generally attractively priced in France, media in Germany and hard disks in the Netherlands. Video equipment is attractively priced in the US and New Zealand, and the latter has specific advantages for UK buyers, meaning no import duties or shipping costs. Just shop around and you may well find attractive deals in other countries. However, in the current crisis there are certainly deals to be found in the UK, which may even make it worthwhile to travel to the UK for certain equipment, instead of buying it here in the Netherlands.
Point made, and taken. But for every purchase made OUTSIDE the EU a declaration is required and duty is payable. In addition, have you tried to ship faulty goods back to wherever? Here in the UK, no questions, no arguments: items are replaced, often by return of post. Better the devil you know.
I will give it a better look after work.
Now you need those HRD that you posted.
Those drives look interesting.
I will save the pics so I can reference them.
Harm - very useful information. I have a couple of questions for you:
- What Seagates are you using? The ES drives or the run-of-the-mill ones? Did you have any trouble setting these drives up in RAID on the Areca card?
- [I hate asking this but I will try] Why RAID 30 and not 50 for example or better not RAID 10?
- Are you "raiding" everything on the Areca? or is your RAID 0 off the mobo?
- I am also trying to build using the same case. I have contemplated fitting in something like the Supermicro CSE-M35T-1B Hot-Swappable SATA HDD Enclosure (actually two or three of these), but what is hot swap really good for? Sacrifice great cooling and silence for a very rare failure? Especially since this is not a server (not running 24/7). So if a drive goes, you can still manage with cables, or just shut down, deal with the drive and then rebuild... Unless, I suppose, you are in the middle of a long render, where hot swap can come in handy. Maybe I just answered myself: if you are building a rendering machine, then hot swap makes more sense. Any thoughts here? Why do you desire a "hotswap"? (now that sounds like an ad for a strong alcoholic beverage)
1. If you look at the picture of my system, you will see that all the storage disks are standard Samsung Spinpoint F1's. When setting up this system I had the choice between WD Caviar Black, Seagate, Hitachi or Samsung, at least those are the common brands. Based on test results available on www.storagereview.com, that choice was effectively limited to Samsung or WD Caviar Black. Seagates were too slow and noisy, and I have had personal experience with Seagates (7 x 1.5 TB) in my NAS, where 5 out of 7 failed within two months. That failure rate precluded Seagates. The Hitachis also did not score as high in the tests. Since I needed 16 disks, price was ultimately the deciding factor. Then it was simply a matter of plugging these disks into the multi-lane connectors on the Areca and setting up the array during the boot process. The most cumbersome part was waiting while the disks were formatted; it took hours.
2. When deciding on the array, I chose raid3 because it is generally advised for video editing and has a low overhead if the array needs to be rebuilt. Raid3 is better at sequential reads than raid5, and in video editing that is a distinct advantage. By striping two raid3's you get a raid30, in this case two raid3's of 6 disks each, striped to a raid30. Raid10 never entered my mind because of the extremely high cost: a raid10 would have given me only 6 TB of storage, whereas I now have 10 TB effectively. And what are the chances of two disks in the same raid3 failing at the same time? I consider that chance so small it was not worth it for me. Now, when I have a drive failure, I have to identify which disk it is, exchange it, define the new disk as a hot-spare, then expand the degraded array with the newly defined hot spare, and the array will be rebuilt in the background while I can still continue working. See below:
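The capacity arithmetic behind that raid30 vs raid10 comparison can be sketched in a few lines of Python. This is just an illustration, assuming 1 TB drives, which is what the quoted 10 TB usable figure implies for 12 data disks:

```python
# Usable capacity of a few RAID levels (assumes 1 TB drives,
# inferred from the 10 TB figure in the post).

def raid3_usable(disks, size_tb):
    # RAID 3 dedicates one disk per group to parity
    return (disks - 1) * size_tb

def raid30_usable(groups, disks_per_group, size_tb):
    # RAID 30 is a RAID 0 stripe across several RAID 3 groups
    return groups * raid3_usable(disks_per_group, size_tb)

def raid10_usable(disks, size_tb):
    # RAID 10 mirrors every disk, so half the raw capacity is usable
    return disks // 2 * size_tb

print(raid30_usable(2, 6, 1))  # 2 x (6 - 1) = 10 TB, as in the post
print(raid10_usable(12, 1))    # 12 / 2 = 6 TB
```

The raid30 layout loses only two disks to parity across twelve, while raid10 would lose six to mirroring, which is exactly the 10 TB vs 6 TB trade-off described above.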
3. The raid0 is indeed run off the Marvell chip on the mobo with the SAS connections.
4. If I were to use two Supermicro hot-swappable drive cages instead of the Lian Li cages I currently use, I would have space for 16 disks in the front. Apart from how it would look, it has the advantage of (Bill will love this) individual LED indicators for each drive, making it easy to find a faulty disk. It would also make it easy to hot-swap the single Final Project disk. The downside is the reduced cooling. For me that is not very significant, since I also have the NAS and storage on the local server and backups on another server, connected through a VPN.
Hope these considerations help you.
Thanks for the 'answered' points, Glenn.
Harm's new name is Mr. Storage. That rig needs to be featured on CNET!
Sounds like your raid controller cost more than my i7 cpu.
I'm using Western Digital RAID Edition terabyte drives, which have a million hours mean time between failures. That helps me feel more secure running a raid 0 array. Of course, I back up often to other drives just in case, and I don't use the raid for long-term storage, just editing.
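As a rough sanity check on that million-hour figure: in RAID 0 any single drive failure loses the array, so the array's effective MTBF shrinks with the drive count. A minimal sketch under the usual constant-failure-rate (exponential) model; the 2-drive array size is my assumption, not stated in the post:

```python
# Rough RAID 0 reliability under a constant-failure-rate model.
# Drive MTBF of 1,000,000 hours is from the post; the 2-drive
# array is an assumed example.
import math

MTBF_HOURS = 1_000_000
DRIVES = 2
HOURS_PER_YEAR = 8760

# Any one failure kills a RAID 0 array, so its effective MTBF
# is the drive MTBF divided by the number of drives.
array_mtbf = MTBF_HOURS / DRIVES

# Probability that at least one drive fails within one year:
p_year = 1 - math.exp(-DRIVES * HOURS_PER_YEAR / MTBF_HOURS)

print(array_mtbf)        # 500000.0 hours
print(round(p_year, 4))  # about 0.0174, i.e. roughly 1.7% per year
```

Even with very good drives the per-year risk is not negligible, which is why the "back up often and don't use the raid for long-term storage" policy above is sound.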
I am curious about what you do with all that incredible video PC power and storage.
Especially since I have been using a 4 year old piece of junk to process over 8 hours of new footage per week.
And then you have the time to help out all of your needy friends.
Sounds like your raid controller cost more than my i7 cpu.
That is probably correct. Newegg has a price of $950 for the card with 2 GB cache, excluding the BBM, which sells for $105. You can also see that the difference between 2 GB and 4 GB of cache is around $400. That, IMO, is extreme.