Have a look here: Adobe Forums: To RAID or not to RAID, that is the...
It explains cluster size and block size. A RAID5 on an ICH10R chip is relatively demanding on the CPU. You get better performance with a hardware RAID controller, but at a price.
Harm, may I ask a question relevant to the RAID topic? Like the author of this thread, I have an ASUS P6T mobo and I wanted to use the south bridge chip to RAID-0 two of my HDDs. This required that I change the HDD setting in the BIOS from 'AHCI' to 'RAID'. As a result I managed to link the two drives in RAID-0, but the remaining two single HDDs (which I did not want to RAID) were not working after the 'RAID' setting change in the BIOS. This tells me that if one wants to use the south bridge ICH10R chip of the mobo to RAID one's drives, one has to RAID all of them, as it is not possible to have just one pair in RAID-0 and the rest operating as regular 'IDE' or 'AHCI' drives. Is this correct?
If this is correct, then if one installs a dedicated RAID controller, can one have RAID drives (operated by the controller) and non-RAID drives (operated by the mobo) on the same system?
The temporary solution I found to the above predicament is to use the ASUS 'Drive Xpert' chip, which allows one to have two drives in RAID and the rest not. I then dedicated the RAID drives to scratch disks, which improved my overall CS5 performance.
If you RAID5 off the onboard controller, be prepared to rebuild your array often. The drives currently shipping often have bad blocks, and the onboard controller gets confused when it tries to write data to a block that turns up bad. After the drive keeps retrying that location too long, the RAID controller decides the drive has gone bad, marks it bad, and drops it out of the RAID. It can happen with RAID controller cards, but it is rare on them. Onboard controllers, though, have this problem constantly. I would not do a RAID5 off the onboard for any reason. If you must, then write 0's on all your drives before you build the array and initialize it.
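A minimal sketch of what "write 0's on all your drives" accomplishes, demonstrated on a scratch file standing in for a raw drive (the file name and sizes here are made up for the demo; on a real drive you would use the manufacturer's zero-fill utility or an equivalent low-level tool, not this script). Writing zeros touches every sector, so a drive with pending bad blocks remaps them before the array is built instead of mid-rebuild:

```python
import os

CHUNK = 1024 * 1024          # write in 1 MiB chunks
SIZE = 8 * CHUNK             # pretend "drive" size for the demo

def zero_fill(path, size, chunk=CHUNK):
    """Overwrite `path` with zeros, chunk by chunk, forcing every
    'sector' of the stand-in file to be written."""
    zeros = bytes(chunk)
    with open(path, "wb") as f:
        written = 0
        while written < size:
            n = min(chunk, size - written)
            f.write(zeros[:n])
            written += n
    return written

def verify_zeros(path, chunk=CHUNK):
    """Read back and confirm every byte really is zero."""
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                return True
            if block.count(0) != len(block):
                return False

zero_fill("fake_drive.img", SIZE)
assert verify_zeros("fake_drive.img")
os.remove("fake_drive.img")
```

The read-back pass mirrors what an array initialization does: it is the full-surface write plus verify that surfaces weak sectors early.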
I haven't tried this, so I don't know the answer. I also use a P6T mobo, but my on-board RAID controller is the Marvell chip, not the Intel ICH10R, so I have never experienced these issues.
My guess is that it should be possible to have two disks in a RAID and the others as pass-through disks, but it requires a BIOS setup to achieve that. You may have to take a closer look at the mobo manual.
Depending on the configuration, one can have, as I have, up to 6 individual or raided disks on the mobo SATA connectors, plus 2 disks in a SATA RAID0/1 configuration on the Marvell chip, all on the mobo, and then have multiple arrays on a dedicated RAID controller.
I have never had an issue having some disks raided and others not on the ICH9 or ICH10. I use the Intel management software to set up the RAID in Windows. I am not sure why you are having a problem with both. I am not sure which P6T board you have, but all of the P6T boards I have dealt with handle both configurations without issue.
I have this mobo Asus P6T Socket 1366 Intel X58 + ICH10R Chipset CrossfireX
The manual suggests that in order to set up a RAID0 config using the ICH10R, I have to go into the BIOS and, under 'Storage Configuration', set 'Configure SATA as' to 'RAID', which affects all the installed disks! You cannot configure just two drives as 'RAID' and keep the rest configured as regular 'IDE' or 'AHCI' (IDE and AHCI are the two remaining options for HDD configuration). So, if you make this RAID adjustment in the BIOS after the fact (that is, when your system has already been set up as a non-RAID one), the BIOS change affects all of the hard drives. Once I reboot, the system does not properly recognize my other non-RAID drives (which is not surprising, as they have been configured as 'RAID' drives in the BIOS when in reality they are not RAID drives). This is where the problem stems from in my case.
Most ASUS mobos have similar RAID options. Once you have selected the drives for RAID in the RAID BIOS setup, restart the PC and make sure in the boot options that your OS drive is selected for booting; sometimes when changing between IDE, AHCI, or RAID, the boot options change the drive to boot from.
1) Hmmm, good to know. I'll take a look at the load RAID5 puts on the CPU. I could always change it to RAID0, knowing that I'd be losing the parity, but I'd only have media files on it, and the worst-case scenario would be dumping the files back onto the rebuilt RAID from the Blu-ray backups. RAID controller cards with more than 4 ports are quite expensive. The most important thing for me with this system is realtime editing and colour grading.
2) I've read that post over a few times, trying to absorb it through osmosis. Not being an overly technical guy, I was wondering if there is a preferred block/chunk size to format the F3's with to optimize them for video in either RAID0 or RAID5. I'll talk to the builder and see what he suggests.
Just because the controller mode is set to RAID doesn't mean you have to RAID all the disks. Controllers have what's called a pass-through function that allows non-raided disks to work as individual disks while others are raided. Just set up the disks you want raided and leave the others as individual. Then go into Disk Management and initialize all of them. They should all show up correctly once you set up the RAID disks as you intended.
So would RAID0 put the same demands on the CPU as RAID5 if one were to use the motherboard to RAID the drives?
RAID5 requires a parity calculation and RAID0 doesn't, so RAID5 demands more from the CPU.
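A toy sketch of where that extra CPU work comes from, with made-up 4-byte blocks (no real controller works byte-by-byte in Python like this): every RAID5 stripe write must compute a parity block by XOR-ing the data blocks, and a lost block is rebuilt by XOR-ing the survivors with the parity. RAID0 just splits data across disks with no such calculation.

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three data blocks in one RAID5 stripe (toy 4-byte blocks).
d0 = b"\x01\x02\x03\x04"
d1 = b"\x10\x20\x30\x40"
d2 = b"\xaa\xbb\xcc\xdd"

# The parity block: extra work RAID0 never does on a write.
parity = xor_blocks([d0, d1, d2])

# Simulate losing d1: rebuild it from the surviving blocks plus parity.
rebuilt = xor_blocks([d0, d2, parity])
assert rebuilt == d1
```

The same XOR property that makes rebuilds possible is exactly the per-write overhead: a software RAID5 on the south bridge runs this calculation on the host CPU, while a hardware card does it on its own processor.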