Looking at one of Harm's replies to the last time I had this same problem he said, in part:
Now turn off your PC, remove the failed disk and insert the new one, reconnect the cables and restart. It will still show a degraded array. Does not matter, since it is now easy to solve this. In the BIOS or the Raid Storage Manager (from Windows), define the new disk as a Hot Spare. Let it finish this process. Then use Expand Raid Set to include this Hot Spare into the Raid. It will rebuild automatically.
That's all, but it may take some time. Good luck.
Okay. I am in the Areca Raid storage manager.
Under "Select the Drives for Hot Spare" I see, under "Slot 3", that I can make it a...
GLOBAL HOT SPARE
Dedicated to Raidset
or Dedicated to Enclosure.
I selected "Global Hot Spare".
Then under Expand Raid set I see:
Raid Set #000 (my only raid set) shows Member Disks 4/4 and Raid State "Rebuilding". Capacity 8000.0 GB
There is a pre-selected radio button (the only radio button on the page). But when I hit "Submit" it says in red, "No disks with enough capacity available for Raid Expansion".
I assume, then, that I will just have to wait for the Raid State to finish rebuilding -- and *then* I can hit the Submit button...
So you have a total of four disks. You have a RAID set that uses all four disks with no hot spares, right? You can't have a RAID30 on four disks, since each RAID3 requires three disks which are then put into a RAID0 stripe. I think you have a RAID3.
How many open bays do you have attached to the Areca? If you have a 5-bay or 8-bay box hooked to it, and you're only using the four disks now, you can put another drive into an empty slot. It sounds like you added the new disk to the 4-disk array already, and it automatically started the rebuild. That's why you can't add / expand the RAID set.
If you have empty slots beyond the four already in the #000 set, you can insert disks into them and make them hot-spares while it rebuilds, and the next time a disk fails, it will rebuild automatically without you having to touch it. Then when you replace the bad one, it should become the hot-spare. If not, you can then assign it as a hot spare.
I'd be interested to know how long that rebuild takes. Under System Controls > View Events/Mute Beeper, post how long it says it took in the far-right column when the state returns to "Normal", or just post what percent it is at now along with how many minutes have elapsed in the rebuild column, and I can figure out the rough rebuild time mathematically.
Hi Wonderspark --
Thanks for taking the time to comment...
You must be correct -- I must have a RAID 3. (I'm not a RAID guy -- I owe my RAID-building abilities to the incredible support here on the Adobe Hardware forum.)
Yes, I have 4 extra empty slots in my system, so as soon as the RAID is fixed I'll make sure I always have a hot spare -- then I won't lose time in the future waiting for the RAID to rebuild -- I assume.
I didn't pay attention to my Areca "System Configuration" panel when I put in my new hard drive. The "Background Task Priority" was set to Low (20 percent). After I started rebuilding the Volume, I went into System Configuration, reset it to High (80%), confirmed the operation, and hit the Submit button.
So I am hoping that the rebuilding will be faster -- or, once I start rebuilding, is it "too late" to switch the "Background Task Priority"?
It's been "rebuilding" for at least an hour and the "Volume State" is at only 2.8%....
Hopefully when I wake up in the a.m. it will be nearly finished -- especially if the 80% Background Task Priority kicks in -- or am I completely wrong in assuming this???
Hee hee! I will take your advice and always have a "hot spare" in my box ready to take over.
I will keep you informed regarding the progress of the rebuilding of the Volume.
I agree with Wonderspark about adding the new drive as a hot spare; this is the safest way since you have the extra 4 ports available.
Regarding the calculation of rebuild time, the information you need should be clearly shown in the log file, including timestamps (i.e., "start rebuild at...", "completed at...").
That's about what I would guess. 2.8% at 60 minutes means it will take at least 35 hours total to rebuild. I had a 7-member +1 hot spare RAID3 with the same card (1880ix-12) that took 38 or 39 hours to rebuild. I was amazed it took so long. I switched to RAID6 using all eight disks, and the format/rebuild times dropped by 8x; my sustained read/write speeds increased as well. I'll take 5 hours over 40 and faster performance any day!
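For anyone who wants to redo that estimate with their own numbers, here's a quick sketch of the math -- just a linear extrapolation from percent complete and elapsed time, not anything the Areca software provides (the helper name is made up):

```python
# Hypothetical helper: estimate total rebuild time by linear extrapolation.
# total time = elapsed time / fraction complete
def estimate_total_hours(percent_done: float, elapsed_minutes: float) -> float:
    """Extrapolate total rebuild hours from progress so far."""
    return elapsed_minutes / (percent_done / 100.0) / 60.0

# The numbers from this thread: 2.8% complete after 60 minutes.
total = estimate_total_hours(2.8, 60)
print(f"Estimated total rebuild: {total:.1f} hours")  # about 35.7 hours
```

This assumes the rebuild rate stays constant, which it won't exactly (background priority and drive activity both change it), so treat the result as a ballpark figure.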
Looking at the log:
Start rebuilding: 18:38:59
I just created a "Test event" --
And its time is 20:16:08
And on the RaidSet Hierarchy page, the current Volume State is 4.1 percent.
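Those two log entries are enough to run the same extrapolation. A small sketch (assuming both timestamps fall on the same day; the script itself is just illustrative, not an Areca tool):

```python
from datetime import datetime

# Timestamps from the log above, assumed to be on the same day.
start = datetime.strptime("18:38:59", "%H:%M:%S")
now = datetime.strptime("20:16:08", "%H:%M:%S")
elapsed_min = (now - start).total_seconds() / 60.0  # 1 h 37 m 9 s, ~97.15 min

percent_done = 4.1
total_hours = elapsed_min / (percent_done / 100.0) / 60.0  # ~39.5 hours
print(f"Elapsed: {elapsed_min:.2f} min, estimated total: {total_hours:.1f} hours")
```

That lands right in the 38-39 hour range Wonderspark saw on the same card, which suggests the linear estimate isn't far off.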
I never fooled with test events, so I looked it up and found this:
"Q. Under what conditions should I create a test event? A.You can use it to verify your SMTP or SNMP configuration. When you generate a Test Event, it will be recorded in event log, SMTP will send a mail, and SNMP will send a trap."
I think it will have no effect, and at worst only slow down your rebuild, which won't be done for two days anyway.
Well, the good news is that even with the 80 percent background priority, I seem to be able to work on my Premiere project...
Yep, that was the point of the RAID, right? (: You should run a speed test on the RAID while it's rebuilding. I'd like to see it! My new RAID6 looks like this during rebuild:
The first one is cache off, second is cache on. I never tested my RAID3 during a rebuild, and wish I had. It would be cool to compare them.
Can anyone recommend good speed-test software for Windows 7 (64-bit)? It appears your speed-test software is for the Apple OS.
I think all the PC guys use something called HD Tach, but it has to be run in Windows XP mode if you have Windows 7. Maybe that's been fixed?
Well, the good news is that if you had all eight bays full of disks in RAID3, you'd probably have double the performance shown on that chart during a rebuild, which would be faster than my RAID6. If you had 7+1 hot spare, it would be about even, I think. Nice to know!
Is it done rebuilding yet?
Edit: "We've lost Gorgeous George."