Hello forum gurus,
Sorry for the long post. I am currently experiencing challenges with my brand new ARC-1883IX-24-8GB.
This is a new system build.
My system specs are as follows:
2x Xeon E5-2630L v3
Corsair AX1200i PSU
SFF-8643 to SATA fan-out cables
Running the latest firmware (1.52)
10x 4TB WD40EFRX
6x 1TB WD1003FZEX
6x 1TB WD1001FALS
After first building the system I experienced the following symptoms:
The RAID card was able to see all of the 4TB drives, but only 6 of the 1TB drives; specifically, the FZEX drives.
Thinking this could be an issue with the card not supporting older drive firmware, I tested a cold-standby FZEX drive.
If this were merely a drive-firmware issue, I should then have seen 7x 1TB FZEX drives.
Instead, plugging in the new drive knocked the 6th FZEX drive offline, keeping my total at 6x 1TB drives.
At some point during the troubleshooting, all 10 of my 4TB EFRX drives disappeared from the card, and I have not been able to make them reappear.
I plugged in a cold standby 4TB EFRX drive into one of the same ports, and it did show up.
I also plugged in the 1TB FZEX cold standby drive again, and this brought my total 1TB drives up to 7.
I found two threads that initially sounded like the issue I was having with the older drives, but the answers were somewhat cryptic.
I did find an "Active Cable Management" section in the "advanced configuration" tab. I forced the link to 6G with no effect, and also tried forcing the drive speed to 3G. At one point I also tried setting Active Cable Management to Disabled (I didn't think this would matter, as the guide says it only comes into play when using an SFF-8644 cable). During one of the reboots after these changes, the 4TB drives disappeared.
So my current status is that I still cannot see any of the 4TB drives I could see in the beginning, and I still only see 6 of the 1TB FZEX drives. Both cold-standby drives are now offline as well.
For troubleshooting purposes I also plugged all of the drives into a separate, older RAID card, and then into the motherboard's onboard SATA ports, using the same power cables each time. All drives spun up and were accessible. I also booted the system once into FreeNAS: all drives displayed properly, and I was able to wipe them and read/write to them. I have tried every PCIe slot on the board and multiple power connectors to the RAID card itself. So the problem seems to be confined to the RAID card; the power and all of the drives have been proved out.
Does anyone know how to reset the configuration on these cards?
Anyone have any thoughts on this? It’s a real doozy. I thought buying the more expensive card meant this would be an easy build. Silly me.
I have also put in an email to Areca support.
Thanks for any help.