Hi.
One of my drive activity lights blinks constantly and I haven't been able to figure out why. When I boot the server, the blinking starts immediately, even during POST. It comes and goes intermittently while FreeNAS is booting, then stops once the boot process has completed. The drive in question is an SSD in a simple pool of two SSDs mirrored together; the only thing this pool is used for is to serve up some iSCSI targets for my XenServer host. The light stays off until I start one of my VMs, but once it starts blinking it blinks constantly regardless of how much I/O the VMs are actually doing (the light on the other drive in the pool behaves as expected), and it keeps blinking even after I shut all of my VMs down.
I previously had a different problem with this same drive: I was getting messages saying the GPT table was corrupt or invalid. I offlined the disk, destroyed the GPT with gpart, wiped the disk, and then resilvered it back into the mirror. I'm not getting that error anymore, and the volume status page reports no errors of any kind.
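For reference, the shell equivalent of what I did is roughly the following (ada2 and "ssd-pool" are just placeholders for the problem SSD and the pool name, so don't copy this verbatim):

    zpool offline ssd-pool ada2                      # take the suspect disk out of the mirror
    gpart destroy -F ada2                            # force-destroy the corrupt/invalid GPT
    dd if=/dev/zero of=/dev/ada2 bs=1m count=1000    # quick wipe of the start of the disk
    zpool replace ssd-pool ada2                      # reattach the disk and kick off the resilver
    zpool status ssd-pool                            # watch resilver progress

(If you do this through the FreeNAS GUI instead, the Wipe and Replace options cover the same steps and handle repartitioning the disk for you.)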
As far as I can tell, the disk seems to work ok, but I can't help but wonder if there's some other problem that needs to be addressed. I've done a good bit of searching on the internet and can't find any solutions, so I'm a little flummoxed at this point. Have any of you seen this problem before?
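If output from the box would help with diagnosis, I'm happy to post it. I'd be running something along these lines from the shell (again, ada2 and "ssd-pool" are placeholders for the problem SSD and the pool):

    smartctl -a /dev/ada2        # full SMART attributes and self-test log for the SSD
    zpool status -v ssd-pool     # pool/vdev health and any logged read/write/checksum errors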
Hardware:
Motherboard: Supermicro MBD-X11SSH-LN4F-0
CPU: Intel Xeon E3-1225 v5 (Skylake)
RAM: Crucial 16GB modules (Model CT2K16G3ERSLD4160B) - 4 sticks for a total of 64GB
Power Supply: Seasonic SS500-L2U (500 watt, Gold)
Case: Norco RPC-2212
Fans: Noctua NF-A8 PWM
Cabling: Norco C-SFF8087-4S
Drives:
Pool1: 6 x WD Red 4 TB 5400 RPM in RAID 10
Pool2: 2 x Samsung EVO 1 TB in RAID 1 (the problem child resides here)
Any insights you might have would be appreciated. Thanks in advance!