Pool degraded, but disk is ok, what is the problem?


dlavigne

Guest
Please post the output of "zpool status" using code tags.
 

sandreas

Dabbler
Joined
Apr 20, 2012
Messages
23
Code:
zpool status -v
  pool: zfsroot
state: DEGRADED
status: One or more devices has been taken offline by the administrator.
    Sufficient replicas exist for the pool to continue functioning in a
    degraded state.
action: Online the device using 'zpool online' or replace the device with
    'zpool replace'.
  scan: none requested
config:
 
    NAME                                            STATE    READ WRITE CKSUM
    zfsroot                                        DEGRADED    0    0    0
      raidz1-0                                      DEGRADED    0    0    0
        gptid/6e0fa469-b8a2-11e2-8604-8c89a511e867  ONLINE      0    0    0
        gptid/6e6b6135-b8a2-11e2-8604-8c89a511e867  ONLINE      0    0    0
        16691495089254086186                        OFFLINE      0    0    0  was /dev/dsk/gptid/9ed38462-2924-11e3-8dee-8c89a511e867
        gptid/6f3541bc-b8a2-11e2-8604-8c89a511e867  ONLINE      0    0    0
        gptid/6f91613f-b8a2-11e2-8604-8c89a511e867  ONLINE      0    0    0
 
errors: No known data errors
 

warri

Guru
Joined
Jun 6, 2011
Messages
1,193
Have you replaced a disk lately and forgot to detach the old drive?
 

sandreas

Dabbler
Joined
Apr 20, 2012
Messages
23
No... I tried a "replace" via the web interface and followed the instructions. It then failed with "wrong disk id" or something similar. I think it has something to do with the disk being the same one as before rather than a new disk with a different id, which is what is expected as a replacement for a failed drive. The debug command line says to "try -force".

Access to data and shares is still possible atm.

All I want is to resilver the pool and carry on with a healthy RAID-Z.

What can I do?
 

warri

Guru
Joined
Jun 6, 2011
Messages
1,193
How many disks were in your pool originally?
Can you also post the outputs of
Code:
zpool status
camcontrol devlist


EDIT: Forget about zpool status, it's already there ;)
 

sandreas

Dabbler
Joined
Apr 20, 2012
Messages
23
How many disks were in your pool originally?
5 disks, all WD30EFRX, ada1 - ada5


zpool status
Code:
  pool: zfsroot
state: DEGRADED
status: One or more devices has been taken offline by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Online the device using 'zpool online' or replace the device with
'zpool replace'.
  scan: none requested
config:
 
    NAME                                            STATE    READ WRITE CKSUM
    zfsroot                                        DEGRADED    0    0    0
      raidz1-0                                      DEGRADED    0    0    1
        gptid/6e0fa469-b8a2-11e2-8604-8c89a511e867  ONLINE      0    0    0
        gptid/6e6b6135-b8a2-11e2-8604-8c89a511e867  ONLINE      0    0    0
        16691495089254086186                        OFFLINE      0    0    0  was /dev/dsk/gptid/9ed38462-2924-11e3-8dee-8c89a511e867
        gptid/6f3541bc-b8a2-11e2-8604-8c89a511e867  ONLINE      0    0    0
        gptid/6f91613f-b8a2-11e2-8604-8c89a511e867  ONLINE      0    0    0
 
errors: No known data errors

camcontrol devlist
Code:
<M4-CT064M4SSD2 0309>              at scbus0 target 0 lun 0 (pass0,ada0)
<WDC WD30EFRX-68AX9N0 80.00A80>    at scbus1 target 0 lun 0 (pass1,ada1)
<WDC WD30EFRX-68AX9N0 80.00A80>    at scbus2 target 0 lun 0 (pass2,ada2)
<WDC WD30EFRX-68AX9N0 80.00A80>    at scbus3 target 0 lun 0 (pass3,ada3)
<WDC WD30EFRX-68AX9N0 80.00A80>    at scbus4 target 0 lun 0 (pass4,ada4)
<WDC WD30EFRX-68AX9N0 80.00A80>    at scbus5 target 0 lun 0 (pass5,ada5)
<WDC WD30 EZRX-00DC0B0 80.0>      at scbus6 target 0 lun 0 (da0,pass6)
 

warri

Guru
Joined
Jun 6, 2011
Messages
1,193
Ok, I also need the output of 'glabel status'. If the old disk is still there, I think you can just online it with 'zpool online zfsroot gptid/9ed38462-2924-11e3-8dee-8c89a511e867' and then try to replace it again.
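Roughly this sequence, in other words (just a sketch - double-check the gptid against your own glabel/zpool output first):
Code:
# bring the offlined member back into the pool
zpool online zfsroot gptid/9ed38462-2924-11e3-8dee-8c89a511e867
# verify that it shows up as ONLINE again
zpool status -v zfsroot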
 

sandreas

Dabbler
Joined
Apr 20, 2012
Messages
23
Here it comes :) thank you for the quick help...
Code:
glabel status
                                      Name  Status  Components
                            ufs/FreeNASs3    N/A  ada0s3
                            ufs/FreeNASs4    N/A  ada0s4
                    ufsid/5144fed8f696ca23    N/A  ada0s1a
                            ufs/FreeNASs1a    N/A  ada0s1a
                            ufs/FreeNASs2a    N/A  ada0s2a
gptid/6e0fa469-b8a2-11e2-8604-8c89a511e867    N/A  ada1p2
gptid/6e6b6135-b8a2-11e2-8604-8c89a511e867    N/A  ada2p2
gptid/9ed38462-2924-11e3-8dee-8c89a511e867    N/A  ada3p2
gptid/6f3541bc-b8a2-11e2-8604-8c89a511e867    N/A  ada4p2
gptid/6f91613f-b8a2-11e2-8604-8c89a511e867    N/A  ada5p2
gptid/a7f0110c-2cd2-11e3-9a19-000c290d31b5    N/A  da0p1
                                ufs/Loewe    N/A  da0p2
 

warri

Guru
Joined
Jun 6, 2011
Messages
1,193
Looks good.
Try
Code:
zpool online zfsroot gptid/9ed38462-2924-11e3-8dee-8c89a511e867


The pool should be back. Then try to replace it again - if it fails again from the web GUI, we need to figure out what's going wrong. You might have to scrub it before replacing, but it should tell you.
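If a scrub does turn out to be necessary, it would look roughly like this (a sketch; the scrub runs in the background and progress shows up in the status output):
Code:
# start a scrub of the pool (runs in the background)
zpool scrub zfsroot
# check scrub progress and results
zpool status -v zfsroot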
 

sandreas

Dabbler
Joined
Apr 20, 2012
Messages
23
Code:
[root@nas] ~# zpool online zfsroot gptid/9ed38462-2924-11e3-8dee-8c89a511e867
cannot online gptid/9ed38462-2924-11e3-8dee-8c89a511e867: no such device in pool
[root@nas] ~# zpool online zfsroot 16691495089254086186
warning: device '16691495089254086186' onlined, but remains in faulted state
use 'zpool replace' to replace devices that are no longer present
[root@nas] ~#


Code:
zpool status
  pool: zfsroot
state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
    the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
  see: http://www.sun.com/msg/ZFS-8000-2Q
  scan: none requested
config:
 
    NAME                                            STATE    READ WRITE CKSUM
    zfsroot                                        DEGRADED    0    0    0
      raidz1-0                                      DEGRADED    0    0    1
        gptid/6e0fa469-b8a2-11e2-8604-8c89a511e867  ONLINE      0    0    0
        gptid/6e6b6135-b8a2-11e2-8604-8c89a511e867  ONLINE      0    0    0
        16691495089254086186                        UNAVAIL      0    0    0  was /dev/dsk/gptid/9ed38462-2924-11e3-8dee-8c89a511e867
        gptid/6f3541bc-b8a2-11e2-8604-8c89a511e867  ONLINE      0    0    0
        gptid/6f91613f-b8a2-11e2-8604-8c89a511e867  ONLINE      0    0    0
 
errors: No known data errors
 

warri

Guru
Joined
Jun 6, 2011
Messages
1,193
Can you post the output of zpool status again? If we can't online the old drive, we might just try to replace the missing disk from the CLI.

Edit: OK, I'm not sure why it says '/dev/dsk/gptid/9ed38462-2924-11e3-8dee-8c89a511e867' instead of just gptid/..., but try the online command with this id instead (add the /dev/dsk at the beginning).
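So, spelled out, something like this (just restating the same suggestion; I can't test it here):
Code:
zpool online zfsroot /dev/dsk/gptid/9ed38462-2924-11e3-8dee-8c89a511e867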
 

warri

Guru
Joined
Jun 6, 2011
Messages
1,193
I just re-read your post about replacing the disk. Why are you replacing the disk with itself?
Maybe the replacement process has already deleted the partitions or other data on the disk. In that case I'd wait for an answer from somebody with more experience, since I don't have a machine or VM to test my advice on first.
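One quick, read-only way to see whether the partition table on that disk is still intact (assuming the offline member is ada3, as your glabel output suggests):
Code:
# show the GPT layout of the suspect disk; a normal FreeNAS data disk
# should still show a freebsd-swap and a freebsd-zfs partition
gpart show ada3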
 

warri

Guru
Joined
Jun 6, 2011
Messages
1,193
You can also check SMART with FreeNAS.
Code:
smartctl -a /dev/adaX
smartctl -t long /dev/adaX

The first line reads the SMART information from a disk, the second one starts a long SMART self-test. Or even better, schedule SMART tests via the web GUI.

Back to your problem:
Either the disk is OK and has not been modified yet, in which case you should be able to online it again and restore the pool to a healthy state, or something has been written to it (i.e. its ZFS information has been destroyed), in which case I'd wait for someone else to help you, since I can't test any of the commands myself and I have never done such a recovery.
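If you want a hint as to which of the two it is, you could try dumping the ZFS labels from the partition (read-only; assuming the gptid device node from your glabel output still exists):
Code:
# print the ZFS vdev labels stored on the partition; if they are missing
# or unreadable, the on-disk ZFS information has probably been wiped
zdb -l /dev/gptid/9ed38462-2924-11e3-8dee-8c89a511e867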
 

sandreas

Dabbler
Joined
Apr 20, 2012
Messages
23
What about this? With -f it could work:

Code:
zpool replace zfsroot 16691495089254086186 gptid/9ed38462-2924-11e3-8dee-8c89a511e867
invalid vdev specification
use '-f' to override the following errors:
/dev/gptid/9ed38462-2924-11e3-8dee-8c89a511e867 is part of active pool 'zfsroot'


smartctl output:
Code:
smartctl -a /dev/ada3

smartctl 5.43 2012-06-30 r3573 [FreeBSD 8.3-RELEASE-p7 amd64] (local build)
Copyright (C) 2002-12 by Bruce Allen, http://smartmontools.sourceforge.net
 
=== START OF INFORMATION SECTION ===
Device Model:    WDC WD30EFRX-68AX9N0
Serial Number:    WD-WCC1T0620237
LU WWN Device Id: 5 0014ee 2083b2c97
Firmware Version: 80.00A80
User Capacity:    3,000,592,982,016 bytes [3.00 TB]
Sector Sizes:    512 bytes logical, 4096 bytes physical
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:  8
ATA Standard is:  ACS-2 (revision not indicated)
Local Time is:    Sat Oct  5 23:44:35 2013 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
 
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
 
General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (  0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (40020) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (  2) minutes.
Extended self-test routine
recommended polling time:        ( 401) minutes.
Conveyance self-test routine
recommended polling time:        (  5) minutes.
SCT capabilities:              (0x70bd) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.
 
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG    VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate    0x002f  200  200  051    Pre-fail  Always      -      0
  3 Spin_Up_Time            0x0027  182  174  021    Pre-fail  Always      -      5875
  4 Start_Stop_Count        0x0032  100  100  000    Old_age  Always      -      72
  5 Reallocated_Sector_Ct  0x0033  200  200  140    Pre-fail  Always      -      0
  7 Seek_Error_Rate        0x002e  200  200  000    Old_age  Always      -      0
  9 Power_On_Hours          0x0032  100  100  000    Old_age  Always      -      172
10 Spin_Retry_Count        0x0032  100  253  000    Old_age  Always      -      0
11 Calibration_Retry_Count 0x0032  100  253  000    Old_age  Always      -      0
12 Power_Cycle_Count      0x0032  100  100  000    Old_age  Always      -      72
192 Power-Off_Retract_Count 0x0032  200  200  000    Old_age  Always      -      15
193 Load_Cycle_Count        0x0032  200  200  000    Old_age  Always      -      56
194 Temperature_Celsius    0x0022  109  106  000    Old_age  Always      -      41
196 Reallocated_Event_Count 0x0032  200  200  000    Old_age  Always      -      0
197 Current_Pending_Sector  0x0032  200  200  000    Old_age  Always      -      0
198 Offline_Uncorrectable  0x0030  100  253  000    Old_age  Offline      -      0
199 UDMA_CRC_Error_Count    0x0032  200  200  000    Old_age  Always      -      0
200 Multi_Zone_Error_Rate  0x0008  100  253  000    Old_age  Offline      -      0
 
SMART Error Log Version: 1
ATA Error Count: 5
        CR = Command Register [HEX]
        FR = Features Register [HEX]
        SC = Sector Count Register [HEX]
        SN = Sector Number Register [HEX]
        CL = Cylinder Low Register [HEX]
        CH = Cylinder High Register [HEX]
        DH = Device/Head Register [HEX]
        DC = Device Command Register [HEX]
        ER = Error register [HEX]
        ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.
 
Error 5 occurred at disk power-on lifetime: 160 hours (6 days + 16 hours)
  When the command that caused the error occurred, the device was active or idle.
 
  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  04 61 01 80 00 00 40  Device Fault; Error: ABRT 1 sectors at LBA = 0x00000080 = 128
 
  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC  Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  c8 00 01 80 00 00 40 08      00:18:34.573  READ DMA
  c8 00 01 80 00 00 40 08      00:18:34.573  READ DMA
  c8 00 01 80 00 00 40 08      00:18:34.573  READ DMA
  c8 00 01 80 00 00 40 08      00:18:34.573  READ DMA
  c8 00 01 80 00 00 40 08      00:18:34.572  READ DMA
 
Error 4 occurred at disk power-on lifetime: 160 hours (6 days + 16 hours)
  When the command that caused the error occurred, the device was active or idle.
 
  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  04 61 01 80 00 00 40  Device Fault; Error: ABRT 1 sectors at LBA = 0x00000080 = 128
 
  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC  Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  c8 00 01 80 00 00 40 08      00:18:34.573  READ DMA
  c8 00 01 80 00 00 40 08      00:18:34.573  READ DMA


To compare, here the output of ada1:

Code:
smartctl -a /dev/ada1
smartctl 5.43 2012-06-30 r3573 [FreeBSD 8.3-RELEASE-p7 amd64] (local build)
Copyright (C) 2002-12 by Bruce Allen, http://smartmontools.sourceforge.net
 
=== START OF INFORMATION SECTION ===
Device Model:    WDC WD30EFRX-68AX9N0
Serial Number:    WD-WCC1T0653755
LU WWN Device Id: 5 0014ee 25d90470d
Firmware Version: 80.00A80
User Capacity:    3,000,592,982,016 bytes [3.00 TB]
Sector Sizes:    512 bytes logical, 4096 bytes physical
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:  8
ATA Standard is:  ACS-2 (revision not indicated)
Local Time is:    Sat Oct  5 23:52:29 2013 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
 
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
 
General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (  0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (40080) seconds.
Offline data collection
 

sandreas

Dabbler
Joined
Apr 20, 2012
Messages
23
Hmm, there seems to be an error on ada3 though...
Code:
Error 5 occurred at disk power-on lifetime: 160 hours (6 days + 16 hours)
  When the command that caused the error occurred, the device was active or idle.
 
04 61 01 80 00 00 40  Device Fault; Error: ABRT 1 sectors at LBA = 0x00000080 = 128
 

warri

Guru
Joined
Jun 6, 2011
Messages
1,193
The -f could work, but I'd generally stay away from -f parameters if you don't know exactly what they do. Better to wait for somebody else to comment on this.

The SMART attributes look fine, but you got some weird SMART errors. I don't know where they come from, but I would thoroughly test the disk before keeping it, especially since the disk seems to be rather new.

EDIT: I'd also run a short and a long SMART test at some point (smartctl -t short /dev/ada3 and smartctl -t long /dev/ada3). Keep in mind that the long test takes about 400 minutes, roughly 6.7 hours, before reporting anything. Maybe get the pool sorted out first if there is important data on it.
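Once a test has finished, you can read its result from the drive's self-test log, for example:
Code:
# show the SMART self-test log (results of completed short/long tests)
smartctl -l selftest /dev/ada3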
 