Pool I/O is currently suspended

djdwosk97 (Patron, joined Jun 12, 2015, 382 messages)
There appears to be a problem with one of my pools (consisting of a single drive).

Code:
zpool status -v

Code:
pool: WD1Blue2
state: UNAVAIL
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: http://illumos.org/msg/ZFS-8000-JQ
  scan: scrub repaired 0 in 0 days 01:27:36 with 0 errors on Sun Sep  6 01:27:36 2020
config:

    NAME                    STATE     READ WRITE CKSUM
    WD1Blue2                UNAVAIL      0     0     0
      10487680230527110918  REMOVED      0     0     0  was /dev/gptid/cdad1082-e98a-11e7-a88a-002590d85eb3

errors: Permanent errors have been detected in the following files:

        <metadata>:<0x0>
        <metadata>:<0x1b>
        WD1Blue2/Data:<0x4>


I've tried a few of the options I've come across, but I haven't been able to get past this issue. Ideally, I'd like to clear the errors and actively use the pool (specifically, to pull the data off of it), but failing that I'd settle for being able to export/unmount it.
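For reference, the approach I've seen suggested elsewhere for pools stuck with suspended I/O is a reboot followed by a read-only import, so the data can be copied off without ZFS ever attempting a write. I haven't verified this on my box yet, and the destination path below is just an example:

```shell
# Suggested elsewhere (untested here): after a reboot, import the pool
# read-only so no writes are attempted; -F asks ZFS to recover by
# discarding the last few transactions if the current state is damaged.
zpool import -o readonly=on -F WD1Blue2

# Then copy the data to another pool (destination path is an example):
rsync -a /mnt/WD1Blue2/Data/ /mnt/NewPool/Data/
```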

My jail root was previously located on this pool; I moved it to a different pool, but it looks like the jail root defaulted back to this one, and I can't change it to the new pool because of the I/O issue, so none of my jails are usable.

Code:
root@freenas:~ # zfs unmount WD1Blue2
cannot open 'WD1Blue2': pool I/O is currently suspended


Code:
root@freenas:~ # zpool export WD1Blue2
--Hangs indefinitely--


Code:
root@freenas:~ # zpool get failmode WD1Blue2
NAME      PROPERTY  VALUE     SOURCE
WD1Blue2  failmode  continue  local


Code:
root@freenas:~ # zpool clear -nFX WD1Blue2
root@freenas:~ # zpool reopen WD1Blue2
cannot reopen 'WD1Blue2': pool I/O is currently suspended
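One thing I noticed afterwards: if I'm reading the (largely undocumented) recovery flags right, `-n` makes the rewind a dry run, so the `zpool clear -nFX` above wouldn't actually have changed anything. The real attempt would drop `-n`; I haven't dared run it yet since `-X` is an extreme rewind and can lose recent data:

```shell
# Drop -n to actually attempt the recovery rewind (my understanding of
# the hidden flags; -X is an extreme rewind and may discard recent data):
zpool clear -FX WD1Blue2
```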


I also noticed that the ls I ran earlier is still running, but I can't seem to kill it (kill -9 24430 does nothing). I also see two export commands that I don't know whether I should try to interrupt.
Code:
root@freenas:~ # ps
  PID TT  STAT    TIME COMMAND
24430  2- D+   0:00.00 ls /mnt/WD1Blue2/Data/Dir0/
1806 v0  Is+  0:00.90 /usr/local/bin/python /etc/netcli (python3.7)
1807 v1  Is+  0:00.00 /usr/libexec/getty Pc ttyv1
1808 v2  Is+  0:00.00 /usr/libexec/getty Pc ttyv2
1809 v3  Is+  0:00.00 /usr/libexec/getty Pc ttyv3
1810 v4  Is+  0:00.00 /usr/libexec/getty Pc ttyv4
1811 v5  Is+  0:00.00 /usr/libexec/getty Pc ttyv5
1812 v6  Is+  0:00.00 /usr/libexec/getty Pc ttyv6
1813 v7  Is+  0:00.00 /usr/libexec/getty Pc ttyv7
29813  6  Is   0:00.02 login [pam] (login)
29814  6  I+   0:00.03 -csh (csh)
29848  7  Is   0:00.02 -csh (csh)
29973  7  D+   0:00.00 zpool export WD1Blue2
30004  9  Is   0:00.03 -csh (csh)
30051  9  D+   0:00.00 zpool export -f WD1Blue2
30094 10  Ss   0:00.02 -csh (csh)
30108 10  R+   0:00.00 ps

root@freenas:~ # lsof -p 24430
lsof: WARNING: compiled for FreeBSD release 11.0-RELEASE; this is 11.3-RELEASE-p11.
COMMAND   PID USER   FD   TYPE         DEVICE SIZE/OFF   NODE NAME
ls      24430 root  cwd   VDIR 158,3657433109       18   2276 /root
ls      24430 root  rtd   VDIR 158,3657433109       28      4 /
ls      24430 root  txt   VREG 158,3657433109    33824  25988 /bin/ls
ls      24430 root  txt   VREG 158,3657433109   140184  31261 /libexec/ld-elf.so.1
ls      24430 root  txt   VREG 158,3657433109    79560 181631 /usr/share/locale/en_US.UTF-8/LC_COLLATE
ls      24430 root  txt   VREG 158,3657433109   124136  31251 /lib/libxo.so.0
ls      24430 root  txt   VREG 158,3657433109    74896  31247 /lib/libutil.so.9
ls      24430 root  txt   VREG 158,3657433109   386048  31225 /lib/libncursesw.so.8
ls      24430 root  txt   VREG 158,3657433109  1675872  31178 /lib/libc.so.7
ls      24430 root    0u  VBAD                                (revoked)
ls      24430 root    1u  VBAD                                (revoked)
ls      24430 root    2u  VBAD                                (revoked)
ls      24430 root    3r  VDIR 158,3657433109       18   2276 /root
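From what I understand, the `D+` state means those processes are stuck in uninterruptible disk wait inside the kernel, which is why kill -9 has no effect; they won't exit until the I/O completes or the machine reboots. On FreeBSD, procstat can confirm where they're blocked:

```shell
# Show the kernel stack of the stuck ls; frames like zio_wait or
# txg_wait_synced would confirm it's blocked on suspended pool I/O.
procstat -kk 24430
```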
 

eleson (Dabbler, joined Jul 9, 2020, 18 messages)
Edit: I am running TrueNAS 12 RC, so this post may be in the wrong place.


Same-ish problem here:
Code:
  pool: data
state: ONLINE
status: One or more devices are faulted in response to IO failures.

scan: scrub repaired 0B with 0 errors  on Sun Sep 20 xxxx.
NAME              STATE      READ WRITE CKSUM
data              ONLINE        0     0     0
    mirror-0      ONLINE        0     0     0
       gptid/xxx  ONLINE        0     0     4
       gptid/yyy  ONLINE        0     0     4

errors: List of errors unavailable: pool I/O is currently suspended.


zpool clear
hangs.

cd /var/log
almost always hangs.

Booting in safe mode works fine.

I was worried about physical SATA failures, so I replaced all the cables.
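For anyone in the same spot: SMART data should show whether the disks themselves (rather than the cables) are failing. The device names below are placeholders for my box; adjust them for yours:

```shell
# List the attached disks first to get the right device names:
camcontrol devlist

# Placeholder device names; UDMA_CRC errors point at cabling,
# reallocated/pending sectors point at the disk itself.
smartctl -a /dev/ada0 | egrep -i 'reallocated|pending|udma_crc|overall'
smartctl -a /dev/ada1 | egrep -i 'reallocated|pending|udma_crc|overall'
```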

The UI doesn't start. When I enter the IP address, I can hear a hard disk spin up, and that has to be one of the disks in the mirror, since the boot disk is an SSD.

After three evenings of googling I am out of ideas. Any pointers on where to go from here?
 