Pool import failed with "permission denied"

Xevus

Dabbler
Joined
Nov 22, 2016
Messages
12
Hi.

One of the drives in my pool failed. After a shutdown I disconnected this drive, but then TrueNAS wasn't able to boot (it hung when the console log said the pool was suspended). What worked in the end was to disconnect all the drives and then export/disconnect the pool. However, I now have a new problem: I cannot import the pool, the operation fails with "permission denied"

Traceback (most recent call last):
File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 111, in main_worker
res = MIDDLEWARE._run(*call_args)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 45, in _run
return self._call(name, serviceobj, methodobj, args, job=job)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
return methodobj(*params)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
return methodobj(*params)
File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 979, in nf
return f(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 352, in import_pool
self.logger.error(
File "libzfs.pyx", line 392, in libzfs.ZFS.__exit__
File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 346, in import_pool
zfs.import_pool(found, new_name or found.name, options, any_host=any_host)
File "libzfs.pyx", line 1151, in libzfs.ZFS.import_pool
File "libzfs.pyx", line 1179, in libzfs.ZFS.__import_pool
libzfs.ZFSException: permission denied

This is an issue very similar to what is described here - https://www.truenas.com/community/threads/cannot-import-pool-permission-denied.92881/

It seems that the root cause is that the drive identifiers have changed, and importing the pool tries to use an old disk ID that is now assigned to another drive. The workaround in that thread was to modify the pool from another OS. Surely this is not intended behavior, and there should be a way to handle this via TrueNAS?
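For reference, the usual way to sidestep stale /dev/sdX-style device paths is to point `zpool import` at a directory of stable identifiers. A sketch from a shell on the TrueNAS box (the pool name `tank` is a placeholder for your actual pool name; these commands need root):

```shell
# Scan for importable pools, resolving member disks via stable
# by-id paths instead of whatever /dev/sdX order the kernel picked
sudo zpool import -d /dev/disk/by-id

# Import the pool by name using the same stable paths;
# -f forces the import if the pool still looks in use by another system
sudo zpool import -d /dev/disk/by-id -f tank
```

This only helps if the pool is otherwise importable; it does not repair anything, it just stops ZFS from chasing a device node that now belongs to a different disk.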
 

Xevus

Dabbler
Joined
Nov 22, 2016
Messages
12
Ok, it is actually even worse than I thought :( On Ubuntu, zpool import hangs waiting for I/O. There is a call trace in syslog; not sure how useful it is.

I need a way to remove a disk from a pool without importing it.

Dec 21 23:32:26 ubuntu kernel: [ 1088.584740] task:zpool state:D stack: 0 pid: 7181 ppid: 7180 flags:0x00004002
Dec 21 23:32:26 ubuntu kernel: [ 1088.584751] Call Trace:
Dec 21 23:32:26 ubuntu kernel: [ 1088.584756] <TASK>
Dec 21 23:32:26 ubuntu kernel: [ 1088.584762] __schedule+0x23d/0x590
Dec 21 23:32:26 ubuntu kernel: [ 1088.584775] ? autoremove_wake_function+0x12/0x40
Dec 21 23:32:26 ubuntu kernel: [ 1088.584786] schedule+0x4e/0xb0
Dec 21 23:32:26 ubuntu kernel: [ 1088.584793] io_schedule+0x46/0x70
Dec 21 23:32:26 ubuntu kernel: [ 1088.584802] cv_wait_common+0xab/0x130 [spl]
Dec 21 23:32:26 ubuntu kernel: [ 1088.584828] ? wait_woken+0x70/0x70
Dec 21 23:32:26 ubuntu kernel: [ 1088.584836] __cv_wait_io+0x18/0x20 [spl]
Dec 21 23:32:26 ubuntu kernel: [ 1088.584859] txg_wait_synced_impl+0x9b/0x120 [zfs]
Dec 21 23:32:26 ubuntu kernel: [ 1088.585262] txg_wait_synced+0x10/0x40 [zfs]
Dec 21 23:32:26 ubuntu kernel: [ 1088.585630] spa_load_impl.constprop.0+0x260/0x390 [zfs]
Dec 21 23:32:26 ubuntu kernel: [ 1088.585971] spa_load+0x6d/0x130 [zfs]
Dec 21 23:32:26 ubuntu kernel: [ 1088.586294] spa_load_best+0x57/0x270 [zfs]
Dec 21 23:32:26 ubuntu kernel: [ 1088.586612] ? zpool_get_load_policy+0x18a/0x1a0 [zcommon]
Dec 21 23:32:26 ubuntu kernel: [ 1088.586631] spa_import+0x1e4/0x7d0 [zfs]
Dec 21 23:32:26 ubuntu kernel: [ 1088.586949] ? nvpair_value_common+0x9a/0x160 [znvpair]
Dec 21 23:32:26 ubuntu kernel: [ 1088.586977] zfs_ioc_pool_import+0x146/0x160 [zfs]
Dec 21 23:32:26 ubuntu kernel: [ 1088.587346] zfsdev_ioctl_common+0x682/0x740 [zfs]
Dec 21 23:32:26 ubuntu kernel: [ 1088.587718] ? __check_object_size.part.0+0x4a/0x150
Dec 21 23:32:26 ubuntu kernel: [ 1088.587727] ? _copy_from_user+0x2e/0x60
Dec 21 23:32:26 ubuntu kernel: [ 1088.587736] zfsdev_ioctl+0x57/0xe0 [zfs]
Dec 21 23:32:26 ubuntu kernel: [ 1088.588104] __x64_sys_ioctl+0x91/0xc0
Dec 21 23:32:26 ubuntu kernel: [ 1088.588116] do_syscall_64+0x5c/0xc0
Dec 21 23:32:26 ubuntu kernel: [ 1088.588125] ? __rseq_handle_notify_resume+0x2d/0xb0
Dec 21 23:32:26 ubuntu kernel: [ 1088.588138] ? exit_to_user_mode_loop+0x10d/0x160
Dec 21 23:32:26 ubuntu kernel: [ 1088.588149] ? exit_to_user_mode_prepare+0x37/0xb0
Dec 21 23:32:26 ubuntu kernel: [ 1088.588157] ? syscall_exit_to_user_mode+0x27/0x50
Dec 21 23:32:26 ubuntu kernel: [ 1088.588166] ? do_syscall_64+0x69/0xc0
Dec 21 23:32:26 ubuntu kernel: [ 1088.588172] ? exit_to_user_mode_prepare+0x37/0xb0
Dec 21 23:32:26 ubuntu kernel: [ 1088.588180] ? syscall_exit_to_user_mode+0x27/0x50
Dec 21 23:32:26 ubuntu kernel: [ 1088.588187] ? do_syscall_64+0x69/0xc0
Dec 21 23:32:26 ubuntu kernel: [ 1088.588193] ? do_syscall_64+0x69/0xc0
Dec 21 23:32:26 ubuntu kernel: [ 1088.588198] ? do_syscall_64+0x69/0xc0
Dec 21 23:32:26 ubuntu kernel: [ 1088.588204] entry_SYSCALL_64_after_hwframe+0x44/0xae
Dec 21 23:32:26 ubuntu kernel: [ 1088.588216] RIP: 0033:0x7fce35029aff
Dec 21 23:32:26 ubuntu kernel: [ 1088.588223] RSP: 002b:00007fff7dbf3680 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Dec 21 23:32:26 ubuntu kernel: [ 1088.588231] RAX: ffffffffffffffda RBX: 000055b97f369290 RCX: 00007fce35029aff
Dec 21 23:32:26 ubuntu kernel: [ 1088.588235] RDX: 00007fff7dbf4050 RSI: 0000000000005a02 RDI: 0000000000000003
Dec 21 23:32:26 ubuntu kernel: [ 1088.588239] RBP: 00007fff7dbf7640 R08: 0000000000000000 R09: 000055b97f2a2350
Dec 21 23:32:26 ubuntu kernel: [ 1088.588243] R10: 00007fce35129420 R11: 0000000000000246 R12: 000055b97f288570
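For what it's worth, the trace above shows the import blocked in txg_wait_synced: ZFS is trying to write a transaction group as part of the import and the I/O never completes. As far as I know ZFS has no supported way to change a pool's vdev membership without importing it, but a read-only import skips most of that write path and is sometimes enough to get the data off. A sketch (again, `tank` stands in for the real pool name; root required):

```shell
# Read-only import: skips log replay and new transaction groups,
# which is the write path the stack trace shows hanging
sudo zpool import -o readonly=on -d /dev/disk/by-id -f tank

# If even that hangs, -F asks ZFS to roll back to an earlier
# transaction group (discarding the last few seconds of writes);
# -n first does a dry run to report whether recovery would succeed
sudo zpool import -F -n tank
```

If a read-only import works, the safest path is to back the data up, then destroy and recreate the pool, since `zpool offline`/`zpool detach` for the failed disk both require the pool to be imported writable.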
 