Can't Delete Pool

ryan0413

Cadet
Joined
May 18, 2022
Messages
8
I am trying to delete a pool consisting of two datasets and a zvol that I was using as iSCSI storage for several virtual machines, but I receive the error below when I try to delete it.

Screenshot 2023-04-17 210649.png


Code:
Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 210, in export
    zfs.export_pool(pool)
  File "libzfs.pyx", line 465, in libzfs.ZFS.__exit__
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 210, in export
    zfs.export_pool(pool)
  File "libzfs.pyx", line 1340, in libzfs.ZFS.export_pool
libzfs.ZFSException: cannot export 'tank0': pool is busy

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 115, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 46, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 40, in _call
    return methodobj(*params)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 40, in _call
    return methodobj(*params)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1322, in nf
    return func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 212, in export
    raise CallError(str(e))
middlewared.service_exception.CallError: [EFAULT] cannot export 'tank0': pool is busy
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 426, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 461, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1318, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1186, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool.py", line 1668, in export
    await self.middleware.call('zfs.pool.export', pool['name'])
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1386, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1343, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1349, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1264, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1249, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
middlewared.service_exception.CallError: [EFAULT] cannot export 'tank0': pool is busy

Screenshot 2023-04-17 210309.png


I can't think of anything still using the pool, dataset, or zvol, but I know there is activity when I check it with zpool iostat tank0. I even shut down my virtualization server in case there was still some kind of stale connection back to TrueNAS.

Screenshot 2023-04-17 211317.png
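
A minimal sketch of one way to check for open handles on the pool's mountpoint (assuming the default /mnt/tank0 path; these are standard Linux tools, nothing TrueNAS-specific):

Code:
# list any processes holding files open under the pool's mountpoint
sudo lsof +D /mnt/tank0

# alternatively, show processes using the mounted filesystem
sudo fuser -vm /mnt/tank0

If both come back empty, the "busy" hold is more likely on the zvol's block device than on the filesystem itself.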


Also, when I go to the dataset page and look at pool tank0, it tells me the path can't be found. I confirmed this in the shell as well.

Screenshot 2023-04-17 210631.png


Screenshot 2023-04-17 210722.png


Things I have tried:
  • Restarting TrueNAS
  • Made sure iSCSI service is turned off
  • Removed all configuration from iSCSI except the Target Global Configuration (which I believe is required)
  • Made sure tank0 is not used as a system pool
  • Deleting the pool from the shell with the force flag (received the same error; see the sketch below)
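
For reference, the forced deletion from the shell was along these lines (a sketch; the exact invocation is assumed, since these are the standard zpool commands for it):

Code:
# force-export the pool, detaching it from the system
sudo zpool export -f tank0

# or destroy it outright, since the data is expendable anyway
sudo zpool destroy -f tank0

Either way, the result was the same "cannot export 'tank0': pool is busy" error as in the GUI.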
Is there anything I'm overlooking? Or something else I need to try?

Thank you!
 


Arwen

MVP
Joined
May 17, 2014
Messages
3,611
It probably is the system dataset. Please supply the output of the following, preferably in code tags rather than screenshots:
zfs list -t all -r tank0
There should be an option somewhere to move the system dataset to another data pool, or even the boot-pool.
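
If you want to double-check from the shell, this sketch asks the middleware where the system dataset currently lives (midclt is the TrueNAS middleware client; the systemdataset.config call is assumed to be available on this release):

Code:
# report the pool currently hosting the system dataset
sudo midclt call systemdataset.config | python3 -m json.tool

If "pool" comes back as tank0, move the system dataset to another pool before trying the export again.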
 

ryan0413

Cadet
Joined
May 18, 2022
Messages
8
It probably is the system dataset. Please supply the output of the following, preferably in code tags rather than screenshots:
zfs list -t all -r tank0
There should be an option somewhere to move the system dataset to another data pool, or even the boot-pool.

I don't think it is the system dataset. I was able to verify this under System Settings --> Advanced.

Screenshot 2023-04-17 222846.png


Here's the output from zfs list -t all -r tank0

Code:
root@truenas[~]# zfs list -t all -r tank0
NAME                                USED  AVAIL     REFER  MOUNTPOINT
tank0                               711G   180G      200K  /mnt/tank0
tank0/ds5                           711G   180G      200K  /mnt/tank0/ds5
tank0/ds5/vm_iscsi_disks            711G   180G      192K  /mnt/tank0/ds5/vm_iscsi_disks
tank0/ds5/vm_iscsi_disks/vm-disks   711G   677G      214G  -
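
Since the only sizeable consumer left is the zvol (tank0/ds5/vm_iscsi_disks/vm-disks), it may be worth checking whether its block device node is still held open. A rough sketch (the /dev/zvol path mirrors the dataset name; the zdN device it points at varies per boot):

Code:
# resolve the zvol's device node
ls -l /dev/zvol/tank0/ds5/vm_iscsi_disks/vm-disks

# then see if anything has that device open (substitute the real zdN)
sudo fuser -v /dev/zd0

An in-kernel holder (such as an iSCSI target) won't necessarily show up here, but this rules out userspace processes.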
 

samarium

Contributor
Joined
Apr 8, 2023
Messages
192
If it is in use by path name, you could try renaming it and rebooting, so the old path would no longer be valid:
$ sudo zfs rename tank0/ds5 tank0/renamed
You could also clear the ZFS labels on the pool devices and reboot. Messy, but you are going to delete the pool anyway.
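
A rough sketch of the label-clearing route (destructive, and the device name is a placeholder; only reasonable because the pool is being destroyed anyway):

Code:
# identify the pool's member devices
zpool status tank0

# wipe the ZFS label on each member (replace sdX1 with the real device);
# note that labelclear may refuse while the pool is still imported, and -f
# only overrides the check for exported/foreign devices
sudo zpool labelclear -f /dev/sdX1

After that, reboot; without valid labels the pool can no longer import, and the stale entry can be cleaned up from the UI.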
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
tank0/ds5/vm_iscsi_disks            711G   180G      192K  /mnt/tank0/ds5/vm_iscsi_disks
tank0/ds5/vm_iscsi_disks/vm-disks   711G   677G      214G  -
Is the iSCSI service still running/configured?
 

ryan0413

Cadet
Joined
May 18, 2022
Messages
8
Is the iSCSI service still running/configured?
This is what I think it could be, but I have deleted all iSCSI shares and all of the configuration that I could, except the Target Global Configuration (which I believe is required). The iSCSI service is turned off as well, but I'm wondering if there's something else I need to do.
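
One more place a hold can hide: on SCALE the iSCSI target runs in-kernel (SCST), so a lingering target can keep the zvol open without any visible userspace process. A hedged sketch, assuming the SCST sysfs interface and the iscsi.target.query middleware call are available on this release:

Code:
# see whether the kernel target subsystem still has devices configured
ls /sys/kernel/scst_tgt/devices 2>/dev/null

# and whether any iSCSI targets remain defined in the middleware
sudo midclt call iscsi.target.query

If either still references the zvol, removing those entries (or rebooting after removal) should release the "pool is busy" hold.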
 