ZFS refuses to autoexpand due to incomplete usage of the disk

YJf2ivf5Zd

Cadet
Joined
Feb 17, 2019
Messages
5
Hi,

I'm new to this forum and have a question which I could not solve by Googling. I'll try to keep the post as brief as possible. Please ask if I omitted essential information. Thanks.

The scenario:
FreeNAS is running as a VM hosted by VMware ESXi.
I have a RAID-Z2 pool (amongst others) of 7 disks, which I enlarged over the last few months from 8 TB to 10 TB each.
Within the same time frame I upgraded to 11.1-U7 and later to 11.2-U1 (meaning I cannot tell with which version the mess started; the pool is still at the 11.1 feature level).

Now, after replacing the 7th disk, I expected the pool to autoexpand, but it did not.
root@as002aa:~ # zpool list
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
Ablage  50.8T  42.2T  8.53T        -         -   16%  83%  1.00x  ONLINE  /mnt
...
root@as002aa:~ # zpool status Ablage
  pool: Ablage
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: resilvered 6.03T in 2 days 01:56:33 with 0 errors on Sun Feb 17 04:34:13 2019
config:

        NAME                                            STATE     READ WRITE CKSUM
        Ablage                                          ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/5b0aef2a-2ba7-11e8-ae62-000c291575e9  ONLINE       0     0     0
            gptid/c6088214-9cec-11e8-a025-000c291575e9  ONLINE       0     0     0
            gptid/eb0db63a-30c1-11e9-97c7-000c291575e9  ONLINE       0     0     0
            gptid/29a27cf3-9e69-11e8-a025-000c291575e9  ONLINE       0     0     0
            gptid/5d27183d-a225-11e8-a025-000c291575e9  ONLINE       0     0     0
            gptid/532e6d79-28a0-11e8-a223-000c291575e9  ONLINE       0     0     0
            gptid/383f5d01-a398-11e8-a025-000c291575e9  ONLINE       0     0     0
After doing tons of research, I narrowed the problem down to the fact that two of the seven disks are not correctly partitioned:
root@as002aa:~ # gpart show
...
=>         40  19524485040  da1  GPT  (9.1T)
           40           88       - free -  (44K)
          128      4194304    1  freebsd-swap  (2.0G)
      4194432  15619587928    2  freebsd-zfs  (7.3T)
  15623782360   3900702720       - free -  (1.8T)

=>         40  19524485040  da2  GPT  (9.1T)
           40           88       - free -  (44K)
          128      4194304    1  freebsd-swap  (2.0G)
      4194432  19520290640    2  freebsd-zfs  (9.1T)
  19524485072            8       - free -  (4.0K)
...
This might have happened because I enlarged the underlying virtual disks and then repaired the GPT of the disks while they were already attached to the zpool: I used gpart recover daX to repair the invalid partition tables...

And now the question: is there a way to force ZFS to use the complete disks?
Put the other way round: how can ZFS be made to reintegrate a healthy disk without erasing and resilvering it?


TIA for any suggestions.
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hi YJf2,

You said that your pool is a 7-disk RAID-Z2 and that by now all drives have been replaced with 10 TB drives.

Because drives are always smaller than their labels suggest, the zpool list output of 50.8T is indeed consistent with seven 8 TB drives.

Are these drives physical or virtual? If virtual, are they thick or thin provisioned? If physical, are they all the same?

Do you have only a single drive that is not used completely, as you showed, or is that the case for all your drives?

With more information, we may be able to help you more.

Hope this gives you a few ideas,
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
This might have happened because I enlarged the underlying virtual disks and then repaired the GPT of the disks while they were already attached to the zpool: I used gpart recover daX to repair the invalid partition tables...
You are not supposed to use virtual disks with FreeNAS. That is what broke the functionality.
Have a look at this:

"Absolutely must virtualize FreeNAS!" ... a guide to not completely losing your data.
https://forums.freenas.org/index.ph...ide-to-not-completely-losing-your-data.12714/
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
The autoexpand flag should have been set to on by default. You can check it by typing zpool get autoexpand in a terminal.
Here is what that should show you, for each pool:
Code:
NAME          PROPERTY    VALUE   SOURCE
Backup        autoexpand  on      local
Emily         autoexpand  on      local
Irene         autoexpand  on      local
freenas-boot  autoexpand  off     default
You can try toggling it: run zpool set autoexpand=off Ablage and then, once that completes (which should be instant), reverse it with zpool set autoexpand=on Ablage.
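For reference, here is the whole toggle as a sketch, together with a manual expansion request per member device (the gptid is just one of yours, copied from the zpool status above; you would repeat the last command for each member):
Code:
zpool set autoexpand=off Ablage
zpool set autoexpand=on Ablage
# Ask ZFS to expand this member onto any newly available space:
zpool online -e Ablage gptid/eb0db63a-30c1-11e9-97c7-000c291575e9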

This may cause ZFS to expand automatically, but it likely will not do so.
The reason is that when FreeNAS does a disk replace and has direct access to the drive, it fills the entire drive after the swap space with the ZFS partition. Because you were presenting virtual disks, FreeNAS could only see part of the physical disk and only partitioned that part. Your subsequent resize of the virtual disk did not cause FreeNAS to change the partition size. Now you have a partition that does not fill the disk, and you are constrained by that partition size.
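You can verify this constraint from the shell. A sketch, using da1 from your gpart output:
Code:
gpart show da1        # note the ~1.8T of free space after the freebsd-zfs partition
diskinfo -v da1       # "mediasize" shows the raw disk size FreeBSD sees
zpool list -v Ablage  # per-vdev view; EXPANDSZ stays "-" because the partition, not the disk, is the limit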
The best thing to do is to copy all your data off the system and rebuild it properly, giving FreeNAS direct access to the disks: put the data disks on a controller that can be passed through whole to the FreeNAS virtual machine, so FreeNAS has direct control over the disks.
If you can't do that, you may be able to reach your goal by removing and replacing each disk in turn, resilvering each one again. This should re-partition the drives with the correct partition size and allow the pool to autoexpand.
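For completeness, since you asked about avoiding a resilver: in principle the undersized partitions could also be grown in place and ZFS told to expand onto them. This is an untested sketch, not a FreeNAS-supported procedure; the partition index (2) comes from your gpart output, the gptid placeholder must be replaced with the one matching da1 in zpool status, and you should have verified backups first:
Code:
# Grow the freebsd-zfs partition (index 2) of da1 to fill the free tail of the disk:
gpart resize -i 2 da1
# Then ask ZFS to expand onto the grown partition:
zpool online -e Ablage gptid/<da1-partition-gptid>
Note that some FreeBSD versions refuse to resize a partition that is in use; if that happens, the replace-in-turn approach above is the safer path.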
 

YJf2ivf5Zd

Cadet
Joined
Feb 17, 2019
Messages
5
Thanks for your reply,
Are these drives physical or virtual?
Virtual, I am brave...
If virtual, are they thick or thin provisioned?
... but not crazy ;) - Thick.
If physical, are they all the same?
No, that's why they are virtual.
Do you have only a single drive that is not used completely, as you showed, or is that the case for all your drives?
Two out of seven don't use the full space.
 

YJf2ivf5Zd

Cadet
Joined
Feb 17, 2019
Messages
5
Thanks for your reply,
The autoexpand flag should have been set to on by default. You can check it by typing zpool get autoexpand in a terminal.
Here is what that should show you, for each pool:
Code:
NAME          PROPERTY    VALUE   SOURCE
Backup        autoexpand  on      local
Emily         autoexpand  on      local
Irene         autoexpand  on      local
freenas-boot  autoexpand  off     default
You can try toggling it: run zpool set autoexpand=off Ablage and then, once that completes (which should be instant), reverse it with zpool set autoexpand=on Ablage.
Already tried that, but the question is no longer about autoexpand; it is about partitioning, or about how disks are assigned to a pool...
... you may be able to reach your goal by removing and replacing each disk in turn, doing another rebuild of each disk. This should re-partition the drives with the correct partition size and allow the pool to autoexpand.
This should work, for sure. I just wanted to learn a little more about the partitioning underneath, and why it went wrong...
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
This should work, for sure. I just wanted to learn a little more about the partitioning underneath, and why it went wrong...
I am not sure how you handled this; I can only guess. The amount of the physical disk exposed to FreeNAS needed to be changed before the disk was added back to the system.
The virtual disk needed to be removed from the VM, reconfigured with the new capacity, and then added back. That should have caused FreeNAS to partition the full capacity of the virtual disk.
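If you redo a disk that way, you can confirm from the FreeNAS shell that the full capacity is visible before starting the replace. A sketch:
Code:
diskinfo -v da1   # "mediasize in bytes" should match the new virtual disk size
gpart show da1    # after the replace, freebsd-zfs should cover the disk minus the 2G swap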
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hi again YJf2,

I would definitely go with Chris's explanation on this one: autoexpand is considered by FreeNAS at the moment it starts using a new drive. Here, FreeNAS did not notice that the drive was extended, because a physical drive can never grow in size. Remember that FreeNAS is designed to run on bare metal and, as such, does not account for the flexibility offered by virtualization, like vMotion, variable disk sizes, etc.

For FreeNAS to notice that a drive is now bigger, the drive must be re-added to the pool, and for that it must be removed first. If you have enough storage, create a new 10 TB virtual drive, add it to your FreeNAS, replace one of the existing drives that is not partitioned 100%, and let resilvering run. Once done, remove the old virtual drive, delete it, and repeat the process until all of your virtual drives are in FreeNAS at full size.

If you do not have enough free space for that, you will have to rely on your redundancy: degrade your pool by taking one of your virtual drives offline, delete that virtual drive, and create a new 10 TB one. Add it in FreeNAS and resilver. Repeat until all your drives are replaced and FreeNAS detects the opportunity to autoexpand, as in the sketch below.
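Done from the shell, one round of that cycle might look roughly like this. A sketch only; the device names are placeholders, and the FreeNAS GUI replace workflow is preferable because it also creates the swap partition for you:
Code:
# Take one member offline (use the gptid shown by zpool status):
zpool offline Ablage gptid/<old-member-gptid>
# In ESXi: delete that virtual disk, create a new 10 TB one, attach it to the VM.
# Then replace the offlined member with the new device and let it resilver:
zpool replace Ablage gptid/<old-member-gptid> da7
zpool status Ablage   # watch the resilver progress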

Good luck
 