YJf2ivf5Zd
Hi,
I'm new to this forum and have a question which I could not solve by Googling... I'll try to keep the post as brief as possible. Please ask if I omitted essential information... Thanks.
The scenario:
FreeNAS is running as a VM hosted by VMware ESXi.
I have a RAID-Z2 pool (amongst others) consisting of 7 disks, each of which I enlarged over the last months from 8 TB to 10 TB.
Within the same time frame I upgraded to 11.1-U7 and later to 11.2-U1 (this means I cannot tell with which version the mess kicked in; the pool is still on the 11.1 feature level).
Now, after replacing the 7th disk, I expected the pool to autoexpand, but it did not happen.
After doing tons of research I narrowed the problem down to the fact that two of the seven disks are not correctly partitioned (see the gpart show output below).
This might have happened because I enlarged the underlying virtual disks and repaired the GPT of the disks only after they were already attached to the zpool: I used gpart recover daX to repair the invalid partition tables...
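In case it helps to rule things out: I assume the first things to check are the autoexpand property and what ZFS itself reports as expandable space per member. A minimal sketch of those checks (pool name Ablage as above, nothing below is actual output):
# Is the pool allowed to grow automatically when a member gets bigger?
zpool get autoexpand Ablage
# Per-vdev / per-member view; the EXPANDSZ column shows unused space ZFS sees on each member.
zpool list -v Ablage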
root@as002aa:~ # zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
Ablage   50.8T  42.2T  8.53T        -         -    16%    83%  1.00x  ONLINE  /mnt
...
root@as002aa:~ # zpool status Ablage
  pool: Ablage
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: resilvered 6.03T in 2 days 01:56:33 with 0 errors on Sun Feb 17 04:34:13 2019
config:

        NAME                                            STATE     READ WRITE CKSUM
        Ablage                                          ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/5b0aef2a-2ba7-11e8-ae62-000c291575e9  ONLINE       0     0     0
            gptid/c6088214-9cec-11e8-a025-000c291575e9  ONLINE       0     0     0
            gptid/eb0db63a-30c1-11e9-97c7-000c291575e9  ONLINE       0     0     0
            gptid/29a27cf3-9e69-11e8-a025-000c291575e9  ONLINE       0     0     0
            gptid/5d27183d-a225-11e8-a025-000c291575e9  ONLINE       0     0     0
            gptid/532e6d79-28a0-11e8-a223-000c291575e9  ONLINE       0     0     0
            gptid/383f5d01-a398-11e8-a025-000c291575e9  ONLINE       0     0     0
root@as002aa:~ # gpart show
...
=>          40  19524485040  da1  GPT  (9.1T)
            40           88       - free -  (44K)
           128      4194304    1  freebsd-swap  (2.0G)
       4194432  15619587928    2  freebsd-zfs  (7.3T)
   15623782360   3900702720       - free -  (1.8T)

=>          40  19524485040  da2  GPT  (9.1T)
            40           88       - free -  (44K)
           128      4194304    1  freebsd-swap  (2.0G)
       4194432  19520290640    2  freebsd-zfs  (9.1T)
   19524485072            8       - free -  (4.0K)
...
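For reference, the gptid names from the zpool status output can be matched to the daX devices above with glabel; this is just the standard FreeBSD command, no FreeNAS specifics assumed:
# Lists each gptid/... label together with the partition (e.g. daXp2) it lives on.
glabel status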
And now the question: is there a way to force ZFS to use the complete disks?
Or, the other way round: how can ZFS be forced to reintegrate a healthy disk without erasing and resilvering it?
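From what I have read so far, the usual sequence would be roughly the one below, but I am not sure whether it is safe on partitions that are already part of the vdev, which is why I am asking. Sketch only; da1 and the gptid are just examples taken from the outputs above, not necessarily the matching pair:
# Grow partition 2 (freebsd-zfs) of the affected disk to the end of the disk, keeping 4k alignment.
gpart resize -i 2 -a 4k da1
# Ask ZFS to pick up the new partition size on that member (use the label that actually sits on the resized disk).
zpool online -e Ablage gptid/5b0aef2a-2ba7-11e8-ae62-000c291575e9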
TIA for any suggestions.