Best way to fix a pool with vdevs that have different numbers of drives?

nigon

Cadet
Joined
Nov 20, 2023
Messages
6
So, when I first set up my TrueNAS Scale server, I didn't really know what I was doing, and things got a little lopsided, and now I have a zpool with two vdevs with different numbers of disks:
[CODE]
NAME                                        STATE     READ WRITE CKSUM
plex                                        ONLINE       0     0     0
  raidz1-0                                  ONLINE       0     0     0
    4ce1385e-474e-4390-aaa3-9450082761cf    ONLINE       0     0     0
    870395e7-70fb-4716-a80f-db57c5488c5f    ONLINE       0     0     0
    bff8f1ee-2e6b-43e9-a97b-d7be836299ba    ONLINE       0     0     0
  raidz1-1                                  ONLINE       0     0     0
    e123afb5-e233-4c71-b2c3-42e303a35f82    ONLINE       0     0     0
    c3a185cf-5b75-45e9-acc7-23896836ca2a    ONLINE       0     0     0
    52a98154-81eb-4530-aceb-efacec3dac16    ONLINE       0     0     0
    38d4a2a4-efe1-4f80-9a07-252332bbf64f    ONLINE       0     0     0
    e5e46d79-b7ad-48a1-9b2f-db154d41254f    ONLINE       0     0     0
[/CODE]

For reference, those are all 2TB SSDs.

Apparently this isn't optimal, and it results in an annoying warning indicator in the GUI vdev status. My hope is that there's a way to fix this that doesn't require me to blow away the entire pool config, dataset config, the attached K8s apps, etc. I couldn't find anything online, and nothing in the web GUI hints at being able to alter the underlying vdevs of a pool. If there is a way to do this with minimal pain, I assume the workflow would be something like: 1. rsync everything to my backup NAS, 2. bring down the zpool, 3. destroy and re-create the vdevs, 4. copy the data back.
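Something like this rough sketch is what I have in mind (hostnames and paths are placeholders, and I haven't actually tested any of it):

[CODE]
# 1. Copy each dataset's contents to the backup NAS (paths are made up)
rsync -aHAX --progress /mnt/plex/plex-media/  backup-nas:/volume1/backup/plex-media/
rsync -aHAX --progress /mnt/plex/plex-config/ backup-nas:/volume1/backup/plex-config/

# 2./3. Take the pool offline and rebuild it with the new vdev layout
zpool export plex
# ...destroy and re-create the pool (GUI or CLI) with the layout I actually want...

# 4. Copy everything back into the re-created datasets
rsync -aHAX --progress backup-nas:/volume1/backup/plex-media/  /mnt/plex/plex-media/
rsync -aHAX --progress backup-nas:/volume1/backup/plex-config/ /mnt/plex/plex-config/
[/CODE]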

I assume the optimal arrangement would be two raidz1 vdevs of 4 SSDs each. The plex label should give you an idea of the I/O patterns, if that matters: lots of large files, occasional writing, frequent reading. Plex is running via a K8s app, with two datasets mounted into the container.

So, here are my questions:

- Is there a (relatively) easy way to do this that doesn't involve me basically deleting the entire pool, all the datasets, and rebuilding everything?
- Is it worth the effort?
- Assuming the answer to the above is "yes", is the planned arrangement of two 4-disk RAIDZ1 data vdevs the way to go?

Other system info, if it matters:

OS Version: TrueNAS-SCALE-22.12.3.3
Product: Super Server
Model: AMD EPYC 7302 16-Core Processor
Memory: 63 GiB
[CODE]
storage dataset query id,type,used,available,usedbychildren,pool
+----------------------+------------+------+--------+----------------+-----------+
| id                   | type       | pool | used   | usedbychildren | available |
+----------------------+------------+------+--------+----------------+-----------+
| plex                 | FILESYSTEM | plex | <dict> | <dict>         | <dict>    |
| plex/plex-config     | FILESYSTEM | plex | <dict> | <dict>         | <dict>    |
| plex/plex-media      | FILESYSTEM | plex | <dict> | <dict>         | <dict>    |
| plex/ix-applications | FILESYSTEM | plex | <dict> | <dict>         | <dict>    |
+----------------------+------------+------+--------+----------------+-----------+
[/CODE]
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
- Is there a (relatively) easy way to do this that doesn't involve me basically deleting the entire pool, all the datasets, and rebuilding everything?
No.
(If you wait long enough for vdev expansion to eventually land in production, some time in 2024-25, there will be a way to widen the first vdev to 4 or 5 drives. There's no way, not even a long and fiendishly difficult way, to reduce the second vdev from 5 to 4 drives.)
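For the record, once that feature does land, widening should be a single attach per added disk, along these lines (hypothetical until it actually ships; the device path is a placeholder):

[CODE]
# Hypothetical raidz expansion: add one disk to the existing 3-wide vdev
zpool attach plex raidz1-0 /dev/disk/by-partuuid/<new-disk>
# Existing data is rewritten across the wider vdev in the background;
# repeat with another disk to go from 4-wide to 5-wide.
[/CODE]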
- Is it worth the effort?
I'd say yes. Especially considering that the whole data set can be backed up to a single large HDD, or a mirrored pair to keep some redundancy, but you have a backup anyway, don't you?
- Assuming the answer to the above is "yes" is the planned arrangement of two 4-disk RAIDZ1 data Vdevs the way to go?
This. Depends. On. What. You. Want. To. Achieve.
So take the time to think about your requirements and about pool layout. With raidz#, you get one and only one chance to get it right: At pool creation.
With SSDs, raidz1 is arguably safe enough. A single 8-wide raidz2 would be safer, though future expansion would then come in groups of 8 drives rather than groups of 4.
Two vdevs give twice the IOPS of a single vdev, but with SSDs (possibly even NVMe, given the system…) a single vdev probably provides enough IOPS already.
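For reference, the layout you're describing would look roughly like this from the command line (device names are placeholders; on SCALE you'd normally build it through the GUI, which uses partition UUIDs):

[CODE]
# Sketch only: one pool with two 4-wide raidz1 data vdevs
zpool create plex \
    raidz1 sda sdb sdc sdd \
    raidz1 sde sdf sdg sdh
zpool status plex   # both vdevs now report the same width
[/CODE]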
 

nigon

Cadet
Joined
Nov 20, 2023
Messages
6
No.
(If you wait long enough for vdev expansion to eventually land in production, some time in 2024-25, there will be a way to widen the first vdev to 4 or 5 drives. There's no way, not even a long and fiendishly difficult way, to reduce the second vdev from 5 to 4 drives.)
So, I take it my only way to accomplish this is to back everything up, blow away the entire pool, and start again? Should I just rsync the individual datasets and re-create everything manually, or is there some backup/snapshot tooling I should be using here?

I'd say yes. Especially considering that the whole data set can be backed up to a single large HDD, or a mirrored pair to keep some redundancy, but you have a backup anyway, don't you?
Yah, there's a second Synology NAS that I use as a backup.

With SSDs, raidz1 is arguably safe enough. A single 8-wide raidz2 would be safer, though future expansion would then come in groups of 8 drives rather than groups of 4.
My primary reason for the 4-drive vdevs was so I didn't have to buy large numbers of drives at a time. The plan is to add another 8 2TB SSDs sometime down the line, spreading the purchases out in two parts. I figured RAIDZ1 with SSDs was good enough for reliability. The other option would be RAIDZ2 with 4-drive vdevs, unless there's some reason I shouldn't do that.
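Rough usable-capacity math with the 2 TB drives, ignoring ZFS overhead, as I understand it:

[CODE]
2 x 4-wide RAIDZ1:  2 x (4-1) x 2 TB = 12 TB usable, survives 1 disk failure per vdev
2 x 4-wide RAIDZ2:  2 x (4-2) x 2 TB =  8 TB usable, survives 2 disk failures per vdev
1 x 8-wide RAIDZ2:      (8-2) x 2 TB = 12 TB usable, survives any 2 disk failures
[/CODE]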
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
So, I take it my only way to accomplish this is to back everything up, blow away the entire pool, and start again?
Today, yes.

Should I just rsync the individual datasets and re-create everything manually, or is there some backup/snapshot tooling I should be using here?
You can use ZFS Replication to obtain a mirror structure on your backup system... IF it's running ZFS (Synology doesn't iirc).
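Under the hood a replication task is just zfs send/recv, roughly like this (snapshot, host, and pool names are placeholders):

[CODE]
# Rough sketch of what a replication task does under the hood
zfs snapshot -r plex@migrate
zfs send -R plex@migrate | ssh backup-box zfs recv -F backuppool/plex
[/CODE]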

I figure RAIDZ1 with SSDs was good enough for reliability.
Kind of, yes; the main issue is that SSDs tend to die suddenly, without giving you warnings. I would probably be comfortable with 2x VDEVs of 4x SSDs in RAIDZ1; not so much with a single 8-wide RAIDZ1 VDEV. It also depends on the SSDs you buy.

Suggested reading:

Lastly, I am curious how you ended up with differently sized vdevs. :smile:

P.S.: please use [CODE][/CODE] for multiple lines of code instead.
 

nigon

Cadet
Joined
Nov 20, 2023
Messages
6
You can use ZFS Replication to obtain a mirror structure on your backup system... IF it's running ZFS (Synology doesn't iirc).
Yah. The backup NAS is Synology; however, assembling a TrueNAS server from parts as a sort of "intermediary" isn't out of the question. Looking at the docs for creating a replication task, I can use a TrueNAS SCALE/CORE server as the target, but they don't mention whether the storage topology needs to match. Not sure if I need a similar arrangement of vdevs, or if anything will work as long as the zpool on the temp server is large enough.

Lastly, I am curious how you ended up with differently sized vdevs
Package with the 5 SSDs got delayed, I already had 3, I got impatient, and didn't realize it made a difference, lmao
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Not sure if I need a similar arrangement of vdevs, or if anything will work, as long as the zpool for the temp server is large enough.
Anything works in that regard.
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
You should take the time to learn about replication. For the most part, it's a lot better than good ol' rsync. But if the destination isn't ZFS, it's obviously a moot point. I use replication for offsite backups in case of fire, theft, etc.
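For example, after the first full copy, later runs only send the blocks that changed between snapshots, roughly like this (names are placeholders):

[CODE]
# Incremental replication: only blocks changed since @yesterday get sent
zfs snapshot -r plex@today
zfs send -R -i plex@yesterday plex@today | ssh offsite zfs recv backuppool/plex
[/CODE]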

Where is your application pool?
 

nigon

Cadet
Joined
Nov 20, 2023
Messages
6
Where is your application pool?
So, the plex zpool I showed earlier is the one and only on the system, so I assume that one. The GUI also says the datasets are currently holding container config. Unless you're asking for something more specific?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
So, the plex zpool I showed earlier is the one and only on the system, so I assume that one. The GUI also says the datasets are currently holding container config. Unless you're asking for something more specific?
The first time you open the apps tab you are asked where to create it. It's the only pool on the system, got it.
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
If you have no plans for expansion, Z2 is the way to go if you want very high safety.
 