So, when I first set up my TrueNAS Scale server I didn't really know what I was doing, and things got a little lopsided. I now have a zpool with two vdevs that have different numbers of disks:
```
NAME                                      STATE     READ WRITE CKSUM
plex                                      ONLINE       0     0     0
  raidz1-0                                ONLINE       0     0     0
    4ce1385e-474e-4390-aaa3-9450082761cf  ONLINE       0     0     0
    870395e7-70fb-4716-a80f-db57c5488c5f  ONLINE       0     0     0
    bff8f1ee-2e6b-43e9-a97b-d7be836299ba  ONLINE       0     0     0
  raidz1-1                                ONLINE       0     0     0
    e123afb5-e233-4c71-b2c3-42e303a35f82  ONLINE       0     0     0
    c3a185cf-5b75-45e9-acc7-23896836ca2a  ONLINE       0     0     0
    52a98154-81eb-4530-aceb-efacec3dac16  ONLINE       0     0     0
    38d4a2a4-efe1-4f80-9a07-252332bbf64f  ONLINE       0     0     0
    e5e46d79-b7ad-48a1-9b2f-db154d41254f  ONLINE       0     0     0
```
For reference, those are all 2TB SSDs.
Apparently this isn't optimal, and it results in an annoying warning indicator in the GUI vdev status. My hope is that there's a way to fix this that doesn't require me to blow away the entire pool config, dataset config, the attached K8s apps, etc. I couldn't really find anything online, and nothing in the web GUI hints at being able to alter the underlying vdevs of an existing pool. If there is a way to do this with minimal pain, I assume the workflow would be something like this (sketched below):
1. rsync everything to my backup NAS.
2. Bring down the zpool.
3. Destroy and re-create the vdevs.
4. Copy the data back.
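For concreteness, here's a rough sketch of what I imagine that workflow looking like in raw ZFS commands. `backupnas` and the `backup/plex` dataset on the other box are placeholders for my setup, and on SCALE I'd presumably do the destroy/re-create steps through the GUI rather than at the shell:

```sh
# 1. Snapshot everything recursively and replicate to the backup NAS.
#    (zfs send -R preserves the dataset layout, snapshots, and properties,
#    which plain rsync would not.)
zfs snapshot -r plex@migrate
zfs send -R plex@migrate | ssh backupnas zfs receive -F backup/plex

# 2./3. Destroy the pool and re-create it with the new vdev layout
#    (via the TrueNAS GUI, or zpool destroy / zpool create at the shell).

# 4. Pull everything back onto the rebuilt pool.
ssh backupnas zfs send -R backup/plex@migrate | zfs receive -F plex
```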
I assume the most optimal arrangement would be two raidz1 vdevs of 4 SSDs each (see the sketch below). The `plex` label should give you an idea of the I/O patterns, if that matters: lots of large files, occasional writing, frequent reading. Plex is running via a K8s app, with two datasets mounted into the container.
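In raw zpool syntax, the layout I'm picturing would be something like the following. Device names are placeholders for my eight 2TB SSDs, and on SCALE the pool creation wizard would presumably build the equivalent thing:

```sh
# Two 4-disk RAIDZ1 data vdevs in a single pool.
# Placeholder device names -- TrueNAS itself would use partition UUIDs here.
zpool create plex \
    raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd \
    raidz1 /dev/sde /dev/sdf /dev/sdg /dev/sdh
```

If I'm counting right, that's the same nominal capacity as the current 3+5 layout (six data disks' worth either way, so roughly 12TB raw); the gain would just be the balanced layout.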
So, here are my questions:
- Is there a (relatively) easy way to do this that doesn't involve basically deleting the entire pool, all the datasets, and rebuilding everything?
- Is it worth the effort?
- Assuming the answer to the above is "yes", is the planned arrangement of two 4-disk RAIDZ1 data vdevs the way to go?
Other system info, if it matters:
- OS Version: TrueNAS-SCALE-22.12.3.3
- Product: Super Server
- Model: AMD EPYC 7302 16-Core Processor
- Memory: 63 GiB
And the datasets, from the TrueNAS CLI:

```
storage dataset query id,type,used,available,usedbychildren,pool
+----------------------+------------+------+--------+----------------+-----------+
| id                   | type       | pool | used   | usedbychildren | available |
+----------------------+------------+------+--------+----------------+-----------+
| plex                 | FILESYSTEM | plex | <dict> | <dict>         | <dict>    |
| plex/plex-config     | FILESYSTEM | plex | <dict> | <dict>         | <dict>    |
| plex/plex-media      | FILESYSTEM | plex | <dict> | <dict>         | <dict>    |
| plex/ix-applications | FILESYSTEM | plex | <dict> | <dict>         | <dict>    |
+----------------------+------------+------+--------+----------------+-----------+
```
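(The CLI renders the size columns as <dict>; if the actual numbers matter to anyone, plain `zfs list` shows them in human-readable form:)

```sh
# Human-readable usage per dataset.
zfs list -r -o name,used,avail,mountpoint plex
```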