Removing Vdev

huseyinozsut

Dabbler
Joined
Jul 12, 2021
Messages
24
Hi,

I am new to TrueNAS and I was tinkering around, trying to learn. I have around 30 disks of various sizes. I added them to the pool one by one (I wanted a JBOD pool). But as I understand it, even though they are not paired with any other disk (as in RAID 5 or a stripe), I cannot remove a vdev.

The thing is, I have a lot of free space. I want to empty one disk, reduce the size of the pool, and remove the empty vdev.

I guess this is not possible, right?
 

QonoS

Explorer
Joined
Apr 1, 2021
Messages
87
A single-disk VDEV should be removable.

Have a look at zpool-remove(8):
"
zpool remove [-npw] pool device...
Removes the specified device from the pool. This command supports removing hot spare, cache, log, and both mirrored and non-redundant primary top-level vdevs, including dedup and special vdevs.
When the primary pool storage includes a top-level raidz vdev only hot spare, cache, and log devices can be removed.
"
 
Last edited:

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
If the pool is a stripe of single-drive vdevs (= 1-wide mirrors), any drive can be removed with this command.
Please be aware that your configuration is very dangerous and that the failure of any drive (or an improper use of the remove command) will result in the loss of the whole pool and all its content!
If this is not a learning exercise and involves valuable data, what you should really do is move the data out of this pool, delete the pool, and build a new pool using redundant vdevs (at least 2-way mirrors or raidz, but preferably raidz2).
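Before removing anything, it is worth double-checking the pool layout (a minimal sketch, assuming your pool is named tank):

zpool status tank

Each top-level entry in the config section is a vdev; single-disk vdevs show up as bare disks, not under a mirror or raidz heading.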
 

huseyinozsut

Dabbler
Joined
Jul 12, 2021
Messages
24
If the pool is a stripe of single-drive vdevs (= 1-wide mirrors), any drive can be removed with this command.
Please be aware that your configuration is very dangerous and that the failure of any drive (or an improper use of the remove command) will result in the loss of the whole pool and all its content!
If this is not a learning exercise and involves valuable data, what you should really do is move the data out of this pool, delete the pool, and build a new pool using redundant vdevs (at least 2-way mirrors or raidz, but preferably raidz2).
I learned that it is very dangerous by testing it. I removed one drive and my pool went offline. :grin: Then I panicked and opened this thread. When making the setup, I assumed that if one disk went bad, I would lose just the data on that disk.

The data inside the pool is somewhat important for one month; if I lose it, it is not a nightmare for me, but I will lose around 20-25 days of work.

--------------------

I don't have enough spare disk space outside of the pool, so I need to reduce the size of the pool one disk at a time.

I have five 3TB disks outside the pool, so I will create a raidz1 (or raidz2) pool with them, copy some files over from my first pool, and open up some space. Then, in my first pool, I will empty one disk onto the rest of the pool, remove that disk, and add it to my second pool. One by one... It is possible, isn't it?

---------------

One last question:

One-disk redundancy is OK by me; I want the maximum disk space possible. What I actually need is 30 pools with a single disk each, but combined when sharing so they appear as one big disk. So if one disk fails, I am OK with losing that data, but I am not OK with losing the whole thing. Is that possible? Under normal conditions, if I owned disks of the same size, I would definitely go for raidz2. But I have 2TB, 3TB, 4TB and 6TB disks. That means I would need to set up 4 different pools and lose a lot of usable space to redundancy.
 

huseyinozsut

Dabbler
Joined
Jul 12, 2021
Messages
24
A single-disk VDEV should be removable.

Have a look at zpool-remove(8):
"
zpool remove [-npw] pool device...
Removes the specified device from the pool. This command supports removing hot spare, cache, log, and both mirrored and non-redundant primary top-level vdevs, including dedup and special vdevs.
When the primary pool storage includes a top-level raidz vdev only hot spare, cache, and log devices can be removed.
"
Thank you very much. This is exactly what I need.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
30 pools would be exposed as 30 different "drives". What you want to do requires a single pool. Maximum space with one-drive redundancy would be raidz1. You can stripe different raidz1 vdevs in the pool. These vdevs would best have the same number of drives (same width). Each raidz1 vdev would best comprise disks of the same size, to avoid wasting space (and if that's not the case, the vdev would expand upon replacing the smaller drives). What you may have missed is that you can stripe together vdevs made from drives of different sizes: for instance, 5*3TB + 5*2TB in two raidz1 vdevs makes a single pool with 20 TB of usable space.

You can add further raidz vdevs to your pool, but you can neither remove them nor change a raidz1 into a raidz2: advance planning required!

How many drives do you have in each size?

If they come in multiples of 5, you could make a 5-wide raidz1 with the spare 3 TB drives to initiate your new pool. Move some data into it. Remove 5 drives of the same size from your old pool. Make a second 5-wide vdev with these drives and add this vdev to the new pool. Repeat…

You can adapt to different widths if that better fits your distribution of drives. Or go for safer raidz2. But this should be planned from the start.
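In command form, the migration loop above would look roughly like this (hypothetical pool and disk names; in TrueNAS you would normally do the equivalent through the GUI):

zpool create newpool raidz1 da0 da1 da2 da3 da4
# move data over, then free drives from the old pool one at a time:
zpool remove oldpool da5
# once five same-size drives are freed, add them as the next 5-wide vdev:
zpool add newpool raidz1 da5 da6 da7 da8 da9

zpool remove only works here because the old pool is made of single-disk vdevs; each removed drive's data is remapped onto the remaining vdevs.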
 

huseyinozsut

Dabbler
Joined
Jul 12, 2021
Messages
24
30 pools would be exposed as 30 different "drives". What you want to do requires a single pool. Maximum space with one-drive redundancy would be raidz1. You can stripe different raidz1 vdevs in the pool. These vdevs would best have the same number of drives (same width). Each raidz1 vdev would best comprise disks of the same size, to avoid wasting space (and if that's not the case, the vdev would expand upon replacing the smaller drives). What you may have missed is that you can stripe together vdevs made from drives of different sizes: for instance, 5*3TB + 5*2TB in two raidz1 vdevs makes a single pool with 20 TB of usable space.

You can add further raidz vdevs to your pool, but you can neither remove them nor change a raidz1 into a raidz2: advance planning required!

How many drives do you have in each size?

If they come in multiples of 5, you could make a 5-wide raidz1 with the spare 3 TB drives to initiate your new pool. Move some data into it. Remove 5 drives of the same size from your old pool. Make a second 5-wide vdev with these drives and add this vdev to the new pool. Repeat…

You can adapt to different widths if that better fits your distribution of drives. Or go for safer raidz2. But this should be planned from the start.

Thank you for your answer. I have twelve 2TB disks, seven 3TB disks, five 4TB disks, and five 6TB disks.

I think the best solution for me would be:

1) Create a raidz1 pool with the 2TB disks
2) Create a raidz1 pool with the 3TB disks
3) Buy two more 4TB disks and create a raidz1 pool with the 4TB disks
4) Create one pool for each 6TB disk (so as not to lose disk space)

This way, I will have 8 different network disks. But that is OK for me for now. In the future, if I want to set up a raidz1 pool with my 6TB disks, I will need 18TB of free space: empty three 6TB disks into that space, set up the raidz1, and keep adding disks...
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
pool ≠ vdev
Redundancy and data safety are at the vdev level. A pool contains as many vdevs as needed to reach the desired space or performance.

If by "create a raidz1 pool with the 2TB/3TB disks" you mean creating a pool with one single vdev of all twelve drives (counting the 3TB spares), be warned that there is a practical limit to the width of vdevs, and that 12 drives would reach that limit and be quite unsafe with raidz1. Advisable configurations would rather be
a/ 4 raidz1 vdevs of 3 drives; or
b/ 3 raidz1 vdevs of 4 drives; possibly
c/ 2 raidz1 vdevs of 6 drives.

Your mix of drives actually fits rather well with the design of a single pool of 5-wide raidz1 vdevs.
The 29 active drives provide 95 TB of raw storage with no tolerance to failure (and "a lot of free space" according to your first post). With the 5 spares you have 110 TB.
One pool with z1(5*6TB) + z1(5*4TB) + 2*z1(5*3TB) + 2*z1(5*2TB) provides 80 TB of storage after parity, with tolerance to the loss of one drive (and possibly more… as long as the failed drives belong to different vdevs). That leaves two 2TB and two 3TB drives; buy one more drive of 2TB or larger and this could be an extra raidz1 vdev, z1(2+2+3+3+x) (8 TB after parity, expanding to 12 TB by replacing the 2TB drives with 3TB drives or larger).
If 80-88 TB of storage is enough, and your hardware can accommodate 35 drives, you could have all of that in one single pool, with one single mount point.
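For reference, a raidz1 vdev's usable space is roughly (width - 1) × the size of its smallest member, so:

5×6TB → 4×6 = 24 TB
5×4TB → 4×4 = 16 TB
2 × (5×3TB) → 2×4×3 = 24 TB
2 × (5×2TB) → 2×4×2 = 16 TB

for a total of about 80 TB before filesystem overhead.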
 

huseyinozsut

Dabbler
Joined
Jul 12, 2021
Messages
24
pool ≠ vdev
Redundancy and data safety are at the vdev level. A pool contains as many vdevs as needed to reach the desired space or performance.

If by "create a raidz1 pool with the 2TB/3TB disks" you mean creating a pool with one single vdev of all twelve drives (counting the 3TB spares), be warned that there is a practical limit to the width of vdevs, and that 12 drives would reach that limit and be quite unsafe with raidz1. Advisable configurations would rather be
a/ 4 raidz1 vdevs of 3 drives; or
b/ 3 raidz1 vdevs of 4 drives; possibly
c/ 2 raidz1 vdevs of 6 drives.

Your mix of drives actually fits rather well with the design of a single pool of 5-wide raidz1 vdevs.
The 29 active drives provide 95 TB of raw storage with no tolerance to failure (and "a lot of free space" according to your first post). With the 5 spares you have 110 TB.
One pool with z1(5*6TB) + z1(5*4TB) + 2*z1(5*3TB) + 2*z1(5*2TB) provides 80 TB of storage after parity, with tolerance to the loss of one drive (and possibly more… as long as the failed drives belong to different vdevs). That leaves two 2TB and two 3TB drives; buy one more drive of 2TB or larger and this could be an extra raidz1 vdev, z1(2+2+3+3+x) (8 TB after parity, expanding to 12 TB by replacing the 2TB drives with 3TB drives or larger).
If 80-88 TB of storage is enough, and your hardware can accommodate 35 drives, you could have all of that in one single pool, with one single mount point.

Thank you for your answer. Then the best approach for me would be to push the limits of available space without increasing the risk of data loss much, so I will need to create 6-wide raidz1 vdevs.

I will have:
1) two 6-wide raidz1 vdevs with my 2TB disks
2) one 6-wide raidz1 vdev with my 3TB disks (with one left over outside this system? Maybe I will sell it)
3) one 6-wide raidz1 vdev with my 4TB disks (will need to buy one more)
4) one 6-wide raidz1 vdev with my 6TB disks (will need to buy one more)

I am not sure about combining them in one pool or not. Each vdev could be its own pool (to reduce the risk of total data loss); I can't decide. But if I want to use an SSD as a cache disk (to increase write speed), then I need all my vdevs in one pool. I have a 2.5Gbit network for the NAS: one 2.5Gbit network card in my NAS and one in my PC. Both my PC and the NAS are also connected to my router over 1Gbit. So in theory, the NAS can serve my PC at 2.5Gbit and the rest of the network at 1Gbit.

I have 30 SATA ports in total. I have an NVMe USB 3.0 disk. Is it safe to use it as the boot disk? If not, I will have to buy a PCIe (1x) to SATA expander. I have one PCIe 1x slot left on my motherboard, so I can't use a SAS HBA card; those are PCIe 4x.
 
Last edited:

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
USB thumb drives are reported to fail under the load that TrueNAS puts on them.
To save a SATA port, the easiest solution is to boot from a small NVMe M.2 disk. If you have no M.2 slot onboard, there are $3 adapters on eBay to use an M.2 in a PCIe slot.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
But if I want to use an SSD as a cache disk (to increase write speed), then I need all my vdevs in one pool.
A ZFS SLOG device doesn't improve write speeds. It's not a write performance cache; it's there for safety, to ensure that the data it holds gets written to disk if the system loses power. That's why SLOG devices need built-in power-loss protection.


If you want to increase the read performance of one of your pools, you can attach your SSD to that pool as an L2ARC cache device.
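Adding and later removing an L2ARC device is non-destructive, so it is safe to experiment (hypothetical names: pool tank, device nvme0n1):

zpool add tank cache nvme0n1
zpool remove tank nvme0n1

Cache devices can be removed at any time without risk to the pool's data.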
 

huseyinozsut

Dabbler
Joined
Jul 12, 2021
Messages
24
USB thumb drives are reported to fail under the load that TrueNAS puts on them.
To save a SATA port, the easiest solution is to boot from a small NVMe M.2 disk. If you have no M.2 slot onboard, there are $3 adapters on eBay to use an M.2 in a PCIe slot.
I know thumb drives tend to fail, but I have a USB-to-NVMe adapter and I will use an NVMe disk over USB. I will not use a thumb drive. Can it still fail just because it is used over USB?
 

QonoS

Explorer
Joined
Apr 1, 2021
Messages
87
I know thumb drives tend to fail, but I have a USB-to-NVMe adapter and I will use an NVMe disk over USB. I will not use a thumb drive. Can it still fail just because it is used over USB?
Yes, it can fail. The problematic component is the USB-to-NVMe bridge chip. Those are not made to be used 24/7, and their reliability is about as bad as that of USB thumb drives.
 

huseyinozsut

Dabbler
Joined
Jul 12, 2021
Messages
24
Yes, it can fail. The problematic component is the USB-to-NVMe bridge chip. Those are not made to be used 24/7, and their reliability is about as bad as that of USB thumb drives.
Thank you for the info. I have a very crappy PCIe SATA card: it has a single SATA controller chip and a SATA port multiplier chip, giving 4 SATA ports. I tried it, and I can use it as a single-port SATA expander (it does not cause any problems when used with one disk). So I have 31 SATA ports right now, and that is the minimum needed for my setup.
 

huseyinozsut

Dabbler
Joined
Jul 12, 2021
Messages
24
Thank you all!!

You literally jumpstarted me. :smile:

I created a new pool with one vdev. I used six 3TB disks in a raidz1 setup. It gave me 12.6 TiB of space (shouldn't it be 13.65 TiB, since a 3TB disk gives 2.73 TiB and 5*2.73 = 13.65? Not very important, but I got curious). I opened up about 12.5TB of space in my other pool and started to remove my first 6TB disk from the pool. With the free space in the pool that I eventually want to destroy, I can remove three 6TB disks. I have some disks with bad sectors which I won't be using; I will hook them up to my PC and use them to open up some more space. My fourth 6TB disk will be removed with the help of these bad disks. My other two 6TB disks are not in this pool and are empty. So I will be able to create my second vdev with six 6TB disks. After that, it will be much easier to empty some space and remove vdevs.

Eventually I will have two 2TB, one 3TB, one 4TB, and one 6TB raidz1 vdevs, each consisting of 6 disks.

I am not sure whether to combine them in one pool or create one pool for each. What would you say? Is it risky to combine all vdevs in one pool? As I understand it, it will not give me any increase in performance; it will just reduce the mess (one big disk instead of 5 smaller disks). But if two disks in one vdev go bad, all of my data is lost. I could not decide.

Also, if I change my plans, a raidz1 vdev can't be removed from the pool, as I understand it. If that is the case, then for increased flexibility I will create one pool for each vdev.
 
Last edited:

QonoS

Explorer
Joined
Apr 1, 2021
Messages
87
Pool performance scales with the number of VDEVs, since ZFS distributes I/O across the available VDEVs. So performance will increase if all are in one pool.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
I am not sure whether to combine them in one pool or create one pool for each. What would you say? Is it risky to combine all vdevs in one pool? As I understand it, it will not give me any increase in performance; it will just reduce the mess (one big disk instead of 5 smaller disks). But if two disks in one vdev go bad, all of my data is lost. I could not decide.
More vdevs will increase I/O performance, thus performance with multiple clients. You're correct that losing two disks in the same vdev will lose the pool. As you understand the risks, you're the only one who can decide (risk vs. convenience).

One more note: It is not possible to remove vdevs with raidz members, so, if you go for one large pool, be very, very careful when adding the new vdevs; any mistake would be irreversible.
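One safeguard when adding: zpool add has a dry-run flag that prints the layout that would result without committing to it (hypothetical names):

zpool add -n tank raidz1 da10 da11 da12 da13 da14

Check that the preview shows a new raidz1 vdev, and not a stripe of bare disks, before running it for real.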
 

huseyinozsut

Dabbler
Joined
Jul 12, 2021
Messages
24
More vdevs will increase I/O performance, thus performance with multiple clients. You're correct that losing two disks in the same vdev will lose the pool. As you understand the risks, you're the only one who can decide (risk vs. convenience).

One more note: It is not possible to remove vdevs with raidz members, so, if you go for one large pool, be very, very careful when adding the new vdevs; any mistake would be irreversible.
OK. So for increased flexibility I will create one pool for each vdev. Thank you for your help.

There are a few things left. I want to increase my SATA ports. I found an LSI 9212-4i4e card, and I have an SFF-8088 cable, so I can add 8 more disks to this setup (in the future). But I have only one PCIe 1x slot left, and LSI cards are PCIe 8x. So after completing my vdevs, I will try to use my older LSI card (3Gbit/s) with a PCIe riser; my riser is PCIe 1x to 16x. If it works, then I will buy the LSI 9212. I don't want to change my mainboard: mainboards with more than two PCIe 16x slots are extremely expensive in my country, and I won't pay 80 dollars (used) for an ancient LGA1155 mainboard.

------------------

I also want to set up a Windows 10 virtual machine. I have an i5-3570 CPU and 16GB of RAM. I can increase the RAM to 24 or 32GB and use 8GB of it for the virtual machine. Can I use the TrueNAS boot disk for the Windows 10 virtual machine as well, or would I need another SSD for that? I have a 128GB Kingston SSD for the TrueNAS boot, and I think 30-40GB is more than enough for TrueNAS. So, if I could create a partition out of the rest (let's say 80GB), that would be more than enough for Windows 10.
 
Last edited: