Removing a disk

Tasmana

Dabbler
Joined
Jul 26, 2020
Messages
25
I have a pool that I expanded with a disk. Now the need for additional disk space has disappeared. How can I remove this drive from the pool?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Output of zpool status, please.
 

Tasmana

Dabbler
Joined
Jul 26, 2020
Messages
25
Output of zpool status, please.

The disks are provisioned in VMware. If you disable the pool and remove the disk (da2p2) from VMware and then power it back on, the pool falls apart and is no longer displayed.
 

Attachments

  • pool_status.png

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
This is a pool consisting of two single disk vdevs, so you cannot remove a disk without destroying the pool.

P.S. Next time please enter the command zpool status as asked and copy & paste the resulting text output into a "code" block. Thanks!
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
you cannot remove a disk without destroying the pool

Great guidance, but in the spirit of "why not make this pool worse than it already is" :eek:, they could "zpool remove" the second disk, greatly increasing RAM usage, and then "zpool attach" it to the remaining disk, creating redundancy. A bit heavy on the RAM, but redundancy is added, so is it really much worse?
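A minimal sketch of that sequence, assuming a hypothetical pool named tank with single-disk vdevs da1 and da2 (a real TrueNAS system would use gptid labels rather than raw device names, and the removal runs in the background):
Code:
# evacuate the second single-disk vdev; its data is copied to the remaining vdev
zpool remove tank da2

# once zpool status shows the removal has completed, attach the freed disk
# to the remaining disk, turning the single-disk vdev into a two-way mirror
zpool attach tank da1 da2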
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
@Yorick Sorry, I don't get at all what you are suggesting. Isn't this a pool consisting of two single disk vdevs? In that case removal is not possible.
 

Tasmana

Dabbler
Joined
Jul 26, 2020
Messages
25
This is a pool consisting of two single disk vdevs, so you cannot remove a disk without destroying the pool.

P.S. Next time please enter the command zpool status as asked and copy & paste the resulting text output into a "code" block. Thanks!
Thank you for your comment! Like this?


But how can I create a pool of 2+ disks in such a way that I would be able to remove them again in the future?
pool_status.png
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Thank you for your comment! Like this?
Almost. Why are you posting a picture? Copy and paste the text. Like this:
Code:
root@freenas-pmh[~]# zpool status ssd
  pool: ssd
 state: ONLINE
  scan: scrub repaired 0B in 00:36:44 with 0 errors on Sat Sep 26 17:16:47 2020
config:

    NAME                                            STATE     READ WRITE CKSUM
    ssd                                             ONLINE       0     0     0
      mirror-0                                      ONLINE       0     0     0
        gptid/8d299abe-e22e-11ea-9ee7-ac1f6b76641c  ONLINE       0     0     0
        gptid/0c661dcc-e247-11ea-b73e-ac1f6b76641c  ONLINE       0     0     0

errors: No known data errors


But back to your question: what are you trying to achieve? Of course you can create a pool of more than one disk. But you have to consider the level of redundancy and performance constraints, first. A pool consists of vdevs and in general you cannot remove a vdev afterwards. Because if that was possible ZFS would need to move data around for removal. And ZFS does not needlessly write data that is already on stable storage. What is written in a certain place stays there unless it is changed in some way. Then the changed data is written to a new place (copy-on-write) and the old space freed.

A vdev consists of one or more disks that can be configured as single disks, mirrors, various RAIDZn levels ...

But you always need to plan ahead. Adding vdevs is easy, but adding and removing disks at will is not a usage scenario that ZFS is designed for.
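As a rough illustration of that planning (hypothetical pool name tank and device names da1 through da4, not taken from this thread), you would create the pool with redundancy from the start and later grow it by adding another redundant vdev:
Code:
# create the pool with one mirrored vdev, so it is redundant from day one
zpool create tank mirror da1 da2

# later: grow capacity by adding a second mirrored vdev to the pool
zpool add tank mirror da3 da4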

You might want to read this:

HTH,
Patrick
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Why wouldn't removal be possible? Top-level vdevs can be removed, as long as they are not raidz. A single disk is a special case of a mirror under the hood. I'll test to make sure and ... I think this should work.

It will eat RAM, though.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
OK, I see:
Code:
root@freenas-pmh[/mnt/hdd]# zpool remove testpool /mnt/hdd/disk1
root@freenas-pmh[/mnt/hdd]# zpool status testpool               
  pool: testpool
 state: ONLINE
remove: Removal of vdev 0 copied 37K in 0h0m, completed on Fri Oct  2 14:05:08 2020
    120 memory used for removed device mappings


When was this feature introduced? This is complete news to me; I am almost 100% sure there was a time when removal of anything was fundamentally impossible, because ZFS would never copy data around ...
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Works exactly as expected. Now, a bare "attach" from the CLI is obviously a bad idea on a real system because of gptid labels and such. TrueNAS Core can instead do an "Extend" from the UI for the disk (not the pool), which creates that mirror.


Code:
root@truenas[~]# truncate -s 1T zfs-sparse-0
root@truenas[~]# truncate -s 1T zfs-sparse-1

root@truenas[~]# zpool create BadIdeaMan /root/zfs-sparse-0 /root/zfs-sparse-1
root@truenas[~]# zpool status
  pool: BadIdeaMan
 state: ONLINE
config:

        NAME                  STATE     READ WRITE CKSUM
        BadIdeaMan            ONLINE       0     0     0
          /root/zfs-sparse-0  ONLINE       0     0     0
          /root/zfs-sparse-1  ONLINE       0     0     0
          
root@truenas[~]# zpool remove BadIdeaMan /root/zfs-sparse-1
root@truenas[~]# zpool status
  pool: BadIdeaMan
 state: ONLINE
remove: Removal of vdev 1 copied 63K in 0h0m, completed on Fri Oct  2 08:09:35 2020
    216 memory used for removed device mappings
config:

        NAME                  STATE     READ WRITE CKSUM
        BadIdeaMan            ONLINE       0     0     0
          /root/zfs-sparse-0  ONLINE       0     0     0

errors: No known data errors

root@truenas[~]# zpool attach BadIdeaMan /root/zfs-sparse-0 /root/zfs-sparse-1
root@truenas[~]# zpool status
  pool: BadIdeaMan
 state: ONLINE
  scan: resilvered 278K in 00:00:00 with 0 errors on Fri Oct  2 08:10:17 2020
remove: Removal of vdev 1 copied 63K in 0h0m, completed on Fri Oct  2 08:09:35 2020
    216 memory used for removed device mappings
config:

        NAME                    STATE     READ WRITE CKSUM
        BadIdeaMan              ONLINE       0     0     0
          mirror-0              ONLINE       0     0     0
            /root/zfs-sparse-0  ONLINE       0     0     0
            /root/zfs-sparse-1  ONLINE       0     0     0

errors: No known data errors
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
ZFS still doesn't really copy data around. Well, it does, but only once: removal copies the blocks to the remaining vdevs and then keeps an indirection layer that maps the old locations to the new ones, and that mapping is where all the used RAM comes in.

This was added ... a year ago? A little less? It was a big to-do when it landed. Singles and mirrors only, but still, this has saved people's bacon.

RAM use decreases as the files / blocks that were "moved but not moved" are deleted or rewritten, because copy-on-write then drops their entries from the mapping.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
I'd say, in order of "best to wurst":

- Blow pool away and start over with mirror - fine if data can be copied
- Remove second vdev and make first vdev a mirror - works, needs RAM, make sure you have enough
- Remove second vdev and keep running without redundancy - don't do that
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
The intro that @Patrick M. Hausen linked is a good one. Here's one with pretty pictures: https://arstechnica.com/information...01-understanding-zfs-storage-and-performance/

TL;DR: A ZFS pool has one or more vdevs. Redundancy exists exclusively at vdev level. You created a pool with two non-redundant, single-disk vdevs. Removing the second vdev and then making the first vdev a mirror vdev is possible, ZFS will use RAM to make it happen. You likely want at least 16GiB, better 32GiB, in that server.
 

Tasmana

Dabbler
Joined
Jul 26, 2020
Messages
25
Almost. Why are you posting a picture? Copy and paste the text. Like this:
Code:
root@freenas-pmh[~]# zpool status ssd
  pool: ssd
state: ONLINE
  scan: scrub repaired 0B in 00:36:44 with 0 errors on Sat Sep 26 17:16:47 2020
config:

    NAME                                            STATE     READ WRITE CKSUM
    ssd                                             ONLINE       0     0     0
      mirror-0                                      ONLINE       0     0     0
        gptid/8d299abe-e22e-11ea-9ee7-ac1f6b76641c  ONLINE       0     0     0
        gptid/0c661dcc-e247-11ea-b73e-ac1f6b76641c  ONLINE       0     0     0

errors: No known data errors


But back to your question: what are you trying to achieve? Of course you can create a pool of more than one disk. But you have to consider the level of redundancy and performance constraints, first. A pool consists of vdevs and in general you cannot remove a vdev afterwards. Because if that was possible ZFS would need to move data around for removal. And ZFS does not needlessly write data that is already on stable storage. What is written in a certain place stays there unless it is changed in some way. Then the changed data is written to a new place (copy-on-write) and the old space freed.

A vdev consists of one or more disks that can be configured as single disks, mirrors, various RAIDZn levels ...

But you always need to plan ahead. Adding vdevs is easy, but adding and removing disks at will is not a usage scenario that ZFS is designed for.

You might want to read this:

HTH,
Patrick

Oh, sorry, force of habit ...
I want to migrate user file storage from Windows Server to FreeNAS; the choice fell on FreeNAS right away.
I like most of how it works so far, but the fact that I could not simply remove the drive alarmed me.
There are still 12 terabytes in reserve, but what if the FreeNAS pool grows and at some point I need to present that disk space to another virtual machine (assuming, of course, there is more than one disk in the pool)? I will study more. Apparently I need to devote more time to details...
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Apparently I need to devote more time to details...

Very much so. ZFS rewards planning ahead and punishes not doing that.

You can also keep your pool as is and just attach another drive to each vdev, making them into mirror vdevs. You gain redundancy and keep the current capacity.
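A minimal sketch of that, assuming a hypothetical pool named tank whose two existing single-disk vdevs are da1 and da2, with da3 and da4 as the newly added disks (on TrueNAS you would use the UI's "Extend" action, or gptid labels, rather than raw device names):
Code:
# attach one new disk to each existing single-disk vdev,
# turning both vdevs into two-way mirrors; usable capacity stays the same
zpool attach tank da1 da3
zpool attach tank da2 da4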
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
@Tasmana you say VM - is this block storage for a hypervisor, via iSCSI or NFS? If so, read the "path to success with block storage" sticky. Among other pearls of wisdom, it explains why you really don't want to go above roughly 50% full in your pool if you are using block storage.
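As a small aside: keeping an eye on that fill level is easy from the CLI; zpool list shows allocated space, free space, and the fill percentage per pool (tank is again just a placeholder name):
Code:
# show size, allocation, free space, fill percentage, fragmentation and health
zpool list -o name,size,alloc,free,cap,frag,health tank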
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
This is a pool consisting of two single disk vdevs, so you cannot remove a disk without destroying the pool.
Yes, you can. man zpool and take a look at the remove command. Not available through the GUI AFAIK, but it can be done.

Edit: really should read the whole thread before replying, as I see this was already addressed. I believe it was introduced with 11.2. Unfortunately, the fact that it won't work with a RAIDZn vdev in the pool makes it useless in what would probably be the most commonly-needed scenario.
 

Tasmana

Dabbler
Joined
Jul 26, 2020
Messages
25
@Tasmana you say VM - is this block storage for a hypervisor, via iSCSI or nfs? If so, read the path to success with block storage sticky. Among other pearls of wisdom, it explains why you really don’t want to go above roughly 50% full in your pool if using block storage.

A SAN..

In the end I did remove the disk through the command line, but at the same time the binding to AD broke and the reports stopped working. :)
It's a good thing there was a restore point..
 