remove drives 1 disk at a time


tlipur

Dabbler
Joined
Jul 9, 2012
Messages
11
Hello folks. I'd like to remove a couple of drives from my server.

I have 3 spare drives that are not in use, but I don't know which ones they are. Is there a way to identify them from the command prompt so I can remove them?

Another thing: if I choose to remove a disk from the GUI that has data on it, will all the contents of that disk be distributed to the other disks in the NAS before it can be removed, or does it just delete the contents of that disk?
 

Caesar

Contributor
Joined
Feb 22, 2013
Messages
114
You could shut the server down, pull the drives, and record the serial numbers. View Disks will list the serials in the GUI. BTW, there is no way anyone can say whether you will or will not lose data without knowing your configuration; all we know is that you have 3 spare drives.
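If you'd rather not shut down and pull drives, the serials can usually be read from the shell as well. A minimal sketch (smartctl ships with FreeNAS; the adaN names come from camcontrol devlist):
Code:
# Print the model and serial of one disk; repeat for each adaN
smartctl -i /dev/ada0 | grep -i serial

# Or the same information straight from the CAM layer
camcontrol identify ada0 | grep -i serial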
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
tlipur said:
Hello folks. I'd like to remove a couple of drives from my server.

I have 3 spare drives that are not in use, but I don't know which ones they are. Is there a way to identify them from the command prompt so I can remove them?

From the GUI, Storage->Volumes->View Disks will show you the serial numbers most of the time.

tlipur said:
Another thing: if I choose to remove a disk from the GUI that has data on it, will all the contents of that disk be distributed to the other disks in the NAS before it can be removed, or does it just delete the contents of that disk?

No, DON'T DO THIS!! This is not how ZFS works; you will lose data. If you are REPLACING a bad disk, follow the procedure in the documentation: the disk needs to be taken offline the correct way, and the new disk needs time to rebuild (resilver) before the next disk is taken offline. Each disk can take HOURS to resilver, depending on your configuration and how much data you have.

http://doc.freenas.org/index.php/Volumes#Replacing_a_Failed_Drive_or_Zil_Device
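For reference, the GUI drives that procedure, but underneath it boils down to something like this (the labels here are placeholders, not real gptids):
Code:
# Take the failed member offline the correct way
zpool offline HOME gptid/<old-disk>
# Swap the hardware, then resilver onto the replacement
zpool replace HOME gptid/<old-disk> gptid/<new-disk>
# Watch progress; wait for the resilver to finish before touching another disk
zpool status HOME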
 

tlipur

Dabbler
Joined
Jul 9, 2012
Messages
11
ProtoSD said:
From the GUI, Storage->Volumes->View Disks will show you the serial numbers most of the time.

No, DON'T DO THIS!! This is not how ZFS works; you will lose data. If you are REPLACING a bad disk, follow the procedure in the documentation: the disk needs to be taken offline the correct way, and the new disk needs time to rebuild (resilver) before the next disk is taken offline. Each disk can take HOURS to resilver, depending on your configuration and how much data you have.

http://doc.freenas.org/index.php/Volumes#Replacing_a_Failed_Drive_or_Zil_Device

Sweet. I have 4 drives that are not in use; they have no description listed in the box. All the other drives have a description that says "member of raidz". These disks were for backup... how can I remove them?

I don't have any bad disks, and I have 2 TB of free space not including these drives.

This may sound confusing, but what I wanted to do was remove these 4 drives from FreeNAS and format each one in Windows to copy data to. Once those disks were full, I was hoping to empty out a disk from the server so I could then remove that disk, and follow suit until all the data is written out to the disks pulled from the server.

Is there a way to do that without losing data?

Thanks in advance
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Nope, it won't work like you think. The zpool will have to be empty before you remove any disks from it, or you will lose data.
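ZFS itself enforces this; versions from this era flatly refuse to shrink a pool by removing a raidz vdev (error text approximate):
Code:
zpool remove HOME raidz1-2
# cannot remove raidz1-2: only inactive hot spares, cache, top-level, or log devices can be removed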
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
Use [code][/code] tags. From an SSH session as root, post the output of the following:
Code:

camcontrol devlist   # lists every disk the CAM layer sees, with model strings

glabel status        # maps the gptid labels ZFS uses to adaN device nodes

gpart show           # shows the partition layout of each disk
 

tlipur

Dabbler
Joined
Jul 9, 2012
Messages
11
Code:
[root@freenas] ~# camcontrol devlist
<WDC WD20EADS-00S2B0 01.00A01>     at scbus0 target 0 lun 0 (pass0,ada0)
<WDC WD20EARS-22MVWB0 51.0AB51>    at scbus0 target 1 lun 0 (pass1,ada1)
<WDC WD20EARS-00MVWB0 51.0AB51>    at scbus0 target 2 lun 0 (pass2,ada2)
<ST32000542AS CC34>                at scbus0 target 3 lun 0 (pass3,ada3)
<Port Multiplier 37261095 1706>    at scbus0 target 15 lun 0 (pass4,pmp0)
<ST2000DL004 HD204UI 1AQ10001>     at scbus1 target 0 lun 0 (pass5,ada4)
<WDC WD20EADS-00R6B0 01.00A01>     at scbus1 target 1 lun 0 (pass6,ada5)
<SAMSUNG HD204UI 1AQ10001>         at scbus1 target 2 lun 0 (pass7,ada6)
<SAMSUNG HD204UI 1AQ10001>         at scbus1 target 3 lun 0 (pass8,ada7)
<Port Multiplier 37261095 1706>    at scbus1 target 15 lun 0 (pass9,pmp1)
<WDC WD15EADS-00P8B0 01.00A01>     at scbus2 target 0 lun 0 (pass10,ada8)
<SAMSUNG HD103SI 1AG01118>         at scbus3 target 0 lun 0 (pass11,ada9)
<ST31500341AS CC1H>                at scbus4 target 0 lun 0 (pass12,ada10)
<ST31500341AS CC1H>                at scbus4 target 1 lun 0 (pass13,ada11)
<WDC WD15EADS-00P8B0 01.00A01>     at scbus5 target 0 lun 0 (pass14,ada12)
<ST31500341AS CC1H>                at scbus5 target 1 lun 0 (pass15,ada13)
<ST31500341AS CC1H>                at scbus6 target 0 lun 0 (pass16,ada14)
<ST31500541AS CC32>                at scbus7 target 0 lun 0 (pass17,ada15)
< Patriot Memory PMAP> 



Code:
[root@freenas] ~# glabel status
                                      Name  Status  Components
gptid/019b292f-c496-11e1-bb9f-003018ac7583     N/A  ada0p2
gptid/0209bc45-c496-11e1-bb9f-003018ac7583     N/A  ada1p2
gptid/028a810a-c496-11e1-bb9f-003018ac7583     N/A  ada2p2
gptid/02f21011-c496-11e1-bb9f-003018ac7583     N/A  ada3p2
gptid/95e881c3-c6fa-11e1-a774-003018ac7583     N/A  ada4p2
gptid/94d83d02-c6fa-11e1-a774-003018ac7583     N/A  ada5p2
gptid/955c79c3-c6fa-11e1-a774-003018ac7583     N/A  ada6p2
gptid/967f309f-c6fa-11e1-a774-003018ac7583     N/A  ada7p2
gptid/e81b2f93-c170-11e1-9ef8-003018ac7583     N/A  ada8p2
gptid/e87dac71-c170-11e1-9ef8-003018ac7583     N/A  ada10p2
gptid/e8d3d13d-c170-11e1-9ef8-003018ac7583     N/A  ada11p2
gptid/e97e420b-c170-11e1-9ef8-003018ac7583     N/A  ada12p2
gptid/e9e4f1fb-c170-11e1-9ef8-003018ac7583     N/A  ada13p2
gptid/ea421fd6-c170-11e1-9ef8-003018ac7583     N/A  ada14p2
gptid/ea9b72db-c170-11e1-9ef8-003018ac7583     N/A  ada15p2
                             ufs/FreeNASs3     N/A  da0s3
                             ufs/FreeNASs4     N/A  da0s4
                            ufs/FreeNASs1a     N/A  da0s1a


Code:
[root@freenas] ~# gpart show
=>        34  3907029101  ada0  GPT  (1.8T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834703     2  freebsd-zfs  (1.8T)

=>        34  3907029101  ada1  GPT  (1.8T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834703     2  freebsd-zfs  (1.8T)

=>        34  3907029101  ada2  GPT  (1.8T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834703     2  freebsd-zfs  (1.8T)

=>        34  3907029101  ada3  GPT  (1.8T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834703     2  freebsd-zfs  (1.8T)

=>        34  3907029101  ada4  GPT  (1.8T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834703     2  freebsd-zfs  (1.8T)

=>        34  3907029101  ada5  GPT  (1.8T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834703     2  freebsd-zfs  (1.8T)

=>        34  3907029101  ada6  GPT  (1.8T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834703     2  freebsd-zfs  (1.8T)

=>        34  3907029101  ada7  GPT  (1.8T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834703     2  freebsd-zfs  (1.8T)

=>        34  2930277101  ada8  GPT  (1.4T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  2926082703     2  freebsd-zfs  (1.4T)

=>        34  2930277101  ada10  GPT  (1.4T)
          34          94         - free -  (47k)
         128     4194304      1  freebsd-swap  (2.0G)
     4194432  2926082703      2  freebsd-zfs  (1.4T)

=>        34  2930277101  ada11  GPT  (1.4T)
          34          94         - free -  (47k)
         128     4194304      1  freebsd-swap  (2.0G)
     4194432  2926082703      2  freebsd-zfs  (1.4T)

=>        34  2930277101  ada12  GPT  (1.4T)
          34          94         - free -  (47k)
         128     4194304      1  freebsd-swap  (2.0G)
     4194432  2926082703      2  freebsd-zfs  (1.4T)

=>        34  2930277101  ada13  GPT  (1.4T)
          34          94         - free -  (47k)
         128     4194304      1  freebsd-swap  (2.0G)
     4194432  2926082703      2  freebsd-zfs  (1.4T)

=>        34  2930277101  ada14  GPT  (1.4T)
          34          94         - free -  (47k)
         128     4194304      1  freebsd-swap  (2.0G)
     4194432  2926082703      2  freebsd-zfs  (1.4T)

=>        34  2930277101  ada15  GPT  (1.4T)
          34          94         - free -  (47k)
         128     4194304      1  freebsd-swap  (2.0G)
     4194432  2926082703      2  freebsd-zfs  (1.4T)

=>      63  15646657  da0  MBR  (7.5G)
        63   1930257    1  freebsd  [active]  (942M)
   1930320        63       - free -  (31k)
   1930383   1930257    2  freebsd  (942M)
   3860640      3024    3  freebsd  (1.5M)
   3863664     41328    4  freebsd  (20M)
   3904992  11741728       - free -  (5.6G)

=>      0  1930257  da0s1  BSD  (942M)
        0       16         - free -  (8.0k)
       16  1930241      1  !0  (942M)
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
I should have also had you post "zpool status -v", which you can still do, but at the moment it looks like my original assumption was correct: you can't remove those drives. Post the output and that should confirm it.
 

tlipur

Dabbler
Joined
Jul 9, 2012
Messages
11
Darn, that doesn't sound good... lol
Code:
[root@freenas] ~# zpool status -v
  pool: HOME
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        HOME                                            ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/e81b2f93-c170-11e1-9ef8-003018ac7583  ONLINE       0     0     0
            gptid/e87dac71-c170-11e1-9ef8-003018ac7583  ONLINE       0     0     0
            gptid/e8d3d13d-c170-11e1-9ef8-003018ac7583  ONLINE       0     0     0
            gptid/e97e420b-c170-11e1-9ef8-003018ac7583  ONLINE       0     0     0
            gptid/e9e4f1fb-c170-11e1-9ef8-003018ac7583  ONLINE       0     0     0
            gptid/ea421fd6-c170-11e1-9ef8-003018ac7583  ONLINE       0     0     0
            gptid/ea9b72db-c170-11e1-9ef8-003018ac7583  ONLINE       0     0     0
          raidz1-1                                      ONLINE       0     0     0
            gptid/019b292f-c496-11e1-bb9f-003018ac7583  ONLINE       0     0     0
            gptid/0209bc45-c496-11e1-bb9f-003018ac7583  ONLINE       0     0     0
            gptid/028a810a-c496-11e1-bb9f-003018ac7583  ONLINE       0     0     0
            gptid/02f21011-c496-11e1-bb9f-003018ac7583  ONLINE       0     0     0
          raidz1-2                                      ONLINE       0     0     0
            gptid/94d83d02-c6fa-11e1-a774-003018ac7583  ONLINE       0     0     0
            gptid/955c79c3-c6fa-11e1-a774-003018ac7583  ONLINE       0     0     0
            gptid/95e881c3-c6fa-11e1-a774-003018ac7583  ONLINE       0     0     0
            gptid/967f309f-c6fa-11e1-a774-003018ac7583  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        HOME/:<0x0>
        /mnt/HOME/.mkv
        /mnt/HOME/ftp/t.json
[root@freenas] ~#



As for the error message, can I purge it? My external enclosure went haywire and one disk was not being read. I got it to come back on; it was a yellow status and went back to green...

Thanks for the help
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
Sorry to confirm it, but you're kind of stuck: you can't remove those drives without losing data. The good news is that you didn't do it before asking! :)

You'll have to move everything off before you can reallocate those disks.
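As for the pool errors you asked about: once the damaged files are deleted or restored, the usual way to reset the counters is roughly this (a sketch; a clean scrub is what actually confirms the pool is healthy again):
Code:
# Reset the pool's error counters after dealing with the damaged files
zpool clear HOME
# Re-read every block to verify the errors are really gone; this can take hours
zpool scrub HOME
zpool status -v HOME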
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Now would be a great time to consider RAIDZ2 or RAIDZ3. :P RAIDZ1 has caused some data loss lately, because if you lose 1 disk and another disk has any errors, you will lose data. :P
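Note there's no in-place conversion: a RAIDZ1 vdev can't be upgraded to RAIDZ2. You'd back everything up, destroy the pool, and recreate it. From the CLI that would look roughly like this (the GUI volume manager is the normal route; pool and device names here are hypothetical):
Code:
# After backing up and destroying the old pool, recreate it as RAIDZ2
zpool create NEWPOOL raidz2 ada0 ada1 ada2 ada3 ada4 ada5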
 

tlipur

Dabbler
Joined
Jul 9, 2012
Messages
11
If I do, how do I change my setup for RAIDZ2 or 3?

Another thing: how would removing disks that don't have a description lose data? They're not in use; they're just there for backup. I added 4 drives at once and 1 was for backup. Added another 4 drives, and again one was for backup. Added the rest of the drives and had two other drives for backup...
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
If you had added the drives and created a new zpool instead of adding them to your HOME zpool, you would have been able to remove them. Once you link a drive to a pool, that pool will always require that drive. It's crazy that it can't come back out, but that is ZFS for you.
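To illustrate the difference (device and pool names hypothetical): a disk in its own pool can leave cleanly, while a raidz member cannot:
Code:
# A standalone pool can be exported and its disk pulled at any time
zpool create SCRATCH ada16
zpool export SCRATCH
# But a single disk inside a raidz vdev can never be detached from its pool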
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Take a moment and read cyberjock's guide before you post again. There's a link to it in his signature.

I finally understand what you mean by "backup". The better word would be "parity". RAIDZ actually spreads parity across every drive in a vdev rather than dedicating whole disks to it, but one disk's worth of capacity per vdev goes to fault tolerance against a single drive failure. And those drives do contain data.

---------------

Note to the rest of us: I believe the 3 drives the OP is talking about are the "parity" drives in each of his vdevs. I originally thought he was talking about one of his vdevs (like the 3rd one).
 