Removed jail storage, deleted everything!

Kingflathead

Cadet
Joined
Jun 17, 2017
Messages
6
Hello everyone, I have been setting up a FreeNAS 11 box and just was having a problem with a new jail containing Plex - I set this jail up less than 24 hours ago, and this is perhaps the 3rd attempt or so to get it going, but this is a problem that happened just now.

I saw that a subfolder in a Very Large directory called "Media" was empty. This was unexpected (about 4TB of data missing), so I, thinking that something was going horribly wrong inside the jail, decided to stop it and remove its storage. I had done this before, as far as I know to no ill effect, but this time everything inside the source folder (about 12TB total) simply vanished. Gone. I am freaking out a little bit. I've detached the zpool and I am very afraid to touch it. I have heard that there is some kind of black magic ZFS transaction rollback procedure that may work.

Before anyone says it, I did not have backups going yet, and the first snapshot had yet to take place. The disks that I evacuated all of this data from have been wiped and repurposed already, I finished my import several days ago and since everything seemed stable I considered it safe to do so... more fool me!

Has this condition ever been successfully recovered from? I have not overwritten anything, the system was new enough that there were not yet any client connections to the share. The data *should* still be on-disk but I don't know how to get at it.

Oh, and the volume consists of 4x 10TB spinners in a raidz1 and 4x240GB SSDs, two dedicated to each kind of ZFS cache. If any more details / logs are needed please let me know and I will produce them immediately.
 

Kingflathead

Cadet
Joined
Jun 17, 2017
Messages
6
I've been looking around - it seems like I need to use the "zpool import -T" option and specify an old txd from before the deletion. I still do not know how to query the filesystem for valid txds that I can try, but I've found a very hazardous looking Python script at https://gist.github.com/jshoward/5685757 that does this by erasing newer copies of the uberblocks... this seems a bit ill-advised, but I really want that data back and it looks like it dumps a list of txds before it actually does anything... seems like that might be useful, but I don't want to export / import this pool a bunch of times and overwrite all of my older txds in doing so. I don't know enough about ZFS to really get that deep into it yet.

I'm going to sleep now and try to figure this out some more in the morning.
 

Kingflathead

Cadet
Joined
Jun 17, 2017
Messages
6
So I have found that I can extract some information about txgs from "zpool history -i primary | grep txg", I have the zpool imported read-only at the moment... here's the last few entries:

2017-06-18.00:09:38 [txg:288810] inherit primary/.system/rrd-66311c036e824820af4
4b2dbf4c55f10 (206) mountpoint=/
2017-06-18.00:09:38 [txg:288811] inherit primary/.system/samba4 (67) mountpoint=
/
2017-06-18.00:09:38 [txg:288812] inherit primary/.system/syslog-530fc3acab424f40
b24e1db3a7827a67 (73) mountpoint=/
2017-06-18.00:09:38 [txg:288813] inherit primary/.system/syslog-66311c036e824820
af44b2dbf4c55f10 (154) mountpoint=/
2017-06-18.00:09:38 [txg:288814] inherit primary/Data (48) mountpoint=/
2017-06-18.00:09:38 [txg:288815] inherit primary/jails (97) mountpoint=/
2017-06-18.00:09:38 [txg:288816] inherit primary/jails/.warden-template-standard
(123) mountpoint=/
2017-06-18.00:09:39 [txg:288817] inherit primary/jails/Plex (234) mountpoint=/
2017-06-18.00:09:39 [txg:288818] set primary (21) aclmode=3
2017-06-18.00:09:39 [txg:288819] set primary (21) aclinherit=3
2017-06-18.00:09:44 [txg:288820] set primary/.system (55) mountpoint=legacy
2017-06-18.00:11:20 [txg:288840] clone primary/jails/.warden-template-standard-c
lean-clone (218) origin=primary/jails/.warden-template-standard@clean (126)
2017-06-18.00:20:10 [txg:288924] open pool version 5000; software version 5000/5
; uts wednesday.kaitain.net 11.0-STABLE 1100512 amd64
2017-06-18.00:20:10 [txg:288927] import pool version 5000; software version 5000
/5; uts wednesday.kaitain.net 11.0-STABLE 1100512 amd64

...about to try importing 288820 read-only, let's see what happens.
 

styno

Patron
Joined
Apr 11, 2016
Messages
466
You are on the right path with the read-only import of the pool. This is the very first thing you need to do now although I have to tell you from the start that a recovery like this is more miss then hit. So, if it is possible to reacquire this data from any other source (even if it will be time consuming) you may want to skip to that part immediately.

Anyway, we need to know exactly what went wrong in order to try to recover any of your data. You told that there was data missing before you removed the storage. How did you remove the storage from the jail? Did you remove a dataset? So many questions but can you post the complete output of zpool history (within forum code tags to make it readable?) so we know exactly what happened and when?
 

Kingflathead

Cadet
Joined
Jun 17, 2017
Messages
6
I will do so as soon as I am able... this import has been running for about an hour. It looks like it's using a few percent of my CPU and about 100MB of memory, so I assume it's doing *something*, but I need to leave it be for a while until it finishes... I assume it's trying to checksum double-digit TB of data or something like that, so hopefully it'll get there eventually. It's showing as disk-bound uninterruptible in ps, at least, so that's nice?

I'm really hoping that the nonexistent IO load that this thing was under since the incident helps me - there are truly no other backups, the old disks haven't just been reformatted or something like that, they were zero-wiped, so.,. kind of stuck on that one.

As far as the missing data, I am not really sure. It may have happened hours and hours ago (Plex was having some trouble and was claiming that files were missing, but I didn't have time to investigate and when I got home a little while ago I noticed the 4TB of extra free space), and as far as the rest... I stopped the Plex jail, and then went over to Storage and un-mounted its storage that had my Media folder as it's source. I did not remove the dataset or otherwise monkey with the filesystems, and there are lots of other folders on the volume that remain accessible and unaffected.
 

Kingflathead

Cadet
Joined
Jun 17, 2017
Messages
6
It finally mounted, but I didn't go back far enough. Here's the zpool history:

Code:

root@wednesday:~ # zpool history -i primary | grep txg
2017-06-15.01:19:34 [txg:5] create pool version 5000; software version 5000/5; uts freenas.local 11.0-STABLE 1100512 amd64
2017-06-15.01:19:34 [txg:5] set Data (21) compression=15
2017-06-15.01:19:34 [txg:5] set Data (21) aclmode=3
2017-06-15.01:19:34 [txg:5] set Data (21) aclinherit=3
2017-06-15.01:19:34 [txg:5] set Data (21) mountpoint=/Data
2017-06-15.01:19:34 [txg:6] inherit Data (21) mountpoint=/
2017-06-15.01:19:35 [txg:7] create Data/Data (48)
2017-06-15.01:19:35 [txg:8] set Data/Data (48) aclmode=4
2017-06-15.01:19:42 [txg:10] create Data/.system (55)
2017-06-15.01:19:42 [txg:11] set Data/.system (55) mountpoint=legacy
2017-06-15.01:19:42 [txg:12] create Data/.system/cores (61)
2017-06-15.01:19:42 [txg:13] set Data/.system/cores (61) mountpoint=legacy
2017-06-15.01:19:42 [txg:14] create Data/.system/samba4 (67)
2017-06-15.01:19:42 [txg:15] set Data/.system/samba4 (67) mountpoint=legacy
2017-06-15.01:19:42 [txg:16] create Data/.system/syslog-530fc3acab424f40b24e1db3a7827a67 (73)
2017-06-15.01:19:42 [txg:17] set Data/.system/syslog-530fc3acab424f40b24e1db3a7827a67 (73) mountpoint=legacy
2017-06-15.01:19:42 [txg:18] create Data/.system/rrd-530fc3acab424f40b24e1db3a7827a67 (79)
2017-06-15.01:19:42 [txg:19] set Data/.system/rrd-530fc3acab424f40b24e1db3a7827a67 (79) mountpoint=legacy
2017-06-15.01:19:43 [txg:20] create Data/.system/configs-530fc3acab424f40b24e1db3a7827a67 (85)
2017-06-15.01:19:43 [txg:21] set Data/.system/configs-530fc3acab424f40b24e1db3a7827a67 (85) mountpoint=legacy
2017-06-15.01:38:50 [txg:142] open pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-15.01:38:50 [txg:144] import pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-15.01:40:47 [txg:148] open pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-15.01:40:47 [txg:150] import pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-15.01:43:57 [txg:154] open pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-15.01:43:58 [txg:156] import pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-15.02:11:22 [txg:2022] create Data/jails (97)
2017-06-15.09:54:57 [txg:79498] create Data/jails/.warden-template-standard (123)
2017-06-15.09:54:57 [txg:79499] set Data/jails/.warden-template-standard (123) mountpoint=/Data/jails/.warden-template-standard
2017-06-15.09:57:37 [txg:79945] snapshot Data/jails/.warden-template-standard@clean (126)
2017-06-15.09:58:25 [txg:80082] clone Data/jails/Plex (132) origin=Data/jails/.warden-template-standard@clean (126)
2017-06-15.10:10:03 [txg:81724] destroy Data/jails/Plex (132)
2017-06-15.10:12:10 [txg:82042] clone Data/jails/Plex (143) origin=Data/jails/.warden-template-standard@clean (126)
2017-06-15.10:18:00 [txg:83006] destroy Data/jails/Plex (143)
2017-06-15.10:21:23 [txg:83547] clone Data/jails/Plex (151) origin=Data/jails/.warden-template-standard@clean (126)
2017-06-15.23:18:27 [txg:138263] open pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-15.23:18:27 [txg:138266] import pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-16.14:09:44 [txg:264425] open pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-16.14:09:45 [txg:264428] import pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-16.14:17:45 [txg:264484] open pool version 5000; software version 5000/5; uts wednesday.kaitain.net 11.0-STABLE 1100512 amd64
2017-06-16.14:17:45 [txg:264487] import pool version 5000; software version 5000/5; uts wednesday.kaitain.net 11.0-STABLE 1100512 amd64
2017-06-16.14:19:20 [txg:264502] open pool version 5000; software version 5000/5; uts wednesday.kaitain.net 11.0-STABLE 1100512 amd64
2017-06-16.14:19:20 [txg:264504] import pool version 5000; software version 5000/5; uts wednesday.kaitain.net 11.0-STABLE 1100512 amd64
2017-06-16.14:19:21 [txg:264509] inherit primary (21) mountpoint=/
2017-06-16.14:19:21 [txg:264510] inherit primary/.system (55) mountpoint=/
2017-06-16.14:19:21 [txg:264511] inherit primary/.system/configs-530fc3acab424f40b24e1db3a7827a67 (85) mountpoint=/
2017-06-16.14:19:21 [txg:264512] inherit primary/.system/cores (61) mountpoint=/
2017-06-16.14:19:21 [txg:264513] inherit primary/.system/rrd-530fc3acab424f40b24e1db3a7827a67 (79) mountpoint=/
2017-06-16.14:19:21 [txg:264514] inherit primary/.system/samba4 (67) mountpoint=/
2017-06-16.14:19:22 [txg:264515] inherit primary/.system/syslog-530fc3acab424f40b24e1db3a7827a67 (73) mountpoint=/
2017-06-16.14:19:22 [txg:264516] inherit primary/Data (48) mountpoint=/
2017-06-16.14:19:22 [txg:264517] inherit primary/jails (97) mountpoint=/
2017-06-16.14:19:22 [txg:264518] inherit primary/jails/.warden-template-standard (123) mountpoint=/
2017-06-16.14:19:22 [txg:264519] inherit primary/jails/Plex (151) mountpoint=/
2017-06-16.14:19:22 [txg:264520] set primary (21) aclmode=3
2017-06-16.14:19:22 [txg:264521] set primary (21) aclinherit=3
2017-06-16.14:19:26 [txg:264522] set primary/.system (55) mountpoint=legacy
2017-06-16.14:20:14 [txg:264532] destroy primary/jails/Plex (151)
2017-06-16.14:33:48 [txg:264847] clone primary/jails/Plex (205) origin=primary/jails/.warden-template-standard@clean (126)
2017-06-16.14:34:04 [txg:264851] destroy primary/jails/Plex (205)
2017-06-16.14:43:07 [txg:264866] open pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-16.14:43:08 [txg:264869] import pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-16.14:54:09 [txg:265200] open pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-16.14:54:09 [txg:265203] import pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-16.15:04:46 [txg:265480] open pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-16.15:04:47 [txg:265483] import pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-16.15:19:01 [txg:265573] open pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-16.15:19:02 [txg:265576] import pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-16.15:39:24 [txg:265597] open pool version 5000; software version 5000/5; uts freenas.local 11.0-STABLE 1100512 amd64
2017-06-16.15:39:24 [txg:265599] import pool version 5000; software version 5000/5; uts freenas.local 11.0-STABLE 1100512 amd64
2017-06-16.15:39:24 [txg:265604] inherit primary (21) mountpoint=/
2017-06-16.15:39:24 [txg:265605] inherit primary/.system (55) mountpoint=/
2017-06-16.15:39:25 [txg:265606] inherit primary/.system/configs-530fc3acab424f40b24e1db3a7827a67 (85) mountpoint=/
2017-06-16.15:39:25 [txg:265607] inherit primary/.system/cores (61) mountpoint=/
2017-06-16.15:39:25 [txg:265608] inherit primary/.system/rrd-530fc3acab424f40b24e1db3a7827a67 (79) mountpoint=/
2017-06-16.15:39:25 [txg:265609] inherit primary/.system/samba4 (67) mountpoint=/
2017-06-16.15:39:25 [txg:265610] inherit primary/.system/syslog-530fc3acab424f40b24e1db3a7827a67 (73) mountpoint=/
2017-06-16.15:39:25 [txg:265611] inherit primary/Data (48) mountpoint=/
2017-06-16.15:39:25 [txg:265612] inherit primary/jails (97) mountpoint=/
2017-06-16.15:39:26 [txg:265613] inherit primary/jails/.warden-template-standard (123) mountpoint=/
2017-06-16.15:39:26 [txg:265614] set primary (21) aclmode=3
2017-06-16.15:39:26 [txg:265615] set primary (21) aclinherit=3
2017-06-16.15:39:30 [txg:265616] set primary/.system (55) mountpoint=legacy
2017-06-16.15:39:30 [txg:265617] create primary/.system/syslog-66311c036e824820af44b2dbf4c55f10 (154)
2017-06-16.15:39:30 [txg:265618] set primary/.system/syslog-66311c036e824820af44b2dbf4c55f10 (154) mountpoint=legacy
2017-06-16.15:39:30 [txg:265619] create primary/.system/rrd-66311c036e824820af44b2dbf4c55f10 (206)
2017-06-16.15:39:30 [txg:265620] set primary/.system/rrd-66311c036e824820af44b2dbf4c55f10 (206) mountpoint=legacy
2017-06-16.15:39:31 [txg:265621] create primary/.system/configs-66311c036e824820af44b2dbf4c55f10 (212)
2017-06-16.15:39:31 [txg:265622] set primary/.system/configs-66311c036e824820af44b2dbf4c55f10 (212) mountpoint=legacy
2017-06-16.15:45:58 [txg:265681] open pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-16.15:45:58 [txg:265684] import pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-16.15:51:28 [txg:265701] open pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-16.15:51:28 [txg:265704] import pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-17.12:17:07 [txg:280200] open pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-17.12:17:07 [txg:280203] import pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-17.12:55:47 [txg:280760] open pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-17.12:55:48 [txg:280763] import pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-17.13:01:49 [txg:280800] open pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-17.13:01:50 [txg:280803] import pool version 5000; software version 5000/5; uts  11.0-STABLE 1100512 amd64
2017-06-17.13:06:09 [txg:280856] clone primary/jails/Plex (218) origin=primary/jails/.warden-template-standard@clean (126)
2017-06-17.13:10:56 [txg:280918] destroy primary/jails/Plex (218)
2017-06-17.13:11:56 [txg:280930] clone primary/jails/Plex (226) origin=primary/jails/.warden-template-standard@clean (126)
2017-06-17.13:12:51 [txg:280943] destroy primary/jails/Plex (226)
2017-06-17.13:13:17 [txg:280949] clone primary/jails/Plex (234) origin=primary/jails/.warden-template-standard@clean (126)

 

styno

Patron
Joined
Apr 11, 2016
Messages
466
Just to be sure, you are not mixed up between Data and primary/Data?
Can we see the output of zpool status & zfs list and can you comment on where you dropped the data (and where it is gone now)?
 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
I find this whole thread very odd. Removing storage from a jail would not cause data to be deleted. You've clearly inadvertently done something. Your only recourse at this time is to recover from backup or snapshot (if you had one).
 

Kingflathead

Cadet
Joined
Jun 17, 2017
Messages
6
Right, I figured it out. My data is gone forever, unfortunately. It appears that when a jail is deleted (as is sometimes done in troubleshooting a brand-new jail), everything in it, INCLUDING DATA IN MOUNT POINTS, is also recursively deleted. This is not expected behavior, and there are not warnings about this in the GUI. I have lost well over a decade of files... some of it can never be replaced. I can't mount transaction logs that far back and of course snapshots didn't help on a brand new instance of FreeNAS. The only reason it appeared that I still had anything is that Windows didn't re-enumerate the folders until I tried to drill down to them, as soon as I did the free space count adjusted - downward - 12TB worth.

Please, will someone change the jail delete behavior to unmount and *then* delete, or to insert an explicit warning that everything in any mount point attached will be destroyed? I've been running networks for a long time and data-destructive events are usually accompanied by a fairly stern warning. In this case, it may not be a technical failure, but the UX needs some love.
 

styno

Patron
Joined
Apr 11, 2016
Messages
466
First of all, sorry for your dataloss. That su**s bigtime
Second, ofcourse snapshots would've helped. You set them up from the beginning, hourly, recursive and keep them for at least two weeks.
Third, I don't believe the data in the mounts is scratched: the jail in question is stopped and the jail dataset is deleted.
IF what you describe is the case and you can reproduce it, that is major reason for concern and should be addressed immediately. But tbh I've never encountered that behavior and I have deleted many many many jails with mounted datasets.
 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
Right, I figured it out. My data is gone forever, unfortunately. It appears that when a jail is deleted (as is sometimes done in troubleshooting a brand-new jail), everything in it, INCLUDING DATA IN MOUNT POINTS, is also recursively deleted. This is not expected behavior, and there are not warnings about this in the GUI. I have lost well over a decade of files... some of it can never be replaced. I can't mount transaction logs that far back and of course snapshots didn't help on a brand new instance of FreeNAS. The only reason it appeared that I still had anything is that Windows didn't re-enumerate the folders until I tried to drill down to them, as soon as I did the free space count adjusted - downward - 12TB worth.

Please, will someone change the jail delete behavior to unmount and *then* delete, or to insert an explicit warning that everything in any mount point attached will be destroyed? I've been running networks for a long time and data-destructive events are usually accompanied by a fairly stern warning. In this case, it may not be a technical failure, but the UX needs some love.
I find this very doubtful. If this was the case we would have plenty more people complaining of data loss. I've deleted plenty of jails with storage mounted and it hasn't touched the data inside the dataset.

It would be helpful to know what version of FreeNAS you're currently running.
 

styno

Patron
Joined
Apr 11, 2016
Messages
466
Just to be sure, I deleted 4 jails on 11.0-RELEASE. Two jails deleted from the jail tab, two jails from the plugins tab. Additional datasets are mounted read-write.
The data is still there. I then noticed that the storage paths to the jails were not deleted and I deleted those as well. Once again, the data is still there.

No clue what you did, but removing jails is not deleting the data in the attached datasets.
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
Can you post the output of "zpool list" and "zfs list"

When someone claims their data has gone missing, there have been multiple times where it has turned out to be a case of a dataset hiding a folder.

If you did indeed delete a dataset, snapshots wouldn't have saved that data.
 
Joined
Oct 1, 2019
Messages
4
So I have the exact problem on 11.2-U6. I delete a jail in the new UI and the data set and all auto snapshots were deleted.


Code:
% sudo zpool history -i volume1 | grep plex/root@auto-20190930
2019-09-30.09:00:03 [txg:25925915] snapshot volume1/iocage/jails/plex/root@auto-20190930.0900-4w (151490)
            volume1/iocage/jails/plex/root@auto-20190930.0900-4w
2019-09-30.10:00:03 [txg:25927055] snapshot volume1/iocage/jails/plex/root@auto-20190930.1000-4w (357)
            volume1/iocage/jails/plex/root@auto-20190930.1000-4w
2019-09-30.11:16:03 [txg:25928175] snapshot volume1/iocage/jails/plex/root@auto-20190930.1116-4w (671)
            volume1/iocage/jails/plex/root@auto-20190930.1116-4w
2019-09-30.12:16:02 [txg:25929324] snapshot volume1/iocage/jails/plex/root@auto-20190930.1216-4w (989)
            volume1/iocage/jails/plex/root@auto-20190930.1216-4w
2019-09-30.13:16:02 [txg:25930360] snapshot volume1/iocage/jails/plex/root@auto-20190930.1316-4w (4302)
            volume1/iocage/jails/plex/root@auto-20190930.1316-4w
2019-09-30.14:16:02 [txg:25931179] snapshot volume1/iocage/jails/plex/root@auto-20190930.1416-4w (4546)
            volume1/iocage/jails/plex/root@auto-20190930.1416-4w
2019-09-30.15:16:02 [txg:25932321] snapshot volume1/iocage/jails/plex/root@auto-20190930.1516-4w (4734)
            volume1/iocage/jails/plex/root@auto-20190930.1516-4w
2019-09-30.16:16:02 [txg:25933310] snapshot volume1/iocage/jails/plex/root@auto-20190930.1616-4w (4998)
            volume1/iocage/jails/plex/root@auto-20190930.1616-4w
2019-09-30.17:16:02 [txg:25934452] snapshot volume1/iocage/jails/plex/root@auto-20190930.1716-4w (5212)
            volume1/iocage/jails/plex/root@auto-20190930.1716-4w
2019-10-01.07:27:10 [txg:25945099] destroy volume1/iocage/jails/plex/root@auto-20190930.1000-4w (357)
2019-10-01.07:27:11 <iocage> zfs destroy volume1/iocage/jails/plex/root@auto-20190930.1000-4w
2019-10-01.07:27:20 [txg:25945103] destroy volume1/iocage/jails/plex/root@auto-20190930.1416-4w (4546)
2019-10-01.07:27:23 <iocage> zfs destroy volume1/iocage/jails/plex/root@auto-20190930.1416-4w
2019-10-01.07:28:06 [txg:25945134] destroy volume1/iocage/jails/plex/root@auto-20190930.1516-4w (4734)
2019-10-01.07:28:07 <iocage> zfs destroy volume1/iocage/jails/plex/root@auto-20190930.1516-4w
2019-10-01.07:28:13 [txg:25945139] destroy volume1/iocage/jails/plex/root@auto-20190930.0900-4w (151490)
2019-10-01.07:28:14 <iocage> zfs destroy volume1/iocage/jails/plex/root@auto-20190930.0900-4w
2019-10-01.07:28:20 [txg:25945144] destroy volume1/iocage/jails/plex/root@auto-20190930.1116-4w (671)
2019-10-01.07:28:21 <iocage> zfs destroy volume1/iocage/jails/plex/root@auto-20190930.1116-4w
2019-10-01.07:29:40 [txg:25945204] destroy volume1/iocage/jails/plex/root@auto-20190930.1716-4w (5212)
2019-10-01.07:29:42 <iocage> zfs destroy volume1/iocage/jails/plex/root@auto-20190930.1716-4w
2019-10-01.07:29:57 [txg:25945217] destroy volume1/iocage/jails/plex/root@auto-20190930.1316-4w (4302)
2019-10-01.07:29:59 <iocage> zfs destroy volume1/iocage/jails/plex/root@auto-20190930.1316-4w
2019-10-01.07:32:23 [txg:25945317] destroy volume1/iocage/jails/plex/root@auto-20190930.1616-4w (4998)
2019-10-01.07:32:24 <iocage> zfs destroy volume1/iocage/jails/plex/root@auto-20190930.1616-4w
2019-10-01.07:32:39 [txg:25945328] destroy volume1/iocage/jails/plex/root@auto-20190930.1216-4w (989)
2019-10-01.07:32:41 <iocage> zfs destroy volume1/iocage/jails/plex/root@auto-20190930.1216-4w


How do I recover the destroyed volumes?
 

styno

Patron
Joined
Apr 11, 2016
Messages
466
I delete a jail in the new UI and the data set and all auto snapshots were deleted.
Just to be clear: you first did the jail delete from the GUI, then went over to the pool, selected the dataset for that jail and did the delete from the GUI as well?
If that is the case, your snapshots were deleted with the dataset as the 'live' inside that dataset. Iirc, the GUI is even warning for this and you have to confirm.
Restore from a backup or data recovery on the pool is your only option.
 
Joined
Oct 1, 2019
Messages
4
No, I only deleted the plug-in jail in the UI. I did not delete the dataset. I was surprised that it disappeared with all the snapshots.
 
Joined
Oct 1, 2019
Messages
4
No, I only deleted the plug-in jail in the UI. I did not delete the dataset. I was surprised that it disappeared with all the snapshots.

My real question, is there a way clone one of the snapshots. I was able to get plex up and running again using an older backup but would like to have the most recent configuration recovered.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
So I have the exact problem on 11.2-U6.
No, this isn't (at least as far as you've shown in your post) remotely the same as the problem in the two-year-old thread you necro'd. The problem in the rest of this thread is that data in a separate dataset mounted to a jail was destroyed when that jail was destroyed. That is not expected behavior, and certainly doesn't seem to be consistent behavior (i.e., nobody else has reported it). What you've shown is that the dataset containing the jail was destroyed when the jail was destroyed, which is exactly the expected behavior.
 
Top