SOLVED how to cloud sync existing snapshots

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I can't figure out how to set up a cloud sync task that uses the existing snapshots of my pool/datasets. I am clearly missing something, but don't know what :frown:
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
You'll either need to use a target like rsync.net (and use a replication task over SSH) or use the .zfs directory structures to sync from in a cloud sync task.
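(For context: a replication task over SSH is essentially zfs send piped to a remote zfs receive. A rough sketch of what runs under the hood, with host, user, and dataset names as placeholders only:)

Code:
# send an existing snapshot to a ZFS-capable remote over SSH (all names are examples)
zfs send tank/dataset@snap-2022-08-01 | ssh user@rsync.net zfs recv backups/dataset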
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
You'll either need to use a target like rsync.net (and use a replication task over SSH) or use the .zfs directory structures to sync from in a cloud sync task.
My plan was to copy the existing snapshot directory of my dataset with a Cloud Sync Task, but I can't seem to find it.
That should be similar to the second option you proposed, if I understood correctly.
Be advised that I am just starting to explore TrueNAS and ZFS in general, so please be patient.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
OK, so the "." at the beginning of .zfs is telling the system that this directory wants to be "hidden", so the browsing interface for the cloud sync task won't show it to you.

You can either convince yourself of the path and find it at the shell like this:

Code:
cd /mnt/tank/dataset/.zfs

Or go to a path where you have a snapshot happening and manually add the /.zfs to the end of the path in the browse box.

I have no idea if that will work (allow you to save the task or actually sync), but it's the only way to get to a hidden directory.
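If you can get there, listing it should show one read-only directory per snapshot (same placeholder pool/dataset names as above):

Code:
# every snapshot of the dataset shows up as a directory here
ls /mnt/tank/dataset/.zfs/snapshot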
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
Also while we're here, I'm going to go ahead and assume that since you're new to ZFS, you're probably new to the concept(s) of snapshots too (since you refer to the live filesystem as "existing snapshot").

Some suggested reading for you.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
OK, so the "." at the beginning of .zfs is telling the system that this directory wants to be "hidden", so the browsing interface for the cloud sync task won't show it to you.

You can either convince yourself of the path and find it at the shell like this:

Code:
cd /mnt/tank/dataset/.zfs

Or go to a path where you have a snapshot happening and manually add the /.zfs to the end of the path in the browse box.

I have no idea if that will work (allow you to save the task or actually sync), but it's the only way to get to a hidden directory.
Thank you, I will try.
Also while we're here, I'm going to go ahead and assume that since you're new to ZFS, you're probably new to the concept(s) of snapshots too (since you refer to the live filesystem as "existing snapshot").

Some suggested reading for you.
Nice, I really needed it.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I have no idea if that will work (allow you to save the task or actually sync), but it's the only way to get to a hidden directory.
It didn't work. Setting the snapshot directory to "visible" in the dataset options doesn't help.
[EFAULT] Transferred: 0 B / 0 B, -, 0 B/s, ETA - Errors: 1 (retrying may help) Elapsed time: 1.7s 2022/08/02 15:19:17 Failed to copy: failed to read directory entry: readdirent /mnt/alpha/anime/.zfs: invalid argument
Code:
Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 355, in run
    await self.future
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 391, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 975, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/cloud_sync.py", line 1105, in sync_onetime
    await self._sync(cloud_sync, options, job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/cloud_sync.py", line 1127, in _sync
    await rclone(self.middleware, job, cloud_sync, options["dry_run"])
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/cloud_sync.py", line 242, in rclone
    raise CallError(message)
middlewared.service_exception.CallError: [EFAULT] Transferred:                 0 B / 0 B, -, 0 B/s, ETA -
Errors:                 1 (retrying may help)
Elapsed time:         1.7s

2022/08/02 15:19:17 Failed to copy: failed to read directory entry: readdirent /mnt/alpha/anime/.zfs: invalid argument
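(For what it's worth, that "visible" toggle seems to map to the ZFS snapdir property; the shell equivalent would be something like the following, shown only to clarify what the GUI option does:)

Code:
# inspect / change snapshot directory visibility on the dataset
zfs get snapdir alpha/anime
zfs set snapdir=visible alpha/anime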
 
Last edited:

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
OK, and you're able to find it at the shell at that exact path though?

Code:
cd /mnt/alpha/anime/.zfs

That working would mean you're taking snapshots at alpha/anime (the dataset) and that it's not just a directory in the pool's root dataset.

Perhaps having a look at zfs list -t snap | grep anime would show that clearly.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
OK, and you're able to find it at the shell in that exact path though?
I am.
Code:
root@truenas[/mnt/alpha/anime/.zfs/snapshot]# ls
auto-2022-07-27_05-00   auto-2022-07-28_06-00
auto-2022-07-27_16-00   auto-2022-08-01_00-00

perhaps having a look at zfs list -t snap | grep anime would show that clearly
Code:
alpha/anime@auto-2022-07-27_05-00                                                  280K      -      294G  -
alpha/anime@auto-2022-07-27_16-00                                                   56K      -      320G  -
alpha/anime@auto-2022-07-28_06-00                                                    0B      -      320G  -
alpha/anime@auto-2022-08-01_00-00                                                    0B      -      320G  -
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
OK, so that seems clear. It can't be done with a cloud sync task.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
You may (or may not) be able to reproduce that result (or a better one) by using the CLI tool behind the cloud sync tasks... rclone (which you could launch from a cron task once you get it right... assuming it works)

Just tested it myself... rclone seems to be limited by the filesystem boundary behind the .zfs directory.
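For reference, the kind of command involved would look roughly like this (remote name is a placeholder for whatever is configured in rclone, not a tested recipe):

Code:
# try to copy one snapshot's contents from under .zfs to a cloud remote
rclone copy /mnt/alpha/anime/.zfs/snapshot/auto-2022-08-01_00-00 remote:anime-snapshots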
 
Last edited:

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
You may (or may not) be able to reproduce that result (or a better one) by using the CLI tool behind the cloud sync tasks... rclone (which you could launch from a cron task once you get it right... assuming it works)
Well, I don't think I am going to do that.
First and foremost, I don't have the required knowledge to mess around with the CLI without breaking something.
Second, I don't have the knowledge or confidence to attempt something that isn't possible from the GUI and that no one has pioneered before, potentially slamming against hard limitations (if there is no option in the GUI, I suppose there's a reason).
Finally, now that I have a better understanding of what a snapshot is, I don't think it would achieve my desired objective: keeping a cloud "archive" of snapshots after they are automatically deleted from the NAS at the end of their lifecycle. That doesn't seem feasible within the 200 GB of free space available on MEGA.
Thank you for your help and patience, I learned something.
 
Last edited:

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
I would certainly agree that with an approach that essentially needs to copy everything each time (meaning a full copy of the filesystem would be sent for each snapshot), you'd run through your space allocation in no time.
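To put rough numbers on it from the zfs list output above: each snapshot references about 320 GiB, so even one full copy already blows past a 200 GB MEGA quota, and keeping all four would need on the order of 4 × 320 GiB ≈ 1.3 TiB.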
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
Using a tool like Duplicati, you may get closer to what you want by using incremental backups (it supports targeting MEGA and other clouds).
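The idea is that after the first full upload, each run only sends blocks that changed, so a 200 GB quota stretches much further than it would with repeated full copies.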
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Using a tool like Duplicati, you may get closer to what you want by using incremental backups. (it supports targeting mega and other clouds)
I will look into it, thanks.
Edit: once their website comes back online; right now it's down :grin:
 
Last edited:

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
I will look into it, thanks.
There's a community plugin jail already there, so you can run it up the flagpole with little effort.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
There's a community plugin jail already there, so you can run it up the flagpole with little effort.
Hurrah for our amazing community.
 