Splitting a dataset into two child datasets

neofusion · Contributor · Joined: Apr 2, 2022 · Messages: 159
I currently have a dataset that looks something like this:
Code:
/rootdataset/childdatasetA


I would like to split the contents of childdatasetA into two new datasets like so:
Code:
/rootdataset/childdatasetA/childdataset1
/rootdataset/childdatasetA/childdataset2


Is there a way to do that without a lengthy cp/mv/replication/rsync? childdatasetA currently consists of roughly 15 TB of data and would be split into two parts of roughly equal size.

Compression, record size, etc. will stay the same. The reason for the split is that I've concluded some of the contents currently on childdatasetA would be better served by different snapshot and replication tasks.
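For reference, my plan is to create the two new (empty) datasets under childdatasetA roughly like this, letting them inherit compression, record size, etc. from the parent (rootdataset here being the pool name):
Code:
zfs create rootdataset/childdatasetA/childdataset1
zfs create rootdataset/childdatasetA/childdataset2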
 
Joined: Oct 22, 2019 · Messages: 3,641
Is there a way to do that without a lengthy cp/mv/replication/rsync?
Datasets are individual filesystems. To populate a brand new, empty dataset, you need to copy data onto it.
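So one way or another it comes down to something along these lines (paths are illustrative, "some_folder" is just a placeholder, and TrueNAS mounts pools under /mnt):
Code:
# Copy one portion of childdatasetA into the new, empty dataset,
# preserving permissions, hard links, ACLs and extended attributes
rsync -aHAX /mnt/rootdataset/childdatasetA/some_folder/ /mnt/rootdataset/childdatasetA/childdataset1/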


The reason for the split is that I've concluded some of the contents currently on childdatasetA would be better served by different snapshot and replication tasks.
This is why planning ahead before making a pool goes a long way. :cool: I know how you feel. Trust me.


UPDATE: However, you might be... in luck? Maybe? Sort of? Since you're on SCALE, you may in fact be on OpenZFS 2.2.x, which has block-cloning support. (This is a "pool-wide" feature.) Hence, a copy operation should technically happen very quickly. (It's a metadata operation; no user data needs to be copied.)
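In practice, that means a plain cp between datasets on the same pool can clone blocks instead of rewriting them, assuming the feature and the module parameter (see below) are both enabled and your cp goes through copy_file_range, as modern coreutils does. Something like:
Code:
# With block cloning in effect, this should finish quickly even for terabytes,
# since only new block pointers (metadata) get written
cp -a /mnt/rootdataset/childdatasetA/some_folder /mnt/rootdataset/childdatasetA/childdataset1/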

TrueNAS might be using a parameter that disables block-cloning (as a precaution for the time being), but it's possible to enable it temporarily for this migration and then disable it again when you're done.
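Toggling that parameter looks roughly like this; it takes effect immediately and does not persist across reboots:
Code:
# Temporarily allow block cloning for the migration
echo 1 | sudo tee /sys/module/zfs/parameters/zfs_bclone_enabled

# ... do the copies ...

# Turn it back off when you're done
echo 0 | sudo tee /sys/module/zfs/parameters/zfs_bclone_enabled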
 

neofusion · Contributor · Joined: Apr 2, 2022 · Messages: 159
Thank you, I knew I had read something about it but my google-fu failed me.

Theoretically I would be able to do a sudo zpool set feature@block_cloning=enabled poolname?
What I read suggests this is a one-way thing; I won't be able to disable the feature unless I recreate the pool.
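I assume I can see where the feature currently stands with something like:
Code:
zpool get feature@block_cloning poolname
disabled = not enabled on the pool
enabled = enabled but not yet used
active = at least one cloned block exists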

But hopefully that corruption event is in the past and everything is fine now... hopefully.
Is this a really bad idea?

Btw, I am currently on SCALE, running Cobia 23.10.2.
Edit: I found this useful summary on what needs to be done to activate it and how to actually make use of it.
 
Joined: Oct 22, 2019 · Messages: 3,641
Theoretically I would be able to do a sudo zpool set feature@block_cloning=enabled poolname?
That enables it as a pool feature, which means once you start using block-cloning, you can no longer import your pool into an older system using an older version of ZFS.


However, upstream also added a separate (newly created) module parameter that disables block-cloning at runtime, even if the pool has been upgraded to support the feature.

You can check if it's enabled with:
Code:
cat /sys/module/zfs/parameters/zfs_bclone_enabled
1 = enabled
0 = disabled


But hopefully that corruption event is in the past and everything is fine now... hopefully.
Is this a really bad idea?
You "should" be safe? I don't believe a one-time copy of a bunch of files can trigger this race condition? (Or if there's another unknown factor that can result with unforeseen "zeros" inserted into the cloned files).

Even after concluding it's rare, upstream OpenZFS keeps this parameter disabled by default (as of 2.2.3) as a precautionary safeguard for the time being. iXsystems might have intentionally re-enabled it, since they haven't come across any data-corruption reports in the wild. (But then again, you can't be sure unless you actually go looking for the corruption, since ZFS will report the blocks as "correct".)
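If you do go this route, you can at least check afterwards whether cloning actually took place (and how much space it saved), since OpenZFS 2.2 tracks it with pool-level properties:
Code:
zpool get bcloneused,bclonesaved,bcloneratio poolname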
 

neofusion · Contributor · Joined: Apr 2, 2022 · Messages: 159
After looking at some recent issues in the OpenZFS GitHub repository documenting corruption, I'll hold off a bit on block cloning. It's a cool feature, and I'll be happy to save time by using it once it's been tested some more. It also seems to be potent at exposing other bugs that have been hiding in rarely used parts of the code.

I'll trust iXsystems to activate it when they feel it's mature enough.

In the meantime, I am halfway through manually copying the data to the new datasets and expect it to be done sometime tomorrow.
 
Joined: Oct 22, 2019 · Messages: 3,641
I'll trust iXsystems to activate it when they feel it's mature enough.
I don't use SCALE, so I can't confirm if they re-enabled it.

Can you check with:
Code:
cat /sys/module/zfs/parameters/zfs_bclone_enabled
1 = enabled
0 = disabled
 

neofusion · Contributor · Joined: Apr 2, 2022 · Messages: 159
I don't use SCALE, so I can't confirm if they re-enabled it.

Can you check with:
Code:
cat /sys/module/zfs/parameters/zfs_bclone_enabled
1 = enabled
0 = disabled
Apologies, I missed replying to that.
It's disabled by default.
 