Import disk: "chunk exceed the limit" error

zhuweijing

Cadet
Joined
Nov 18, 2021
Messages
3
hello everyone,

My version: truenas-scale-22.02-rc.1-1

I encountered a problem when importing an NTFS disk that contains large files (more than 100 GiB each):
Code:
pool.import_disk
Error: Separator is not found, and chunk exceed the limit


I have googled and haven't found any relevant information on this forum. Has anyone encountered the same situation? Can you give me some tips?

thank you
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
Can you import the disk on another system and write it via NFS/SMB?
 

zhuweijing

Cadet
Joined
Nov 18, 2021
Messages
3
Can you import the disk on another system and write it via NFS/SMB?
Hi,
Yes, I can copy all of those files to my Windows PC. On TrueNAS SCALE, I can also mount the disk manually and copy all the files successfully with cp, like: mount -t ntfs /dev/sdg2 /var/run/xxx && cp -r /var/run/xxx/* /mnt/data/
I haven't fully tested writing via NFS/SMB. I tried several times and no error was reported, but it seemed to take a very long time, so I interrupted the transfer. Smaller files do transfer fine over SMB, so the SMB service itself seems to be OK.
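As a tidied-up sketch of that mount-and-copy workaround (the device name and paths below are only examples taken from the post; check lsblk for your own):

```shell
# All names here are examples -- substitute your own device and dataset.
SRC_DEV=/dev/sdg2           # the NTFS partition to copy from
TMP_MNT=/var/run/ntfs_src   # temporary mount point
DEST=/mnt/data              # destination dataset

mkdir -p "$TMP_MNT"
mount -t ntfs "$SRC_DEV" "$TMP_MNT"
cp -r "$TMP_MNT"/. "$DEST"/   # "/." also picks up dotfiles, unlike "/*"
umount "$TMP_MNT"
```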

Thank you for your reply~
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
@zhuweijing
Sounds like you have found a bug... please report it unless someone can identify a solution.
 

tony199555

Cadet
Joined
Nov 24, 2021
Messages
3
Hi, I have this exact same problem too.
In my case, I have a lot of small files and a single 83G qcow2 file. It was a system drive for Windows, and I just need to get a copy of the drive.
I'm not sure how to migrate the data, and I tried zhuweijing's method to no avail.
Thank you for helping in advance.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
I tried zhuweijing's method to no avail.
Did you identify the right partition on the right disk when following that command? Using lsblk would help with that.
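For example, asking lsblk for a few extra columns makes the NTFS partition easy to spot:

```shell
# List every block device with its size, filesystem type and mount point.
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT
```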
 

tony199555

Cadet
Joined
Nov 24, 2021
Messages
3
Did you identify the right partition on the right disk when following that command? Using lsblk would help with that.
Yes. After giving it a couple more tries, I decided to use cp to copy that single file to my dest folder, without any issue so far.
 

theSovereign

Cadet
Joined
Nov 30, 2021
Messages
2
I just ran into this issue as well, attempting to use TrueNAS' native import function to import an NTFS partition from a 4TB HGST via USB, with about 3.7TB of data to copy.

I wasn't able to use the mount command as described above. Instead, I needed to invoke kldload. My external storage source device was recognized as /dev/da0; the NTFS data was on /dev/da0p2.

# kldload fuse
# mkdir /mnt/tmp
# ntfs-3g -o ro /dev/da0p2 /mnt/tmp
# cp -r -n /mnt/tmp /mnt/my_destination_path

So far I've had the copy process error out once after ~178GB, and I needed to reboot my TrueNAS server (SuperMicro X10SDV-TLN4F) due to an alleged disk read timeout. To avoid re-copying data unnecessarily, I added the -n flag to the copy command above.

During the data transfer, I am periodically using the following command to check on the copy progress.

# echo "" && du -c -h /mnt/my_destination_path | grep total && date && echo ""

The echo and grep commands keep it fairly legible for me. Sample output from the above command:

# echo "" && du -c -h /mnt/WD\ Red\ 4\ x\ 12TB/Data/Data\ Store/\[\ I\ M\ P\ O\ R\ T\ \] | grep total && date && echo ""

339G    total
Tue Nov 30 20:12:47 EST 2021
#

I'm sure there are more effective ways to check on the progress, but I'm no *nix guru and this seems to be getting the job done for me so far. Feel free to correct or suggest better methods.
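One small refinement along the same lines: watch can re-run that size check on a timer so you don't have to (the path below is the same placeholder as above):

```shell
# Refresh the total-size check every 60 seconds; Ctrl-C to stop.
watch -n 60 'du -sh "/mnt/my_destination_path" && date'
```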
 

tony199555

Cadet
Joined
Nov 24, 2021
Messages
3
I just ran into this issue as well, attempting to use TrueNAS' native import function to import an NTFS partition from a 4TB HGST via USB, with about 3.7TB of data to copy.
I think rsync would be a better option for lots of files. It also shows progress while transferring data. You can google it; tutorials are everywhere.
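For the record, a minimal rsync invocation along those lines (source and destination paths are placeholders) might be:

```shell
# -a preserves permissions/timestamps; --info=progress2 (rsync >= 3.1)
# shows overall transfer progress. Re-running after an interruption
# skips files that have already been copied.
rsync -a --info=progress2 /mnt/tmp/ /mnt/my_destination_path/
```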
 

theSovereign

Cadet
Joined
Nov 30, 2021
Messages
2
Thanks for the suggestion, I'll look into rsync.

In the meantime, my copy operation is still going strong...

2.8T total
Wed Dec 1 06:15:58 EST 2021
 