I mounted an NFS share from TrueNAS SCALE, and it's very slow.

Nostal Yuu (Cadet · Joined Jan 30, 2024 · Messages: 2)

I'm running TrueNAS SCALE as a VM on Proxmox VE.

My server spec:
[screenshot: server specs]


I set up a Linux bridge that isn't connected to any physical network port; this vmbr is only for networking between the internal VMs.
My TrueNAS VM is at 10.0.0.3, and I also have an Ubuntu VM at 10.0.0.2.
My NFS mount command:
Code:
mount -t nfs -o rsize=8192,wsize=8192 10.0.0.3:/<pathToMyDataset> /<pathToMyMountpoint>

I configured the MTU to 9000.
[screenshot: MTU set to 9000]
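For reference, setting the MTU from a shell on the Proxmox host looks roughly like this; vmbr1 is just an example bridge name, and the guest NICs inside the TrueNAS and Ubuntu VMs need the same MTU for jumbo frames to take effect (a persistent setting belongs in /etc/network/interfaces or the Proxmox GUI):
Code:
# set jumbo frames on the internal bridge (bridge name is an example)
ip link set dev vmbr1 mtu 9000
# confirm the change
ip link show vmbr1 | grep -o 'mtu [0-9]*'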


The problem I'm facing is that NFS throughput is extremely slow compared to Samba.
I use this command to test speed:
Code:
dd if=/dev/zero of=/<pathToMyMountpoint>/testfile bs=1M count=512

And I only got ~150 MB/s:
[screenshot: dd result over NFS, ~150 MB/s]

Compared to Samba:
[screenshot: dd result over Samba]


I searched through a lot of documentation and tried many options, but nothing worked at all.
The execution time of the NFS I/O requests seems to be far too long:
[screenshot: NFS I/O execution times]

But I don't know how to fix this. :(
I hope someone can help me. Thanks a lot.
 

Nostal Yuu (Cadet · Joined Jan 30, 2024 · Messages: 2)

I left the rsize and wsize options at their defaults and disabled sync on the TrueNAS dataset; the speed has now increased to ~1.2 GB/s.
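For anyone who hits the same issue, the shell equivalent is roughly the following; <pool>/<dataset> is a placeholder, modern Linux clients negotiate large rsize/wsize values on their own, and note that sync=disabled trades crash safety for speed (the property can also be changed in the SCALE dataset settings):
Code:
# mount with default rsize/wsize (the client negotiates large values automatically)
mount -t nfs 10.0.0.3:/<pathToMyDataset> /<pathToMyMountpoint>
# on TrueNAS: stop forcing synchronous writes for this dataset
zfs set sync=disabled <pool>/<dataset>
# verify the property
zfs get sync <pool>/<dataset>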
 

Kailee71 (Contributor · Joined Jul 8, 2018 · Messages: 110)

Default options for NFS are usually safe and sensible these days. Same for MTU; the gains are usually negligible. I think the great big signpost in the sky in this case was the SMB transfer rate: assuming you're on 10GbE, your max throughput is under 1000 MiB/s even with an _extremely_ fast pool. You were seeing SMB writes being done async to storage, while the NFS writes were being done sync (independently of your NFS mount options!), as you correctly deduced. I personally will never take the risk of NFS data being written async to storage (the guarantees and requirements on the client side are different between SMB and NFS), and for that reason I got a relatively fast SLOG vdev (an RMS-200), which buffers the first 8 GiB or so pretty quickly. But that is a whole other rabbit hole to dive into.
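For anyone curious what that approach looks like in practice, a rough sketch; pool, dataset, and device names are placeholders, and the SLOG device should be a low-latency drive with power-loss protection:
Code:
# check how the dataset handles synchronous writes ("standard" means NFS commits wait on the ZIL)
zfs get sync <pool>/<dataset>
# add a dedicated SLOG vdev so sync writes land on fast stable storage
zpool add <pool> log /dev/nvme0n1
# confirm the log vdev is attached
zpool status <pool>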
 