Hello,
I am not new to TrueNAS; I have been running it for some time now, more or less as a testing period. Now, after a couple of months, I would like to go productive and put all my data on it. Before I do, however, I would like to know which configuration would give me the best usage and performance.
My system revolves around an ESXi host with various VMs, including TrueNAS, which has two SAS2008 HBAs passed through for the 10× 2 TB disks in the server. Up until now I had a single RAID-Z2 VDEV configured, all standard settings, nothing changed from the defaults: I believe lz4 compression was on and the record size was 128K. I have deleted that VDEV and am starting fresh now. The pool will be used solely for either LUNs or file shares (NFS and/or SMB).
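For clarity, the layout I had (and will most likely recreate) would be roughly equivalent to this on the command line, with the pool name being just a placeholder; in practice I built it through the GUI:

zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9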
What I'd like to know are a couple of things:
1) single-disk read/write speed for each of the drives
2) ZFS pool read/write speed
3) whether the speeds I am getting are what I should expect
So here is where I need help: how would I go about this? Which commands do I use?
What I did myself:
diskinfo -t /dev/da(x)
I get numbers similar to what the disks' tech specs state.
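In case it matters, to cover all ten drives I just looped over them in sh, something like this (assuming they enumerate as da0 through da9; camcontrol devlist shows the actual names):

for d in 0 1 2 3 4 5 6 7 8 9; do diskinfo -t /dev/da$d; done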
Were I on Linux, I would create a partition with parted, put a filesystem on it, mount it, and then use dd to run write and read tests like this:
dd if=/dev/zero of=/mnt/xxxx/tmp.dat bs=2048k count=50k
dd if=/mnt/xxxx/tmp.dat of=/dev/null bs=2048k count=50k
(or something like that; I haven't tried it yet, as I am neither a Linux pro nor do I have Linux controlling the HBAs)
How would I go about this under TrueNAS Core, which I currently have running?
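My rough guess, carrying the Linux approach over, would be to create a test dataset with compression turned off (so the zeros don't get compressed away) and point dd at it, along these lines, with the pool and dataset names being just placeholders:

zfs create -o compression=off tank/speedtest
dd if=/dev/zero of=/mnt/tank/speedtest/tmp.dat bs=2048k count=51200
dd if=/mnt/tank/speedtest/tmp.dat of=/dev/null bs=2048k count=51200
zfs destroy tank/speedtest

I suspect the read test would mostly be served from the ARC, though, which is part of what I'm unsure about.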
Thanks
Kosta
PS: I have also been contemplating whether to upgrade to SCALE, but since I merely want to use LUNs, NFS, or SMB, I see no benefit.