Is there a guide for testing performance of a pool?

junior466

Explorer
Joined
Mar 26, 2018
Messages
79
I have a system with the following specs:


Build: FreeNAS-11.2-U8
CPU: Intel(R) Xeon(R) CPU E5-2430L v2 @ 2.40GHz
RAM: 48GB
Networking: Dual 10Gb NIC
zpool: 4 mirror vdevs (x8 WD Red 4TB PMR Disks)

Code:
NAME                                     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
N40L                                    14.5T  6.90T  7.60T        -         -     6%    47%  1.00x  ONLINE  /mnt
  mirror                                3.62T  2.13T  1.50T        -         -    10%    58%
    gptid/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx     -      -      -        -         -      -      -
    gptid/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx      -      -      -        -         -      -      -
  mirror                                3.62T  2.14T  1.48T        -         -    14%    59%
    gptid/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx      -      -      -        -         -      -      -
    gptid/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx      -      -      -        -         -      -      -
  mirror                                3.62T  1.42T  2.21T        -         -     2%    39%
    gptid/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx     -      -      -        -         -      -      -
    gptid/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx      -      -      -        -         -      -      -
  mirror                                3.62T  1.22T  2.41T        -         -     1%    33%
    gptid/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx      -      -      -        -         -      -      -
    gptid/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx      -      -      -        -         -      -      -


I am looking for a reliable way to test the performance of this pool. I've heard of dd tests, fio, and others, but have yet to find a decent guide on them.

And realistically, what can be expected out of such a pool for read/writes?
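
Something along these lines is what I've seen suggested elsewhere, though I'm not sure it's the right approach - the dataset name, path, and sizes below are just placeholders:

Code:
# sequential write with dd (compression turned off on the test dataset,
# otherwise zeroes compress away and the numbers are meaningless);
# ~100GB so it's comfortably larger than the 48GB of RAM
dd if=/dev/zero of=/mnt/N40L/test/ddfile bs=1m count=100000

# sequential read of the same file (larger than RAM, so it isn't just served from ARC)
dd if=/mnt/N40L/test/ddfile of=/dev/null bs=1m

# fio, if installed: 60 seconds of 4K random reads across 4 jobs
fio --name=randread --directory=/mnt/N40L/test --rw=randread \
    --bs=4k --size=4g --numjobs=4 --iodepth=16 --ioengine=posixaio \
    --runtime=60 --time_based --group_reporting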
 

Redcoat

MVP
Joined
Feb 18, 2014
Messages
2,925
FreeNAS has iperf and IOzone built in, which may do what you need. Look in Section 26 of the docs for info.

Did you mean "PMR" to describe your WD Reds? Hopefully they are "CMR"...
 

junior466

Explorer
Joined
Mar 26, 2018
Messages
79
FreeNAS has iperf and IOzone built in, which may do what you need. Look in Section 26 of the docs for info.

Did you mean "PMR" to describe your WD Reds? Hopefully they are "CMR"...
Sorry I meant to say CMR.
I will look into the section of the manual you mentioned.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
And realistically, what can be expected out of such a pool for read/writes?
First thing to understand about a pool of mirrors is that what you should expect from it is more IOPS, not necessarily more throughput (although maybe similar in some conditions) compared to RAIDZ.

I assume this is what you're after.

What you need to understand about your setup in particular is this:

Your VDEVs seem to have been added to the pool over time, so your data isn't evenly distributed across them.

This will result in ZFS electing to put most of the new data on the newer (most empty) VDEVs, which will be a limiting factor to your pool IOPS.

If you want to see the performance (IOPS) of a 4 mirror pool, you'll need to clear the pool out (probably best to recreate it if you're clearing it anyway... use the same name) and put the data back from backup (which will then be evenly distributed across the VDEVs).

Depending on the workload you intend the pool to serve, you may want to consider a mirrored special/metadata VDEV of SSDs, which may also help overall pool performance by keeping the metadata more quickly accessible (although you'd need to upgrade to TrueNAS CORE for that).
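
If you want to see the imbalance for yourself, something like this (using your pool name; the interval is arbitrary) will show per-VDEV capacity and how writes are being spread while you copy data:

Code:
zpool list -v N40L        # per-VDEV ALLOC/FREE/CAP, as in your output above
zpool iostat -v N40L 5    # per-VDEV ops and bandwidth, refreshed every 5 seconds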
 

junior466

Explorer
Joined
Mar 26, 2018
Messages
79
First thing to understand about a pool of mirrors is that what you should expect from it is more IOPS, not necessarily more throughput (although maybe similar in some conditions) compared to RAIDZ.

I assume this is what you're after.

What you need to understand about your setup in particular is this:

Your VDEVs seem to have been added to the pool over time, so your data isn't evenly distributed across them.

This will result in ZFS electing to put most of the new data on the newer (most empty) VDEVs, which will be a limiting factor to your pool IOPS.

If you want to see the performance (IOPS) of a 4 mirror pool, you'll need to clear the pool out (probably best to recreate it if you're clearing it anyway... use the same name) and put the data back from backup (which will then be evenly distributed across the VDEVs).

Depending on the workload you intend the pool to serve, you may want to consider a mirrored special/metadata VDEV of SSDs, which may also help overall pool performance by keeping the metadata more quickly accessible (although you'd need to upgrade to TrueNAS CORE for that).
Thanks for this information. I had a feeling that was the case. Luckily I have another FreeNAS host that I replicate most of my important datasets to as a backup. What is the best way for me to bring this data back when I recreate the pool? Replication?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
What is the best way for me to bring this data back when I recreate the pool? Replication?
That would be the first choice if you have it.
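
At its simplest (the GUI replication tasks essentially do this under the hood), it's a snapshot plus zfs send/receive - the dataset, snapshot, and host names here are placeholders:

Code:
# on the backup host: send the latest snapshot back to the rebuilt pool
# -R includes child datasets and their properties; -F rolls the target to match
zfs send -R backuppool/mydata@latest | ssh freenas1 zfs recv -F N40L/mydata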
 

junior466

Explorer
Joined
Mar 26, 2018
Messages
79
That would be the first choice if you have it.
And if you don't mind me asking, what would be the correct steps to destroy this pool and start over? I see the option to "Detach while marking the disks as new". I would assume this is it?

I should probably keep the share's configuration as an easier way to re-create the pool and continue where I left off?

And lastly, after the above, simply recreate the pool and start the replication?

Thank you for your help this far.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
Obvious first step... make sure you have a copy of your data that you're happy is complete and valid.

what would be the correct steps to destroy this pool and start over? I see the option to "Detach while marking the disks as new". I would assume this is it?
Use Export/Disconnect and tick the box to wipe the disks.

I should probably keep the share's configuration as an easier way to re-create the pool and continue where I left off?
Yes, that's right.

after the above, simply recreate the pool and start the replication?
Depends how your pool was structured in terms of the dataset layout (in particular the pool root dataset), but in principle, yes.
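
It's also worth recording the current dataset layout and properties before you export, so you can recreate the same structure on the new pool - for example:

Code:
zfs list -r -o name,used,mountpoint N40L        # the dataset tree
zfs get -r recordsize,compression,sync N40L     # properties you may want to re-apply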
 

junior466

Explorer
Joined
Mar 26, 2018
Messages
79
Obvious first step... make sure you have a copy of your data that you're happy is complete and valid.


Use Export/Disconnect and tick the box to wipe the disks.


Yes, that's right.


Depends how your pool was structured in terms of the dataset layout (in particular the pool root dataset), but in principle, yes.
I went ahead and destroyed the pool, but when recreating it I am only getting two mirror vdevs instead of four. Is there a way to create four mirrors using the GUI?
 

junior466

Explorer
Joined
Mar 26, 2018
Messages
79
Update, I think I figured it out:

[Screenshot: pool creation screen showing four two-disk mirror vdevs]


Is this correct?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Update, I think I figured it out:

[Screenshot: pool creation screen showing four two-disk mirror vdevs]

Is this correct?
You've got it.
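
If you want to confirm it from the shell once the pool is created, the layout should show four two-disk mirrors:

Code:
zpool status N40L
zpool list -v N40L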

As far as performance, the question is "what workload will you be actually loading the array with?" It doesn't make sense to benchmark or tune for "large, non-overlapping, sequential I/O" when your actual workload will be "small random overwrites."
 

junior466

Explorer
Joined
Mar 26, 2018
Messages
79
You've got it.

As far as performance, the question is "what workload will you be actually loading the array with?" It doesn't make sense to benchmark or tune for "large, non-overlapping, sequential I/O" when your actual workload will be "small random overwrites."
This pool is mainly going to be used as a repository for a Veeam backup server.

I am open to suggestions as I don’t have much experience with zfs. My goal is to maximize performance.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
My goal is to maximize performance
Because "performance" of storage can be measured at least 2 ways... IOPS and throughput... you won't necessarily be able to just have "performance" at a maximum level.

You may find that the requirements will be very different based on the connection method you use (iSCSI, NFS or SMB).

What you're set up for right now is block storage (so iSCSI or NFS from ESX, where IOPS will be important).

If you were doing large, sequential file transfers over SMB (without sync writes requested), then RAIDZx would be a potential advantage (higher throughput in that scenario). Since it's a backup, you may be prepared to tolerate a higher level of risk (async writes and RAIDZ1), but plan it carefully and do your research to understand if that's a good match for what you require.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
This pool is mainly going to be used as a repository for a Veeam backup server.
Assuming you connect this as an SMB or NFS repository, I would suggest a RAIDZ2 pool configuration, with a dataset using a very large recordsize (e.g. 512K or 1M) and ZSTD compression enabled. Let Veeam handle any deduplication needs - don't try to run it on ZFS alone or "double-dip" by running it on both.

If you're using NFS, you should also set sync=disabled on the Veeam exports. Since a backup repository doesn't strictly require that level of safety (a failed job can be re-run), this mitigates the sync-write requirement of NFS, although using large records (512K/1M) will also help the spinning disks respond to sync writes faster.
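
As a rough sketch of those dataset settings (the dataset name is a placeholder, and ZSTD needs TrueNAS CORE 12.0 or later - on FreeNAS 11.x, lz4 is the stand-in):

Code:
zfs set recordsize=1M N40L/veeam
zfs set compression=zstd N40L/veeam     # lz4 on FreeNAS 11.x
zfs set sync=disabled N40L/veeam        # NFS repository only, and only if you accept the risk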
 