Questions regarding performance

f!ReW4LL

Dabbler
Joined
May 24, 2019
Messages
32
Hello people

I have tested some other software like OMV to see if I could benefit from it, which unfortunately was not the case at all.
Now I would like to have a little more performance in my NAS (server).

Hardware:
Mobo: X570D4U-2L2T
RAM: Kingston ECC 64GB, soon to be upgraded to 128GB
4x 8TB IronWolf
1x 500GB FireCuda 520 (system)
1x 1000GB FireCuda 520 (VMs/backups)
CPU: AMD Ryzen 3700X

I have always had the pool configured as RAIDZ1... so 3 data and 1 parity, but unfortunately the performance was not very good. I repeatedly noticed something like a pumping effect. Now I would like a little more performance, and I find that a stripe brings enormously more performance but unfortunately no redundancy. Of course, the data is backed up once on the smaller server as well as on an external hard drive, but I still feel insecure with a stripe.

Now I have come across the layout of 2x mirror vdevs, which should also give a performance boost (roughly what I sketch below). Is this correct? Here too, however, I notice a pumping problem, but the speeds are still reasonably good. Now I wonder: if I create a 3x mirror vdev pool, will the performance increase further? I am planning to buy 2 more HDDs.
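
By that I mean pool layouts roughly like this (the disk names are just placeholders, not my actual devices):

Code:
# 2x mirror vdevs (4 disks): two mirrored pairs striped together
zpool create tank mirror ada0 ada1 mirror ada2 ada3

# 3x mirror vdevs (6 disks, after buying 2 more HDDs)
zpool create tank mirror ada0 ada1 mirror ada2 ada3 mirror ada4 ada5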

I use this mostly for Plex and data storage.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
If you're measuring performance as IOPS, then more VDEVs (in your proposed case, mirrors) will deliver that.
I notice a pumping problem
It can certainly be the case that the way ZFS does transaction groups makes it look like the data is moving to the drives in waves.

Depending on your workload (maybe your testing is effectively only showing what happens with one client copying files over SMB) you may find different ways to smooth out those waves.
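
If you want to see those waves for yourself, watching the pool while a copy runs makes the transaction group flushes fairly obvious (the pool name here is just an example):

Code:
# per-vdev activity refreshed every second while a copy is running
zpool iostat -v tank 1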
 

f!ReW4LL

Dabbler
Joined
May 24, 2019
Messages
32
If you're measuring performance as IOPS, then more VDEVs (in your proposed case, mirrors) will deliver that.

It can certainly be the case that the way ZFS does transaction groups makes it look like the data is moving to the drives in waves.

Depending on your workload (maybe your testing is effectively only showing what happens with one client copying files over SMB) you may find different ways to smooth out those waves.
You're right, I was connected over SMB. The transfers are going from 200-400 MB/s! So this means if I put in 2 more hard disks the speed should go up to 600 MB/s?!

What could be the best way to transfer data from one place to the other? Or to benchmark it?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
So this means if I put in 2 more hard disks the speed should go up to 600 MB/s?!
What I mentioned was IOPS, not throughput. IOPS scale roughly linearly with additional VDEVs (assuming they are there from the point of pool creation, not added as you go... re-balancing would be needed in that case).

If you want more throughput for a single session, you may get better results (at least with better economy of cost) by having RAIDZ2 (at least 5 drives wide) and/or a metadata VDEV with mirrored SSDs.
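
As a rough sketch only (device names are invented, not a command to run as-is), such a layout would look like:

Code:
# 5-wide RAIDZ2 for the bulk data
zpool create tank raidz2 da0 da1 da2 da3 da4
# mirrored special (metadata) VDEV on SSDs; it must be mirrored,
# since losing that VDEV would mean losing the pool
zpool add tank special mirror nvd0 nvd1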

What could be the best way to transfer data from one place to the other? Or to benchmark it?
It depends on the data... Are you talking entire datasets or ZVOLs? Large files, small files, a mix? (tuning recordsize may already help with some of that ... https://openzfs.github.io/openzfs-docs/Performance and Tuning/Workload Tuning.html#dataset-recordsize)
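
For example, on a dataset holding mostly large media files, a bigger recordsize is a common starting point (dataset name assumed, and it only affects newly written files):

Code:
# larger records suit big sequential files, e.g. Plex media
zfs set recordsize=1M tank/media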

Are you working with VM storage?

Benchmarking is an even tougher question, as almost all the benchmarks I've seen just end up pointing out things that we already know: when you have a single client, things that work better with multiple clients look artificially bad; when your pool is low on IOPS capacity, you get bad performance on small files.
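
If you do want a repeatable number that isn't just "how fast is SMB from one client", fio run locally against the pool is at least consistent; a sketch only (path, sizes and job counts are arbitrary):

Code:
# sequential write with large blocks, straight onto the pool
fio --name=seqwrite --directory=/mnt/tank/test --rw=write --bs=1M \
    --size=4G --numjobs=1 --end_fsync=1
# small random reads, closer to a many-small-files / multi-client feel
fio --name=randread --directory=/mnt/tank/test --rw=randread --bs=4k \
    --size=4G --numjobs=4 --runtime=60 --time_based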

This is an interesting comparison between iSCSI and NFS

Also a good example of using a more balanced performance tool (although it's specifically measuring VM disk performance in this example)
 

f!ReW4LL

Dabbler
Joined
May 24, 2019
Messages
32
Hi Sretalla

Thank you very much for your time and detailed answer. I will also use it to get a bit smarter.

So for me it will serve exclusively as small data storage: backups of data from my PC and the family's PCs. It also serves to store videos, e.g. for Plex.

So a mix of small and big files. And I am talking about datasets, not ZVOLs. This is all handled by Proxmox and TrueNAS runs in a VM; the SATA controller is passed through for this.

I just created a new pool with RAIDZ2 and indeed I see much more throughput for single large files. Unfortunately the SMB transfer was under 100 MB/s, but I think it will settle down a bit over time.

I guess with RAIDZ2 I should also be on the safe side in case one hard disk fails, or even two...

Will a stripe of 3 mirror vdevs not be faster than RAIDZ2?

Regards
 
Last edited:

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
Will a stripe of 3 mirror vdevs not be faster than RAIDZ2?
3 mirrors (hence 3 VDEVs) will certainly provide 2x more (or 3x in total) IOPS than a single RAIDZ2 (only 1 VDEV).

Generally speaking, each VDEV will only provide the IOPS of one of the member disks.

Mirrors are a little better than RAIDZ in general at handling IOPS as only one member disk needs to be involved in any read of a block compared to multiple disks for every read in RAIDZ.

IOPS isn't necessarily any indication of throughput (what most people would call speed), so it would be wrong to just call 3 Mirrored VDEVs "faster".

Throughput will depend heavily on the contents and method of the transfer in addition to the ability of the pool to handle the required IOPS.
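
To see whether the pool is actually IOPS-bound, the per-VDEV operation counts and latencies are the interesting part while your real workload runs (pool name assumed):

Code:
# per-VDEV operations, bandwidth and average latencies, every 5 seconds
zpool iostat -vl tank 5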
 

f!ReW4LL

Dabbler
Joined
May 24, 2019
Messages
32
I'm making an effort to reply to you in English even though you're also from Switzerland xD

I will test both and see which gives better performance for me. I just have to find out whether I can run this over my SATA controller or would have to buy an HBA for it :(
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
I'm making an effort to reply to you in English even though you're also from Switzerland xD
I'm from the southern side of the Röschtigraben, so would rely on Google Translate for German.

I'm also originally (13 years ago) from Australia, so a native speaker of English (and mostly working in that language too).

Usually an onboard SATA controller (on a modern board, which seems to be the case for you) will be enough for HDDs, but you may see a benefit from running via HBA if you wanted to run an all-flash (SSD) pool.

It seems you have access to 12 SATA connections already on the board, so you won't need to consider an HBA to get more ports.
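
If you want to double-check what the VM actually sees through the passed-through controller, on CORE (FreeBSD) something like this lists the disks and the controllers they hang off:

Code:
camcontrol devlist -v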
 

Kailee71

Contributor
Joined
Jul 8, 2018
Messages
110
Please please please - do read up on @jgreco's excellent primers for TrueNAS hardware (both for HBAs and 10GbE). Onboard SATA is really not the way to go unless you have no choice. HBAs are dirt cheap, so there's no reason not to get one (unless you've got no PCIe lanes available). This is not even just a performance issue but a data security issue; HBA drivers have an insane amount of proven uptime. So if you value your data, 1) read up on your hardware and 2) follow the advice you find as much as possible.

Vdev architecture is then the next topic to cover. As pointed out already, striped mirrors are often the go-to layout when max throughput is targeted, at the cost of capacity. However, there are other advantages too: resilvering of simple mirrors is *much* quicker, reducing the likelihood of data meltdown while a vdev is rebuilding. Then there's the much easier upgrade path of replacing individual mirror disks - it's easy to do 2 resilvers and wind up with a bigger vdev without any downtime (roughly as sketched below).
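
The upgrade goes something like this (pool and disk names are invented for the example):

Code:
# let the vdev grow once both members have been replaced with bigger disks
zpool set autoexpand=on tank
# swap out one side of the mirror and wait for the resilver to finish
zpool replace tank ada2 ada6
zpool status tank
# then do the same with the other side
zpool replace tank ada3 ada7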

Hope this helps,

Kai.
 

f!ReW4LL

Dabbler
Joined
May 24, 2019
Messages
32
Hello Kailee

thanks for your contribution.

Can you recommend some HBAs? I had an LSI 9300-8i but I changed some settings and didn't need it anymore :( but right now it's very expensive for me ^^ I will get one as soon as I have enough money :(

Yes, I have 3 vdevs with 2x 8TB hard drives each. It works pretty well... I get speeds over 300 MB/s when copying from one directory to another.

But what is strange: I set up a TrueNAS CORE VM and connected it to the other TrueNAS server so I can transfer data from the main server to the backup server via rsync.

Via TrueNAS CORE, the max transfer rate is around 50 MB/s... But with TrueNAS SCALE, I get up to 800 MB/s, so the 10G line is actually used.

Now what is wrong with CORE? I don't need any of the features of SCALE; besides, SCALE is still in beta and is a little slower according to various videos on YouTube. (The transfer itself is nothing special, roughly as sketched below.)
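
For reference, the transfer is basically just this (hostname and paths are placeholders):

Code:
rsync -avP /mnt/tank/data/ root@backup-nas:/mnt/backup/data/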
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Now what is wrong with CORE?

It relies on upstream FreeBSD, which, out of the box, has very poor tuning for 10G and even 1G networking. Try applying these tunables:
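
For a flavor of what's in there: they are sysctl-type tunables along these lines (the names are real FreeBSD sysctls, but the values here are only examples, not necessarily the recommended ones from the linked list):

Code:
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.mssdflt=1448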

 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Onboard SATA is really not the way to go unless you have no choice.
I have never heard that before. Can you point me to some documentation on what the issues with that approach are?

Thanks.
 

f!ReW4LL

Dabbler
Joined
May 24, 2019
Messages
32
It relies on upstream FreeBSD, which, out of the box, has very poor tuning for 10G and even 1G networking. Try applying these tunables:

Hi Samuel

Did give this a try, without success :/
 

Kailee71

Contributor
Joined
Jul 8, 2018
Messages
110
I have never heard that before. Can you point me to some documentation on what the issues with that approach are?

Thanks.
Yeah, you're right. I worded that too strongly. More appropriate would have been "if possible, use LSI-based hardware for the most proven results". In fact, onboard SATA is preferable to crappy PCIe SATA cards. Those are often limited by the number of SATA ports, and sometimes by the PCIe lanes used to connect them. On some of my older Supermicro boards the SATA interfaces were even connected through PCIe bridges, IIRC, to free up more lanes for the onboard SAS controllers and Ethernet.

Mea culpa.
 