Striped zpool with multiple single-drive vdevs or one large striped vdev

marcouan

Dabbler
Joined
Dec 25, 2021
Messages
10
Hello

From a performance point of view, which is better:
1 vdev with 8 SSDs striped
8 vdevs with 1 SSD each

Thanks
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
1 vdev with 8 SSDs striped
8 vdevs with 1 SSD each
Those are the same. It doesn't matter how you put the pool together, you would come out with the same result... 8 single-disk VDEVs. (and no redundancy... I guess you knew that).
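
For what it's worth, the command line makes this obvious; a rough sketch with placeholder device names (da0 through da7 stand in for your actual SSDs):

zpool create SSD da0 da1 da2 da3 da4 da5 da6 da7

Listing the disks without any raidz/mirror keywords gives you eight top-level single-disk vdevs, which is exactly what either GUI layout produces.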
 

marcouan

Dabbler
Joined
Dec 25, 2021
Messages
10
Hello

Thanks for your reply.
About redundancy: yes, I'm just playing around, trying to saturate dual-path 10Gb/s iSCSI links. The max I can achieve is 1400MB/s.

About the vdevs:
From the GUI you can either have 1 vdev with 8 disks striped or 8 vdevs with single disks. What I noticed is that with 8 single-drive vdevs my CPU utilization increases a lot, but performance is pretty much the same.
It's a little worse with multiple vdevs, but I guess that's due to the higher CPU utilization on TrueNAS.

I'm wondering if someone can explain the higher CPU utilization with 8 single-drive vdevs.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
Would you care to share the zpool status -v and zpool list -v for both types?
 

marcouan

Dabbler
Joined
Dec 25, 2021
Messages
10
Thanks for your help, sretalla.


For 1 vdev with 8 SSDs:
pool: SSD
state: ONLINE
config:

NAME STATE READ WRITE CKSUM
SSD ONLINE 0 0 0
  gptid/5b972916-8be5-11ec-b2c3-843497f96d58 ONLINE 0 0 0
  gptid/5ba9eae9-8be5-11ec-b2c3-843497f96d58 ONLINE 0 0 0
  gptid/5bada5c2-8be5-11ec-b2c3-843497f96d58 ONLINE 0 0 0
  gptid/5bbda648-8be5-11ec-b2c3-843497f96d58 ONLINE 0 0 0
  gptid/5bb8af19-8be5-11ec-b2c3-843497f96d58 ONLINE 0 0 0
  gptid/5bc7c2e1-8be5-11ec-b2c3-843497f96d58 ONLINE 0 0 0
  gptid/5bcbcd86-8be5-11ec-b2c3-843497f96d58 ONLINE 0 0 0
  gptid/5bd342d6-8be5-11ec-b2c3-843497f96d58 ONLINE 0 0 0

errors: No known data errors

For 8 vdevs with 1 SSD each:
pool: SSD
state: ONLINE
config:

NAME STATE READ WRITE CKSUM
SSD ONLINE 0 0 0
  gptid/58af32f7-8ccf-11ec-b2c3-843497f96d58 ONLINE 0 0 0
  gptid/58c358e1-8ccf-11ec-b2c3-843497f96d58 ONLINE 0 0 0
  gptid/58ca1cc7-8ccf-11ec-b2c3-843497f96d58 ONLINE 0 0 0
  gptid/58a328af-8ccf-11ec-b2c3-843497f96d58 ONLINE 0 0 0
  gptid/58cedd44-8ccf-11ec-b2c3-843497f96d58 ONLINE 0 0 0
  gptid/58bd00ec-8ccf-11ec-b2c3-843497f96d58 ONLINE 0 0 0
  gptid/58c98717-8ccf-11ec-b2c3-843497f96d58 ONLINE 0 0 0
  gptid/589ed254-8ccf-11ec-b2c3-843497f96d58 ONLINE 0 0 0

errors: No known data errors

And the zpool list -v output:
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
SSD 3.53T 648K 3.53T - - 0% 0% 1.00x ONLINE /mnt
  gptid/58af32f7-8ccf-11ec-b2c3-843497f96d58 444G 124K 444G - - 0% 0.00% - ONLINE
  gptid/58c358e1-8ccf-11ec-b2c3-843497f96d58 444G 48K 444G - - 0% 0.00% - ONLINE
  gptid/58ca1cc7-8ccf-11ec-b2c3-843497f96d58 444G 72K 444G - - 0% 0.00% - ONLINE
  gptid/58a328af-8ccf-11ec-b2c3-843497f96d58 444G 108K 444G - - 0% 0.00% - ONLINE
  gptid/58cedd44-8ccf-11ec-b2c3-843497f96d58 460G 76K 460G - - 0% 0.00% - ONLINE
  gptid/58bd00ec-8ccf-11ec-b2c3-843497f96d58 460G 36K 460G - - 0% 0.00% - ONLINE
  gptid/58c98717-8ccf-11ec-b2c3-843497f96d58 460G 60K 460G - - 0% 0.00% - ONLINE
  gptid/589ed254-8ccf-11ec-b2c3-843497f96d58 460G 124K 460G - - 0% 0.00% - ONLINE

So you're right, it's the same thing.
I ran the test again and it turns out the CPU load is the same; I was running the wrong test before.

Any idea why I'm stuck at 1400MB/s on my iSCSI Proxmox LUN?
2x 10Gb/s SFP+, 2x VLANs, Proxmox multipath round-robin.
I can saturate both 10Gb/s links with iperf.
I'm using fio with the best possible values for queue depth, block size, and number of jobs, but I never manage to pass 1500MB/s.
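
For reference, the checks I'm running on the Proxmox side look roughly like this (the address is a placeholder for one of my portal IPs):

multipath -ll    # confirm both paths are listed and active for the LUN
iperf3 -c 10.0.10.2 -P 2    # raw network throughput against one portal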

When running fio locally on the pool I get RAM speeds, so I should be OK there.
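
The kind of fio run I'm doing from the Proxmox side against the LUN looks roughly like this (the device path is a placeholder for the multipath device, and the block size / queue depth / job count are just the values I've been sweeping):

fio --name=seqread --filename=/dev/mapper/mpatha --rw=read --bs=1M \
    --ioengine=libaio --direct=1 --iodepth=32 --numjobs=4 \
    --runtime=60 --time_based --group_reporting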

Thanks again
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
Maybe try setting sync=disabled on the zvol you're using and see if that changes anything.
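
Something along these lines from a shell, assuming the extent is backed by a zvol (SSD/iscsi-zvol is a placeholder for its actual name):

zfs set sync=disabled SSD/iscsi-zvol
zfs get sync SSD/iscsi-zvol    # should now report "disabled"

Keep in mind that with sync disabled, acknowledged writes can be lost on a crash or power loss, so set it back to standard once you're done testing.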
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
Any idea why I'm stuck at 1400MB/s on my iSCSI Proxmox LUN?
Better but still not there:
READ: bw=1497MiB/s (1570MB/s)
Chances are there are several factors that might still be influencing performance... what you've ruled out are pool performance and sync writes... what's left are things like recordsize (volblocksize for a zvol) and threading.

If you look at this:

You will notice that the default zvol volblocksize is 8K... maybe there's a better match for your workload with a different number.
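
Note that volblocksize is fixed when the zvol is created, so trying a different value means making a new zvol; a rough sketch (the name, size and the 16K value are just examples):

zfs create -V 100G -o volblocksize=16K SSD/test-16k
zfs get volblocksize SSD/test-16k

Then point a test extent at the new zvol and re-run your comparison.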

This section suggests that for VMs it might be 4K:

Also, depending on how you're running fio, you may be bottlenecking on the CPU generating the workload and hitting an artificial limit there too, so real-world tests might be better.
 