Read Speeds 1/10th the speed of Write Speeds

AlaskaDTD

Cadet
Joined
Jan 23, 2023
Messages
2
Hello all, I am brand new to using/configuring iSCSI. I've read many threads over the past 2 weeks addressing this exact issue (thanks @HoneyBadger and @jgreco), but I can't seem to fix my issue. I will attach a few images to show my setup and list my specs and environment below. Any information is greatly appreciated.

TrueNAS Specs
  • CPU - Intel(R) Xeon(R) CPU E5-2667 v3 @ 3.20GHz
  • RAM - 256GB DDR4 2166MHz ECC
  • HDD - This will be easier to convey with a picture. I only put the drives in separate mirrors because the internet told me to; maybe it's dumb, I don't know. Two SSDs for cache and one SSD for log.

ESXi Specs
  • CPU - Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz
  • RAM - 64GB DDR4 2166MHz ECC

Networking - 4x 10Gb/s uplinks using MPIO. I've run iperf on both ESXi and TrueNAS and I get 10Gb/s on all 4 uplinks. I don't think the network is an issue here.

TrueNAS Settings
  • Pool Status
  • Zvol
  • Portals (You'll notice I only have 3 of the 4 portals configured. That's just because I haven't felt like rebooting TrueNAS after configuring that interface. Still, 30Gb/s should be good enough for testing purposes before I put it into production.)
  • Initiators
  • Targets
  • Extents
My results using CrystalDiskMark on a Windows Server 2019 VM in ESXi

Obviously there is a lot of ground to cover and I'm sure I missed a few details, but I am open to any and all help/constructive criticism. Any help is greatly appreciated.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
What do you perceive your problem to be? Aside from the subject line of the thread, there's no clue.

Writes, especially short burst writes, are expected to go very fast because they go directly into the write cache (which is your system RAM) and you can pile up potentially several/many gigabytes of data to write before the write throttle figures out what your pool can sustain. You can turn on sync writes for everything to get a more realistic idea of what your pool is able to sustain. It won't be smoking hot with only a handful of vdevs. Looks like you have four. If you manage to get 150MBytes/sec sequential activity to all of them, then you are probably capped at 600MBytes/sec or thereabouts. Just expectation level-setting.
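If it helps to put rough numbers on that, here's a minimal back-of-the-envelope sketch in Python (the ~150MBytes/sec-per-disk figure and the idealized mirror behaviour are assumptions, not measurements):

```python
# Very rough throughput ceilings for a pool of mirror vdevs.
# Assumptions: ~150 MBytes/sec sequential per HDD, each mirror vdev
# writes at single-disk speed but can read from both sides at once.

def pool_ceilings(mirror_vdevs: int, disks_per_mirror: int = 2,
                  mb_per_disk: int = 150) -> tuple[int, int]:
    """Return (write_ceiling, read_ceiling) in MBytes/sec."""
    write_ceiling = mirror_vdevs * mb_per_disk
    read_ceiling = mirror_vdevs * disks_per_mirror * mb_per_disk
    return write_ceiling, read_ceiling

if __name__ == "__main__":
    write_mb, read_mb = pool_ceilings(mirror_vdevs=4)
    print(f"Sequential write ceiling: ~{write_mb} MBytes/sec")  # ~600
    print(f"Sequential read ceiling:  ~{read_mb} MBytes/sec")   # ~1200
```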

Reads, on the other hand, can come from ARC, L2ARC, or the pool. The pool, being built of 8 drives, works out to roughly 1200MBytes/sec if we again assume 150MBytes/sec per drive. The ARC and L2ARC need to warm up to perform well, and need relevant data (your working set) loaded in them. Additionally, make sure you tune your L2ARC fill-rate parameters; these default to very small values, which will make the L2ARC take a long time to warm up.
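If you want to check whether your benchmark reads are actually being served out of ARC, a quick sketch along these lines works (assuming TrueNAS CORE, where the ZFS counters and tunables are exposed as FreeBSD sysctls; on SCALE the equivalents live under /proc/spl/kstat/zfs/ and /sys/module/zfs/parameters/):

```python
# Rough ARC hit-ratio and L2ARC fill-rate check via FreeBSD sysctls.
# Assumes TrueNAS CORE; adjust paths for SCALE/Linux.
import subprocess

def sysctl(name: str) -> int:
    return int(subprocess.check_output(["sysctl", "-n", name]).strip())

hits = sysctl("kstat.zfs.misc.arcstats.hits")
misses = sysctl("kstat.zfs.misc.arcstats.misses")
total = max(hits + misses, 1)
print(f"ARC hit ratio: {100 * hits / total:.1f}%")

# L2ARC warm-up is throttled by these tunables (bytes per feed pass);
# the small defaults are why a large L2ARC takes a long time to fill.
for tunable in ("vfs.zfs.l2arc_write_max", "vfs.zfs.l2arc_write_boost"):
    print(tunable, "=", sysctl(tunable))
```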

Also remember that the MBytes/sec numbers I've quoted are extremely optimistic. ZFS, being a copy-on-write filesystem, only does a good job of sequential HDD access if there's gobs of free space available when writing. Things get a lot slower when the disks fill. The speeds you get at 10% occupied can feel very much like SSD speeds.
 

AlaskaDTD

Cadet
Joined
Jan 23, 2023
Messages
2
Sorry, I am realizing now that I wasn't very clear about my problem. Let me tell you what my end goal is and what I think may be an issue. I have a vSphere environment with 3 beefy hosts. Each host currently handles its own compute and storage, and each host has a 10Gb uplink to it. I want to use an iSCSI target to hold all data so the hosts can just worry about compute. This would enable me to service the hosts and ESXi significantly more quickly, even during operating hours, since I wouldn't have to use vMotion to move VMs from one host to another.

Perceived issue: After getting everything set up in a test environment and running CrystalDiskMark on a Windows Server 2019 VM, I feel as if the read speeds should be quicker than they are. Again, I am very new to both iSCSI and TrueNAS. I am a first-year sysadmin, so it's very possible I'm just misinterpreting my data. Let me ask a few more specific questions, because I feel I was vague in my original post.

  • If I have four 10Gb/s uplinks configured for MPIO, shouldn't I be getting up to 40Gb/s in my current configuration? (in CrystalDiskMark)
  • Did I do the right thing by setting up my drives the way I have them for maximum performance? At this stage, capacity isn't a concern because I can always expand if needed.
  • I tried Googling ARC and I am having a hard time wrapping my head around it. If there is a way to summarize what ARC is and how it works, how would you do it? How do I tune ARC? I can't find anything in TrueNAS but maybe I am glossing over it. Do you have any resources I could read up on?
  • Is there anything I can do to squeeze out more performance from my setup, aside from tuning? Do you need more information on my setup?
  • Finally, in a last ditch effort, is there a way I can pay TrueNAS money dollars for engineering hours to assist with setup?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
@AlaskaDTD, please post the images directly to the forum. We dislike external image hosts for a number of reasons, including:
  • They're often flaky
  • They're often shady
  • They're not very reliable, especially over the long-term
  • They require extra steps from either the poster or the reader
  • It would probably reflect poorly on iX if they were stingy about a couple of megabytes per day of images on their forums at the same time they try to sell pricey storage solutions.
If I have four 10Gb/s uplinks configured for MPIO, shouldn't I be getting up to 40Gb/s in my current configuration? (in CrystalDiskMark)
That's optimistic, but I'll leave it for someone else to address, since there's plenty of opportunity to improve things beyond the current read speeds without hitting 10Gb/s.
Did I do the right thing by setting up my drives the way I have them for maximum performance? At this stage, capacity isn't a concern because I can always expand if needed.
The main part of the pool looks sane. I question the specifics of the L2ARC and SLOG devices:
  • Two disks as L2ARC sounds like a lot, even with 256GB of DRAM.
  • I also raise an eyebrow at the "da" disk labels - it's unusual to see SATA or SAS disks in the L2ARC role in 2023.
  • Doubly so for SLOG. Of course, that would affect writes, not reads, but NVMe has been the default option for quite a while now, especially as fewer and fewer SATA devices cater to performance.
  • Generally speaking, there's very little information here on the hardware (one way to pull those details is sketched after this list):
    • What disks are you using?
    • How are they connected?
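For what it's worth, a quick way to collect those details is sketched below (assuming TrueNAS CORE; the device name "da0" is only an example):

```python
# Gather pool layout and per-disk details on TrueNAS CORE (FreeBSD).
# Device names like "da0" are examples; substitute your own.
import subprocess

def run(cmd: list[str]) -> str:
    return subprocess.check_output(cmd, text=True)

# Pool layout: which devices back the data vdevs, cache (L2ARC) and log (SLOG).
print(run(["zpool", "status", "-v"]))

# Every disk the HBA/chipset sees, with model and bus information.
print(run(["camcontrol", "devlist"]))

# Model, interface speed and rotation rate for one example device.
print(run(["smartctl", "-i", "/dev/da0"]))
```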
I tried Googling ARC and I am having a hard time wrapping my head around it. If there is a way to summarize what ARC is and how it works, how would you do it? How do I tune ARC? I can't find anything in TrueNAS but maybe I am glossing over it. Do you have any resources I could read up on?
Hard to search for in a vacuum, with everything from dodgy real estate agencies to the Argonaut RISC Core microarchitecture that resulted from the experiences with the SuperFX coprocessor for the SNES thrown into one pot.
Fortunately, the Resources section is your friend:
Finally, in a last ditch effort, is there a way I can pay TrueNAS money dollars for engineering hours to assist with setup?
"Assist with setup" is mostly a no, but iXsystems sells complete systems, including with white glove support.
 