Does the 80% rule apply to R10 SSDs?

dpeley

Cadet
Joined
Jan 3, 2022
Messages
7
  • Motherboard make and model - Dell R720 - ESX host
  • CPU make and model - Dual Xeon E5-2690 - 4 vCPUs provisioned to TrueNAS
  • RAM quantity - 192 GB, 100 GB provisioned to TrueNAS
  • Hard drives, quantity, model numbers, and RAID configuration, including boot drives - for this question: 4x HPE DOPM3840S5xnNMRI 3.84 TB SAS SSDs in R10
  • Hard disk controllers - LSI 9285-8e
  • SLOG for this drive set - None
  • L2ARC for this drive set - None

I feel like the 80% rule has been discussed in depth, but for the life of me, I can't find anything discussing two specific points:
  • Does the 80% rule apply to flash drives?
    • I get that fragmentation increases with repeated use, but fragmentation doesn't impact flash drives the same. What kind of performance impact would I expect to see with SSDs?
  • Does the 80% rule apply in striped mirrors? If I set up the vDevs to be striped mirrors, does the ZFS architecture still exact its toll of a performance hit when a drive begins going over 80%? (For the purposes of this question, ignore the fact that I'm using SSDs)

The rest of this post is unimportant for the scope of the question, but in case those who want to answer are curious:
These drives will be serving as the file store for a Nextcloud instance hosting multiple users, so I expect plenty of random and simultaneous reads and writes. The VM itself will live on a drive local to the ESX host, while the repository of user data will sit on these drives, presented to the VM through ESX via iSCSI (which allows external backup solutions, namely Veeam, to back up the VM as a whole, including the user data, more cleanly).

Currently my read/writes are showing up as this on the Nextcloud datastore VM; however, I'm well aware that the writes are probably inflated due to the ARC that I've got:
[attached screenshot: read/write statistics from the Nextcloud datastore VM]
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Hard disk controllers - LSI 9285-8e

This is not acceptable for use with TrueNAS. See


I feel like the 80% rule has been discussed in depth,

It has. But you may be having problems finding it because it's not really a light switch that gets flicked on and off. There's no dead man's zone at 80.0001%. You can look at point #6 in the article below to help understand it.

Does the 80% rule apply to flash drives?

Yes. But as I think you suspect, it's a bit different.

I get that fragmentation increases with repeated use, but fragmentation doesn't impact flash drives the same. What kind of performance impact would I expect to see with SSDs?

In any important sense, it does impact flash drives similarly. The difference with flash is that you don't necessarily incur the HDD seek penalty, but you are also likely to incur a write performance limit similar to SMR HDDs as you run a flash drive out of its free page pool due to overwriting. In certain circumstances that can actually be worse.
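Since there is no single magic percentage, the practical thing to do is watch the trend over time. Here is a minimal sketch, assuming a pool named "tank" (substitute your own) on a host with the zpool CLI on the PATH; the warning thresholds are illustrative, not gospel:

```python
# Minimal sketch: poll capacity and free-space fragmentation for one pool
# so the gradual slide is visible long before any single "80%" line.
# Assumptions: `zpool` is on PATH, the pool is named "tank" (substitute
# your own), and the warning thresholds are illustrative only.
import subprocess

POOL = "tank"       # assumption: replace with your pool name
CAP_WARN = 50       # block-storage workloads start hurting well before 80%
FRAG_WARN = 30      # free-space fragmentation percentage worth watching

out = subprocess.run(
    ["zpool", "list", "-H", "-p", "-o", "name,capacity,fragmentation", POOL],
    capture_output=True, text=True, check=True,
).stdout.strip()

name, cap, frag = out.split("\t")
cap = int(cap.rstrip("%"))                          # tolerate "52" or "52%"
frag = 0 if frag in ("-", "") else int(frag.rstrip("%"))

print(f"{name}: {cap}% full, {frag}% free-space fragmentation")
if cap >= CAP_WARN or frag >= FRAG_WARN:
    print("warning: expect performance to degrade gradually as these climb")
```

Run something like that from cron and graph the two numbers; the point is that the pain shows up as a slope, not a cliff.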

Does the 80% rule apply in striped mirrors?

Yes, why would you think it wouldn't?

If I set up the vDevs to be striped mirrors, does the ZFS architecture still exact its toll of a performance hit when a drive begins going over 80%?

The ZFS architecture has nothing to do with "exact[ing] its toll". This isn't a ZFS thing. It's a classic compsci thing. You know, in the same way a hash table can be extremely wasteful of space but gets you incredible performance. It's a performance tradeoff. You burn the memory for the hash and get the performance boost. Likewise with many things in ZFS. Providing a large pool of free space and a lot of ARC allows ZFS to more easily find contiguous free space, which means fewer seeks and less CPU spent analyzing metadata to find free space. See especially #6 of


and if your next question is whether this implies that the cutoff should really have been 50%, the answer is yes, probably. And no, it isn't 50.00001% either. See the graph.
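If the hash table analogy seems abstract, here is a toy Python illustration of the same space-for-speed trade (purely illustrative; nothing ZFS-specific about it):

```python
# Toy illustration of the space-for-speed trade described above: the set
# (a hash table) costs several times the container memory of the list,
# and in exchange membership checks no longer scan the whole collection.
import sys
import timeit

N = 1_000_000
as_list = list(range(N))   # compact, but "x in as_list" scans linearly
as_set = set(as_list)      # bigger hash table, but constant-time lookups

print("list container bytes:", sys.getsizeof(as_list))
print("set  container bytes:", sys.getsizeof(as_set))

needle = N - 1             # worst case for the list: element near the end
print("list lookup x10:", timeit.timeit(lambda: needle in as_list, number=10))
print("set  lookup x10:", timeit.timeit(lambda: needle in as_set, number=10))
```

The list is cheaper to hold but every lookup pays for it; the set spends memory up front and every lookup gets the benefit. Free space and ARC buy ZFS the same kind of discount.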

however I'm well aware that the writes are probably inflated due to the ARC that I've got:

ARC only indirectly impacts write speeds. If the pool has to be accessed to retrieve metadata to find free space, then ARC is helpful. Otherwise it really is not. Instead see point #11 of the linked article.
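If you want to see how much of the read and metadata traffic the ARC is actually absorbing, the hit ratio is the number to look at. A minimal sketch, assuming a Linux-based (SCALE) system that exposes /proc/spl/kstat/zfs/arcstats; CORE publishes the same counters under the kstat.zfs.misc.arcstats sysctl tree instead:

```python
# Minimal sketch: compute the overall ARC hit ratio from the kstat file.
# Assumption: Linux-based (SCALE) system with /proc/spl/kstat/zfs/arcstats.
stats = {}
with open("/proc/spl/kstat/zfs/arcstats") as f:
    for line in f:
        fields = line.split()
        # data lines look like: "hits    4    123456789"
        if len(fields) == 3 and fields[2].isdigit():
            stats[fields[0]] = int(fields[2])

hits, misses = stats["hits"], stats["misses"]
total = hits + misses
ratio = hits / total if total else 0.0
print(f"ARC hits: {hits:,}  misses: {misses:,}  hit ratio: {ratio:.1%}")
```

A healthy metadata hit rate is exactly the case where ARC indirectly helps writes, because ZFS doesn't have to go out to the pool to find free space.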

And if you didn't guess it, your use case is generally covered by the block storage article. This will hopefully help dispel other misconceptions that might pop up along the way.

Also also, wtf is an "R10 SSD"? It's jarring and unpleasant to have to unpack your randomly picked abbreviations. Please review

 

dpeley

Cadet
Joined
Jan 3, 2022
Messages
7
Thank you for your very detailed response. I apologize for the lingo; it's used so often in my circles that I didn't consider it foreign. But you have a point, and I will try to make sure future communication uses the proper terms.

My LSI 9285-8e is flashed into "IT mode", so there's at least that, but I see the warnings against that card and will look into swapping it out for a supported one when possible.

As for the rest, you've definitely given me plenty of good information to dig into. Thanks again, and have a great week!
 

dpeley

Cadet
Joined
Jan 3, 2022
Messages
7
It's not terribly cryptic, just shorthand for RAID 10 with solid state drives, or in TrueNAS terms, striped mirror vdevs.
 

dpeley

Cadet
Joined
Jan 3, 2022
Messages
7
I'm in an MSP environment where every system must have some level of parity, and different parity levels and disk types are chosen for their application. Instead of naming our datastores something like "CloudStorage" or "Virtual Machine OS Drive", we name them something more static and informative, such as "R10-3tSSD", to denote that the datastore is backed by 3-terabyte SSDs in a RAID 10 layout. That way all future administrators can recognize the expected performance level at a glance and more easily identify which drives are part of which datastore when looking at individual drives.

However, that's just one way of looking at it, and as jgreco rightly pointed out, it has little use here.
 