Are SSDs overkill for a 1GbE network?

Hinterwaeldler

Dabbler
Joined
Sep 13, 2021
Messages
11
I have built a TrueNAS box for my homelab rack, and I'm in the process of selecting the hard disks for it. While doing research for this, I found that quite a lot of my assumptions about hard disks and NAS operation were shattered (like how you can add more disks in the future and how cache disks work).

Since my intuition was such a bad guide, I revisited one of my oldest beliefs, namely that HDDs are simply slow (I have to admit that I haven't touched an HDD outside of a NAS in 10 years). So I uploaded a large file to my current QNAP box, which clocked in at 90 MB/s, then put two spare SSDs in the TrueNAS box and repeated the experiment there. The SSDs were barely better at 100 MB/s. I run 1Gb Ethernet, which should max out at 125 MB/s, and with some protocol overhead 100 MB/s sounds reasonable.

Have HDDs really gotten to the point where they can saturate 1GbE? Is there actually a point in investing in SSDs for a NAS (even just for cache or SLOG) unless I also upgrade to 10GbE?
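
For reference, the back-of-the-envelope I used for the network side (a rough sketch; the overhead percentages are just assumptions, not measurements):

```python
# Rough sanity check: what can 1GbE deliver at best?
# The overhead percentages below are assumptions, not measurements.

LINK_BITS_PER_SEC = 1_000_000_000                 # 1 Gbit/s line rate
raw_mb_per_sec = LINK_BITS_PER_SEC / 8 / 1_000_000
print(f"Raw line rate: {raw_mb_per_sec:.0f} MB/s")   # 125 MB/s

for overhead in (0.05, 0.10, 0.20):               # Ethernet/IP/TCP/SMB framing etc.
    print(f"With {overhead:.0%} overhead: {raw_mb_per_sec * (1 - overhead):.0f} MB/s")
```

At around 20% total overhead you land right at the ~100 MB/s I measured.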
 

c77dk

Patron
Joined
Nov 27, 2019
Messages
468
It all depends on the use case: with random IOPS you will kill the HDDs, while the SSDs will most likely fly (unless they're really bad).
For single-stream sequential access, HDDs are still quite good.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I have built a TrueNAS box for my homelab rack, and I'm in the process of selecting the hard disks for it. While doing research for this, I found that quite a lot of my assumptions about hard disks and NAS operation were shattered (like how you can add more disks in the future and how cache disks work).

Since my intuition was such a bad guide, I revisited one of my oldest beliefs, namely that HDDs are simply slow (I have to admit that I haven't touched an HDD outside of a NAS in 10 years). So I uploaded a large file to my current QNAP box, which clocked in at 90 MB/s, then put two spare SSDs in the TrueNAS box and repeated the experiment there. The SSDs were barely better at 100 MB/s. I run 1Gb Ethernet, which should max out at 125 MB/s, and with some protocol overhead 100 MB/s sounds reasonable.

Have HDDs really gotten to the point where they can saturate 1GbE? Is there actually a point in investing in SSDs for a NAS (even just for cache or SLOG) unless I also upgrade to 10GbE?

A modern HDD can manage 200-250 MBytes/sec, or roughly 2 Gbps, on sequential read and write activities. On random read and write activities, if we allow for a generous (and unrealistically high) 300 IOPS, and pessimistic behaviour of seeking for every 512-byte sector, a HDD could easily drop to less than 154 KBytes/sec, or about 1.2 Mbits/sec. Real-world performance should fall somewhere between those two goalposts.
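
If you want to plug in your own drive's numbers, here is the same arithmetic in a few lines (a sketch only; the 300 IOPS and seek-per-512-byte-sector figures are the deliberately generous/pessimistic assumptions above):

```python
# Bracket HDD throughput between best-case sequential and worst-case random.
# The numbers mirror the generous/pessimistic assumptions above.

seq_mbytes = 225                      # modern HDD, sequential: ~200-250 MBytes/sec
print(f"Sequential: {seq_mbytes} MBytes/sec = {seq_mbytes * 8 / 1000:.1f} Gbits/sec")

iops = 300                            # generously high random IOPS for a HDD
sector = 512                          # pessimistic: a full seek for every sector
rand_bytes = iops * sector
print(f"Worst-case random: {rand_bytes / 1000:.0f} KBytes/sec "
      f"= {rand_bytes * 8 / 1_000_000:.1f} Mbits/sec")
```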

As @c77dk says, it will depend on the use case.
 

Hinterwaeldler

Dabbler
Joined
Sep 13, 2021
Messages
11
Thanks for your replies; I was not really thinking about random access. The main use case for my NAS is to swallow backups of virtual disks from Proxmox, so I should be fine with a few HDDs.
 

Hinterwaeldler

Dabbler
Joined
Sep 13, 2021
Messages
11
Ah, I think I get it now. I was researching SSD caches because I wanted to speed up copying large backups, and I was surprised that I did not find an option in TrueNAS for a write cache that lands data on an SSD first and moves it to the HDDs later, like QNAP does. But HDDs are already good enough for that. So instead of a write cache I will use the SSDs as a metadata cache, and speed up things like directory browsing, which HDDs are bad at because the access pattern is more random.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
ZFS has write caching. It's just done in main memory, and is generally more massive than anything else out there: on a 64GB system, you might easily be using 8GB as "write cache". There is no point in staging to an SSD. Either your pool can keep up with the demanded write rate, or eventually things have to slow down to catch up. With an SSD, this would happen when the SSD was full, but staging everything through an SSD involves a lot of wear and tear on the SSD in the meantime. ZFS does nothing so stupid, because main memory has effectively infinite endurance; instead, ZFS throttles the speed of write activity if the sustained write level exceeds what the pool is capable of, rather than letting a client slam into a "the write cache is full" situation and hard-stall.
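
If you're curious how much dirty data ZFS will buffer in RAM on your own box, you can peek at the tunable. A minimal sketch, assuming OpenZFS on Linux; on FreeBSD-based TrueNAS CORE the same value is exposed as the sysctl vfs.zfs.dirty_data_max instead:

```python
# Peek at how much dirty (not-yet-written) data ZFS will hold in RAM.
# Path below assumes OpenZFS on Linux; TrueNAS CORE exposes the same
# tunable as the sysctl vfs.zfs.dirty_data_max instead.

from pathlib import Path

param = Path("/sys/module/zfs/parameters/zfs_dirty_data_max")
if param.exists():
    max_dirty = int(param.read_text())
    print(f"zfs_dirty_data_max: {max_dirty / 2**30:.1f} GiB of RAM-based write buffering")
else:
    print("Parameter not found - not Linux/OpenZFS; check sysctl vfs.zfs.dirty_data_max")
```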
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
So instead of a write cache I will use the SSDs as a metadata cache, and speed up things like directory browsing
Beware what you mean here!
If you mean a (persistent) metadata-only L2ARC, you're in the clear, because the L2ARC only holds copies of the metadata.
If you mean a special vdev, beware that these are part and parcel of the pool and require redundancy, because there is no copy of the metadata on the HDDs. The loss of a single, non-redundant SSD acting as a special vdev would mean the loss of the entire pool.
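
If you ever want to double-check which of the two you actually ended up with, a rough sketch like this can flag a non-mirrored special vdev (it parses `zpool status` text output, which is not a stable interface, so treat it as illustrative only; the pool name is just an example):

```python
# Rough check: does the pool have a special vdev, and is it mirrored?
# Parses `zpool status` text output, which is not a stable API - illustrative only.

import subprocess
import sys

pool = sys.argv[1] if len(sys.argv) > 1 else "tank"   # example pool name
lines = subprocess.run(["zpool", "status", pool],
                       capture_output=True, text=True, check=True).stdout.splitlines()

for i, line in enumerate(lines):
    if line.strip() == "special":
        nxt = lines[i + 1].strip().split()[0] if i + 1 < len(lines) else ""
        if nxt.startswith("mirror"):
            print("special vdev found, and it is mirrored")
        else:
            print("WARNING: non-redundant special vdev - losing it loses the pool")
        break
else:
    print("no special vdev in this pool (metadata stays on the data vdevs)")
```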
 

Hinterwaeldler

Dabbler
Joined
Sep 13, 2021
Messages
11
Beware what you mean here!
If you mean a (persistent) metadata-only L2ARC, you're in the clear, because the L2ARC only holds copies of the metadata.
If you mean a special vdev, beware that these are part and parcel of the pool and require redundancy, because there is no copy of the metadata on the HDDs. The loss of a single, non-redundant SSD acting as a special vdev would mean the loss of the entire pool.

I was planning on two mirrored 500GB WD Red SSDs as a metadata vdev for an 18TB pool.
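
As a sanity check on the sizing (a very rough sketch; the 0.1-0.3% metadata ratio is just a commonly quoted rule of thumb, and actual usage depends on recordsize, file count, and whether small blocks go to the special vdev too):

```python
# Very rough sizing check for a metadata-only special vdev.
# The 0.1-0.3% metadata ratio is a commonly quoted rule of thumb, not a guarantee.

pool_tb = 18
ssd_gb = 500           # usable capacity of the mirrored pair

for ratio in (0.001, 0.003):
    metadata_gb = pool_tb * 1000 * ratio
    print(f"At {ratio:.1%} metadata: ~{metadata_gb:.0f} GB needed "
          f"({metadata_gb / ssd_gb:.0%} of the 500 GB mirror)")
```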
 

Hinterwaeldler

Dabbler
Joined
Sep 13, 2021
Messages
11
ZFS has write caching. It's just done in main memory, and is generally more massive than anything else out there: on a 64GB system, you might easily be using 8GB as "write cache". There is no point in staging to an SSD. Either your pool can keep up with the demanded write rate, or eventually things have to slow down to catch up. With an SSD, this would happen when the SSD was full, but staging everything through an SSD involves a lot of wear and tear on the SSD in the meantime. ZFS does nothing so stupid, because main memory has effectively infinite endurance; instead, ZFS throttles the speed of write activity if the sustained write level exceeds what the pool is capable of, rather than letting a client slam into a "the write cache is full" situation and hard-stall.

You just got me looking for more RAM before I remembered that I was already operating at the limit of my network capacity ...
 