> What kind of TrueNAS operations benefit from high IOPS?

SSDs have much higher IOPS than HDDs. SSDs in RAIDZn still have higher IOPS than mirrored HDDs, and better space efficiency than mirrors whilst maintaining parity. So if you need more IOPS than HDDs can deliver in any layout, SSDs are a valid use case.
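To make that trade-off concrete, here is a minimal sketch of the two layouts under discussion; these are generic OpenZFS commands, and the pool names and sdX devices are placeholders rather than anything from this thread:

```sh
# Use stable /dev/disk/by-id names on a real system.
# Four SSDs in RAIDZ1: roughly three disks of usable space, one of parity,
# and still better random I/O than spinning mirrors.
zpool create fastpool raidz1 sdb sdc sdd sde

# Four HDDs as striped mirrors: roughly two disks of usable space,
# but the best IOPS an HDD layout can offer.
zpool create tank mirror sdf sdg mirror sdh sdi
```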
> My TrueNAS is virtualized, and its hypervisor (Unraid) is what handles media server duties and other more recreational functions.

It would be important to consider how you are providing disks to the TrueNAS guest OS... Are you passing through the entire HBA?
> It would be important to consider how you are providing disks to the TrueNAS guest OS... Are you passing through the entire HBA?

Of course.
> AppPool - 2 SSDs. Main use for this is ix-applications and docker config files. Keeps the docker stuff snappy.

What kind of TrueNAS operations benefit from high IOPS?
I notice you have all kinds of SSDs: in your main storage pool, an SSD pool, the AppPool, and a Scratch SSD. Are you able to further elaborate on how they're used beyond SLOG and L2ARC? Do you use just one Optane 900P as a SLOG device? Doesn't the TrueNAS documentation recommend using a mirror for SLOG?
Do L2ARC drives have to be MLC like the documentation seems to imply? Would the usual Samsung 870 EVO or Crucial MX500 not suffice?
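For reference, the setup those docs describe boils down to two generic OpenZFS commands; "tank" and the NVMe device names below are placeholders:

```sh
# A SLOG briefly holds in-flight sync writes, so the docs recommend
# mirroring it; losing a lone SLOG at the wrong moment can cost data.
zpool add tank log mirror nvme0n1 nvme1n1

# L2ARC only holds disposable copies of pool data, so one unmirrored
# device is fine; if it dies you lose cache, not data.
zpool add tank cache nvme2n1
```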
My TrueNAS installation has an HBA card capable of connecting up to 8 SATA drives. Four of them will be 12TB Seagate IronWolfs (I haven't decided between striped mirrors or RAIDZ1 for the config), but I still have four more SATA ports from the second SAS port to use. I think connecting them to SSDs would make more sense than more HDDs, because I really don't need more large storage drives; I'm just looking for ideas on what to set up. There is a pretty good deal on 1TB Crucial MX500 SSDs on Amazon in my country, but from what I read in the TrueNAS documentation, consumer SATA SSDs don't seem like the best choice for ZFS's unique caching features.
For what it's worth, my main TrueNAS use case is storing large 3D projects and assets and accessing them over the network, plus continually archiving data for family and friends. My TrueNAS is virtualized, and its hypervisor (Unraid) is what handles media server duties and other more recreational functions.
My server has only 32GB of ECC RAM, and because I'm currently virtualizing TrueNAS, this of course means I'm assigning even less (8GB as of right now, with 3x3TB drives). I also do not have a 10GbE network setup. I'm still very new to TrueNAS and ZFS and just trying to figure out exactly what works for the hardware I have.

@dgrab how much memory do you have? Do you have 64GB or more and still experience ARC misses? Only then will you possibly benefit from L2ARC.
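One way to answer that ARC question empirically, using the stock OpenZFS reporting tools (nothing here is specific to this particular box):

```sh
# Summary of ARC size, target size, and hit/miss ratios.
arc_summary

# Live counters every 5 seconds: a persistently high 'miss%' while
# 'arcsz' sits at its ceiling is the case where L2ARC can help.
arcstat 5
```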
I use dedicated SSD pools for jails and VMs, and spinning-disk pools for SMB file sharing. Special vdevs for metadata also profit from using SSDs.
> how large?

Does not matter. The point is, there is no need for a size greater than 16GB.
> should I be looking for a certain sized Optane?

Aim for 16GB of useful space. Anything larger typically doesn't matter and only helps in really specific cases. Once it's set up, you can track how little it actually does via some scripts floating around.
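You don't even need special scripts for a first look; plain zpool iostat already shows how little a log vdev does ("tank" is a placeholder pool name):

```sh
# Per-vdev I/O statistics every 5 seconds; the log device's 'alloc'
# column shows how little of a big SLOG is ever actually in use.
zpool iostat -v tank 5
```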
> completely out of the question?

Yes.
> but is it worse than literally writing directly to an HDD data pool?

Yes, actually your performance will be worse with the same amount of protection.
> I could get an Optane P1600X

Don't recall if this particular model has Power-Loss-Protection, which is the key feature.
> Will any old consumer SSD be sufficient for a special vdev?

Yes.
> Or is it a much better idea to buy one of those Intel high-endurance server SSDs?

No.
> Do special vdevs get heavy writes?

No, not in the general use case. However, I think this might differ when playing with dedup tables - I'm not sure.
> I read that it's good to aim for 0.3% of the storage pool size, but I'm guessing it wouldn't hurt to overprovision that, especially if I opt into storing small blocks?

I use mine for small blocks. The beauty of sizing is that it directly relates to your mix of files and the settings on each dataset.
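Two stock commands make that relationship visible; "tank" and "tank/projects" are placeholder names:

```sh
# Send blocks of 32K and smaller from this dataset to the special vdev;
# the default of 0 stores metadata only.
zfs set special_small_blocks=32K tank/projects

# Histogram of block sizes and metadata usage across the pool, which is
# the basis for sizing estimates like the 0.3% rule. Slow on big pools.
zdb -Lbbbs tank
```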
> ZIL drive if I ever add one... except they don't connect over SATA

Back in the day, the go-to LOG devices were SATA drives.
> With the special vdev SSDs I think I'd still rather use something decent for performance rather than the cheapest DRAMless rubbish like a WD Green or Kingston A400.

It will not make a notable difference. Any SSD will be fast enough to be an improvement over not having a special vdev on an HDD pool (granted, the data composition can take advantage of one).
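If you would rather measure "decent" than guess, the usual yardstick is a sync-write micro-benchmark; a sketch using fio, where the target device and all parameters are illustrative, and the run overwrites the target:

```sh
# 4K random writes, queue depth 1, fsync after every write: a rough
# proxy for metadata and SLOG behaviour. Destructive to /dev/sdX!
fio --name=syncwrite --filename=/dev/sdX --rw=randwrite --bs=4k \
    --iodepth=1 --fsync=1 --direct=1 --runtime=30 --time_based
```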
> Would it be worth considering used SSDs?

Yes. Provide enough redundancy. For the sketchier units, add more redundancy.
> Obviously I'd be checking with the seller first that they're in good health.

Good luck with getting the true story :D
> It will not make a notable difference. Any SSD will be fast enough to be an improvement over not having a special vdev on an HDD pool (granted, the data composition can take advantage of one).

Fast enough for an improvement over no special vdev? I mean, sure. DRAM would still provide just that extra bit of performance optimization and reliability, though. DRAM SSDs with good controllers are more than affordable now, and they have much better endurance than the cheap stuff.
I've a clump of Samsung EVO 850s which have been in service since new, potentially all the way back to 2014 when the model was released. I don't trust them particularly much, as SSDs tend to die waaay more suddenly than HDDs. Therefore I use 3x.
My favorite event on that topic was a seller claiming all drives were at 100% health, as checked with something like HD Sentinel (can't remember exactly, and I don't use Windows), providing a series of screenshots. Sure, one of them claimed 100% health. Another one showed a RAID card, and another showed how there was <NO SMART DATA> at all reported through the controller.
IMO the most reasonable way is to get a few small, cheap SSDs and provide more redundancy. After all, ZFS's main strength is turning cheap hardware into reliable storage solutions.
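In zpool terms that advice comes out as something like the following; the pool and device names are placeholders, and the redundancy level is the point:

```sh
# A special vdev is not a cache: lose it and the pool is gone.
# So give it at least the redundancy of the data vdevs - here, 3-way.
zpool add tank special mirror sda sdb sdc

# Verify: the special vdev shows up as its own section in the layout.
zpool status tank
```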
> It will not make a notable difference. Any SSD will be fast enough to be an improvement over not having a special vdev on an HDD pool (granted, the data composition can take advantage of one).
> […]
Performance-wise … true. But I would choose something with "real" PLP and a decent controller.
I'd stay away from SSDs with "accelerating" DRAM caches, SLC-driven "performance areas", and dubious controller "background optimization". If it gets interrupted …
You can't compensate for that risk with any SLOG or UPS. If atomic writes are forcefully aborted, your metadata (and, in the case of a special vdev, your pool) will be toast. ZFS won't "know", since all that crap happens inside the drive(s).
If it is business critical, I'd buy enterprise/datacenter drives (with PLP) any time.
> You can't compensate for that risk with any SLOG or UPS.

The first and foremost protection from power loss is a UPS, period.
> If atomic writes are forcefully aborted, your metadata (and, in the case of a special vdev, your pool) will be toast.

Correct. But this remains true for a pool without a special vdev.
> If it is business critical, I'd buy enterprise/datacenter drives (with PLP) any time.

Correct, but to avoid seeding unnecessary doubts and confusing the OP or the argument too much -
> Interesting, so just running a special vdev at all entails a risk? A risk that wouldn't exist if you just kept all your zpool's metadata on the old data vdevs?

I wouldn't say so.
> At least in my case, I'm a relatively casual home user. While I'm storing stuff I care about, I wouldn't say there's any "mission critical" data requiring constant uptime, and obviously I store backups. This is also why I'm quite comfortable virtualizing TrueNAS from my hypervisor, even though such practice is frowned upon by the ZFS purists. I also live in an area with no natural disasters and no potential hazards interacting with the server.

Well. Yeah. But it's just hardware. Imagine it crashing from a dying PSU, or a kernel crash, or ... (fill in anything you can think of). If you don't care, fine. Do whatever you want. I would have been glad if somebody had told me to be careful when I ran my first ReiserFS-based file server.
[...]
> The first and foremost protection from power loss is a UPS, period.

Only from external power loss. If your PSU dies, it's still power loss. "Nice UPS you got there."
[...]