SLOG drive? SATA OK?

ncc74656

Dabbler
Joined
May 29, 2023
Messages
17
So I read redundancy is needed, but that becomes a big complexity for a DIY setup. Is it true that each pool needs its own dedicated SLOG drive? I've got 6 SATA ports open, have 2 pools now, and plan to add another two over the year, each one with 6 drives.

I'd think a SATA SSD with a DRAM cache would be good for a ZIL SLOG? If I do need one per pool, then SATA would make sense. I have battery backup, and solar with its own battery backup, so power loss is not going to happen.

I have an x1 PCIe slot open that I can use for NVMe expansion. I was thinking of adding a cache (L2ARC) drive, since I have more TB of storage than my board supports in RAM, and I read I should have a 1:1 GB-to-TB ratio? I can't maintain that.

My goal is to be able to saturate my 10Gb network while moving large files around. If I could boost performance on small files, that would be cool too, but mostly it's the large ones that take the time to transfer.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
I'm pretty sure from your post above that you don't understand what a SLOG is (and the post doesn't give the hardware details required by the forum rules, so it's hard to provide any useful advice).

If you're just copying files around with SMB, you're possibly not doing sync writes at all, which means a SLOG isn't even involved.
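If you want to check, something like this will show it (assuming a pool named tank and a dataset named share, which are just placeholder names):

```
# Show the sync policy for the dataset (standard / always / disabled).
# Plain SMB file copies are normally async unless sync=always is set.
zfs get sync tank/share

# Watch per-vdev I/O; if a SLOG were attached, sync write traffic
# would show up under the "logs" section of the output.
zpool iostat -v tank 5
```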

 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
So I read redundancy is needed, but that becomes a big complexity for a DIY setup. Is it true that each pool needs its own dedicated SLOG drive? I've got 6 SATA ports open, have 2 pools now, and plan to add another two over the year, each one with 6 drives.

In general, you probably only want multiple pools if you have a very large system (many dozens of drives) or you need different IOPS models (RAIDZ is good for sequential throughput, mirrors are better for random I/O).

Redundancy is just something that you design in from the beginning. It's like taxes. You're going to pay them, or someday there's going to be a problem. A vdev is the basic component of a pool; see the Intro to ZFS document linked here for more information. Each vdev should have its own redundancy. For a mirror vdev, this means a second (and possibly a third) drive. If you have a 24-bay chassis, you might configure this as 12 two-way mirrors or even 8 three-way mirrors if the data was particularly valuable.

For a RAIDZ vdev, this means adding one, two, or three parity drives. A common high-resiliency design in a 24-bay chassis would be two 11-drive RAIDZ3 vdevs, which gives you 8 data drives per vdev and 16 total in the pool. This lets you also incorporate two warm spares. With 16TB drives, this would yield 256TB of pool space or about 200TB of usable space. That's down from 384TB of raw space, but so what? You have to decide which is more important to you -- your data or cheaping out on the hardware. I did pick a bit of a dramatic example, but the flip side here is that this pool design is very good at protecting large sequential files.
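Rough math behind that example, if it helps (24 bays of 16TB drives, two 11-wide RAIDZ3 vdevs plus two spares):

```
# Raw space: 24 drives x 16TB each
echo $((24 * 16))              # 384 TB raw

# Data capacity: 2 vdevs x (11 drives - 3 parity) x 16TB
echo $((2 * (11 - 3) * 16))    # 256 TB of pool space

# Keep roughly 20% free so ZFS can still find contiguous free space
echo $((256 * 80 / 100))       # ~200 TB comfortably usable
```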

I'd think a SATA SSD with a DRAM cache would be good for a ZIL SLOG?

You'd think, but you'd be wrong. The purpose of a SLOG is to guarantee the survival of sync writes (a POSIX requirement in certain environments), but this isn't always necessary -- I'd even say not usually necessary. A basic SLOG requires an SSD with proven power loss protection. If you're not going to do that, just don't bother with the SLOG.

If I do need one per pool, then SATA would make sense. I have battery backup, and solar with its own battery backup, so power loss is not going to happen.

Power loss is only one possible cause. Other examples would include system panics or wedging. The reliability of your power isn't really a consideration. You need a PLP-protected SSD at a minimum for a SLOG device; otherwise, just don't bother with the SLOG.
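And to be clear about the "one per pool" part: a SLOG is a log vdev attached to one specific pool, so yes, each pool would need its own. If you ever did go down that road with proper PLP SSDs, it would look roughly like this (pool name and device paths are placeholders only):

```
# Attach a mirrored log vdev (SLOG) to the pool "tank".
# Only worth doing with power-loss-protected SSDs.
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1

# A log vdev can later be removed with "zpool remove",
# using the vdev name shown by "zpool status".
```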

I have an x1 PCIe slot open that I can use for NVMe expansion. I was thinking of adding a cache (L2ARC) drive, since I have more TB of storage than my board supports in RAM, and I read I should have a 1:1 GB-to-TB ratio? I can't maintain that.

You seem to be mixing a few things up here.

It is recommended that you have 1GB of ARC per TB of storage as a general goal. Shorting your system of ARC will negatively impact performance. The larger your pool is, the more metadata ZFS has to keep track of, and if it cannot keep it all in ARC, then it has to be pulling it off of disk. This is very inconvenient if you would like speedy writes, because ZFS will bog down looking for the large contiguous ranges of free space that make it work really well.
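As a rough sketch of what that rule of thumb means in practice, and how to see what the ARC is actually doing (the pool size below is just an example, and the tools are the stock OpenZFS ones shipped with TrueNAS):

```
# Rule of thumb: ~1GB of RAM per TB of pool storage,
# e.g. a 120TB pool would "like" roughly 120GB of RAM for ARC.

# Summarize ARC size, hit rates, and metadata usage
arc_summary | less

# Watch ARC activity live, updating every 5 seconds
arcstat 5
```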

Those of us who virtualize TrueNAS on a hypervisor typically "tune" our actual RAM based on observed performance. For example, it turns out that our 140TB office archival filer here is just fine most of the time on 32GB of RAM, because it does very little. If I double or quadruple the RAM, it definitely becomes faster, but RAM's expensive. If it were a busy fileserver, it might very well need a lot more RAM.

The important bit to remember is that ZFS may need something in the neighborhood of 1GB per TB to do a pool import, and if you don't have it, it will then start swapping (which is what the 2GB swap partition on every data disk is all about). Having insufficient memory plus swap may leave a pool unimportable. So we all like to shoot for some reasonable amount of RAM. I sleep very well having only 32GB on the VM hosting our office filer, because I know that I can throw up to 512GB of RAM at it in mere seconds if there's some sort of crisis.

I can't maintain that.

Not to put too fine a point on it, but your inability to maintain that is really not ZFS's problem. These things are a matter of the hardware choices you make. You've already made it clear you're using some suboptimal desktop or gamer platform. It is not the fault of ZFS if your hardware is gimpy. I can easily throw together a system capable of 4TB of RAM (X11DPI class board) and tons of I/O, up to several petabytes of storage with a few JBODs. ZFS has always been a filesystem that trades system resources such as compute and memory for performance.
 

ncc74656

Dabbler
Joined
May 29, 2023
Messages
17
A lot to go over here. The main reason I chose this hardware was that I had been looking at a Synology setup; the hardware was rather skimpy for the cost of it, so I figured a DIY job would be more bang for the buck. And so far it has been. I am still out ahead monetarily building my own versus buying one of those pre-builds, and I have far more expandability.

I understand there are server platforms; the reason I went with a 12th-gen Intel was for Quick Sync transcoding on board. Also, the power usage was nice; I try to limit how much power I use because of the solar setup and everything.

I currently have two open RAM slots; being a gaming platform, it obviously only takes four sticks. I can put two 32GB sticks in there, but I don't think it supports 64GB per stick. Either way, I can max out at around 96GB of RAM without swapping out my current two sticks. So that might be my next upgrade path.

When I started this, I didn't really know what was involved, but I wanted the learning experience. So I selected six drives per pool. It had been recommended to keep the size of the drives smaller, so as to minimize the amount of data loss on any one drive. I'm using RAIDZ1. As such, I picked up a bunch of relatively inexpensive, refurbished 10TB drives. I made two pools from 12 drives. I'm now looking to expand that with a third pool.

The other advantage I saw to doing a six-drive pool was that the cost of expansion was lower. And my understanding is that I can only add storage space by using another six 10TB drives unless I want to rebuild the entire array.

I'm not sure where I'm going to end up in storage space, but my best guess is somewhere around 300 terabytes. I bought two SAS controller cards to that end.
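Rough math behind that guess, assuming six-drive RAIDZ1 pools of 10TB drives (best case, before any ZFS overhead or free-space headroom):

```
# Each 6-drive RAIDZ1 pool: 5 data drives x 10TB
echo $((5 * 10))      # 50 TB per pool

# Pools needed to get to roughly 300TB
echo $((300 / 50))    # 6 pools = 36 drives total
```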

It sounds to me like a large cache drive probably isn't needed, but perhaps I should look at picking up an Optane drive for metadata?

At the end of the day, the other side of this is just learning things. So even if something isn't strictly necessary, it's still fun to play around with.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
It had been recommended to keep the size of the drives smaller, so as to minimize the amount of data loss on any one drive
That might be true of UNRAID, but it is not at all true of ZFS... you lose everything if you lose more disks in a vdev than its redundancy allows (for RAIDZ1, that's 2 or more disks lost = pool death).

It sounds to me like a large cache drive probably isn't needed, but perhaps I should look at picking up an Optane drive for metadata?
A metadata VDEV will need to be at least as redundant as your data VDEVs (or you risk losing all data when it fails, as the pool requires all VDEVs to be present and sufficiently healthy in order to import).
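If you do add one, it goes in as its own (mirrored) vdev on the pool, something like this, with the pool name and device paths purely as placeholders:

```
# Add a mirrored special (metadata) vdev to "tank".
# If this vdev is lost, the whole pool is lost, so it needs
# at least the same level of redundancy as the data vdevs.
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1
```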
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
What hardware - all you have done is make vague references to some hardware.
Details matter. What hardware are you looking at - and what is your use case?
 

ncc74656

Dabbler
Joined
May 29, 2023
Messages
17
What hardware - all you have done is make vague references to some hardware.
Details matter. What hardware are you looking at - and what is your use case?
B660 Pro board
32GB DDR4 RAM
12 × 10TB spinning disks in 2 pools
LSI HBA with a couple of expanders in the mail
12th-gen i3
Open-air case (test-bench style) with 3D-printed HDD racks next to it
750W PSU
256GB NVMe for the OS
10Gb NIC in the works
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
What LSI HBA, and which expanders?
Which 10Gb NIC?

And Use Case

This is like pulling teeth
 

ncc74656

Dabbler
Joined
May 29, 2023
Messages
17
LSI 9223-8i with an
IBM 46M0997 expander
NIC is a Mellanox MCX311A-XCAT CX311A

I don't have all these parts yet; some of them are still in the mail.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Make sure that LSI card is flashed with the right (IT mode) firmware. By default it seems to ship as a RAID card.
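One quick way to check, assuming the LSI sas2flash utility for SAS2-generation cards is available on the system:

```
# List the adapter and its firmware; you want IT-mode firmware,
# not the IR / RAID firmware these cards often ship with.
sas2flash -list
```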
 

ncc74656

Dabbler
Joined
May 29, 2023
Messages
17
I have that card and it's working as of now. I get the expanders in a couple of weeks.

I think what I was looking for was a special vdev for metadata. I've been reading up on it at Level1Techs. It sounds cool. Maybe two 1TB NVMe drives for small files and metadata? Then I need to figure out how to make Plex store its metadata there too, or maybe Plex metadata is small enough that it will automatically land there. Still learning.
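From what I've read so far, the relevant knobs look roughly like this (not tested yet, and the pool/dataset names are just examples):

```
# Blocks at or below this size get stored on the special vdev
# instead of the spinning disks; keep it below the dataset's
# recordsize or everything will land on the special vdev.
zfs set special_small_blocks=64K tank/plex

# Double-check the two settings together
zfs get recordsize,special_small_blocks tank/plex
```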
 