Recommendations requested with a few requirements

safado

Cadet
Joined
Dec 2, 2022
Messages
6
I’m looking to build a TrueNAS Scale system and have been furiously reading and learning what I can.

I have one truly hard requirement, which is that I can’t add another rackmount to my lab or else I might get killed by my spouse.

So I’ve looked around and settled on a Fractal Design Node 804. That means a MicroATX motherboard is required. Those are the only set-in-stone requirements at this point.

I know how important RAM is to TrueNAS, and the 64GB memory ceiling of many of the available X11 platforms is a non-starter. I want to be able to have at least 128GB. ECC is required.

10Gb connectivity is a must, but I have various Intel and Mellanox cards, along with HBAs, that can serve the storage side.

Should I be considering an SoC board, or bite the bullet and look to an X12 with a Xeon E-2300 platform?

I appreciate any and all feedback. My primary use case is high-performance storage serving iSCSI and NFS datastores for virtualization.
 

MisterE2002

Patron
Joined
Sep 5, 2015
Messages
211
At least the X11SCL-F can drive 128GB just fine (all the Coffee Lake boards, probably). Are you gonna use the memory for something useful, though? For ZFS alone it's overkill.

Think about your disk setup for 10Gb. Spinning rust is the slowest part in the machine.
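To put a rough number on that, here's a back-of-envelope sketch. The ~200 MB/s per-drive figure is an assumption (typical sequential throughput for a modern 7200 rpm drive); random or mixed workloads will be far lower, so treat this as an optimistic floor:

```python
# Rough sizing sketch: how many HDDs of sequential throughput it takes
# to fill a 10GbE link. Per-drive speed is an assumed ballpark figure.
import math

link_gbps = 10                      # nominal 10GbE line rate
link_mbps = link_gbps * 1000 / 8    # ~1250 MB/s of payload, best case
hdd_seq_mbps = 200                  # assumed per-drive sequential speed

drives_needed = math.ceil(link_mbps / hdd_seq_mbps)
print(f"~{drives_needed} drives of sequential throughput to fill 10GbE")
# -> ~7 drives, and that's the best case (pure sequential reads)
```

Vdev layout matters too: mirrors generally give better random I/O for block storage than wide RAIDZ vdevs.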
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Well... The new SoC/X12 boards are tempting, but they are new: who knows if they have flaws? They seem to be quite expensive as well.
@MisterE2002's suggestion is solid. You can go for a trustworthy motherboard that can be (depending on your market) easily found used.
Its only flaw is the lack of NVMe slots for L2ARC and SLOG at the same time (which, depending on how high-performance your system needs to be, could be something to consider).
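For reference, attaching the two is just a pair of vdev additions; a sketch only, where the pool name `tank` and the device paths are placeholders for whatever your system actually has:

```shell
# Sketch: add a SLOG and an L2ARC to an existing pool.
# SLOG devices should have power-loss protection; L2ARC does not need it.
zpool add tank log /dev/nvme0n1      # dedicated SLOG (ZFS intent log)
zpool add tank cache /dev/nvme1n1    # L2ARC read cache
zpool status tank                    # verify both vdevs show up
```

The point about the board is simply that you need two fast slots free to do both at once.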
 

safado

Cadet
Joined
Dec 2, 2022
Messages
6
At least the X11SCL-F can drive 128GB just fine (all the Coffee Lake boards, probably). Are you gonna use the memory for something useful, though? For ZFS alone it's overkill.

Think about your disk setup for 10Gb. Spinning rust is the slowest part in the machine.

I’ve read that more memory is better for ZFS. I’ll definitely explore containers and VMs. Mostly, I absolutely hate being limited by a platform choice; 64GB in the current age of server platforms seems extremely limiting.
 

safado

Cadet
Joined
Dec 2, 2022
Messages
6
Well... The new SoC/X12 boards are tempting, but they are new: who knows if they have flaws? They seem to be quite expensive as well.
@MisterE2002's suggestion is solid. You can go for a trustworthy motherboard that can be (depending on your market) easily found used.
Its only flaw is the lack of NVMe slots for L2ARC and SLOG at the same time (which, depending on how high-performance your system needs to be, could be something to consider).
I could use a PCIe adapter card for the NVMe drives, assuming the board supports bifurcation, and add them that way. Right?
 

Torrone

Dabbler
Joined
Nov 15, 2022
Messages
41
bite the bullet and look to an X12 with a Xeon E-2300 platform?
I made this choice (X12STL-F), especially because in the EU the X12 boards are currently almost the same price as the X11 (even taking the second-hand market into account).
On the other hand, with this socket you are obliged to use a Xeon E-23xx, and those are rather expensive at the moment.
The X11 leaves you the choice of using CPUs from the Core, or even Xeon, ranges that are less expensive today.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The new SoC/X12 boards are tempting, but they are new: who knows if they have flaws?

X12 is not new. Rocket Lake was launched Q3'21 and isn't lighting up the Internet with gripes about how it doesn't work; this is quite different from Alder Lake/X13 with the poorly supported E-cores and P-cores business, which could well be viewed as flaws from a certain perspective.

At least the X11SCL-F can drive 128GB just fine (all the Coffee Lake boards, probably). Are you gonna use the memory for something useful, though? For ZFS alone it's overkill.

128GB is in no way "overkill" for a ZFS system serving block storage. We recommend a minimum of 64GB, and lots of people have 256GB, 512GB, or even more.


I could use a PCIe adapter card for the NVMe drives, assuming the board supports bifurcation, and add them that way. Right?

You would need to be careful about this. PCIe cards that hold M.2 NVMe drives are probably useless for this unless you have a datacenter-grade SSD with PLP (power-loss protection), or an M.2 Optane, or something like that.
 

safado

Cadet
Joined
Dec 2, 2022
Messages
6
I made this choice (X12STL-F), especially because in EU the X12 are almost at the same price as the X11 currently (even taking into account the second hand market).
On the other hand, with this socket, you are obliged to take a Xeon E-23xx and they are rather expensive at the moment.
The X11 leaves the choice to use CPUs from the Core or even Xeon range that are less expensive today.
How has your experience been with that board? The X12STL-F was on my very short list of X12 boards to consider. Curious whether you're doing anything with the SuperDOM ports? Those are new to me, and my basic understanding is that they're simply designed for boot drives?
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
So I’m looking at and settled on a Fractal Design Node 804.
Be aware, if noise was a reason to go for a consumer case, that the case is not quiet: Drive noise comes straight out through the mesh top.
If noise is not a concern, rackmount servers offer many options.
Should I be considering an SoC board, or bite the bullet and look to an X12 with a Xeon E-2300 platform?
X10SDV and X11SDV boards are indeed prime contenders at micro-ATX size. Many come with on-board 10 GbE.
Alternatively, an X11SPM with a (low-end) 1st/2nd gen. Xeon Scalable. This may not come out better than an X11SDV, though.
Either way, going for RDIMM is the best option if you want lots of RAM. Xeon E-2300, which is a consumer Core with ECC enabled, is still limited in capacity, and UDIMM is more expensive than RDIMM.
 

Torrone

Dabbler
Joined
Nov 15, 2022
Messages
41
How has your experience been with that board? The X12STL-F was on my very short list of X12 boards to consider. Curious whether you're doing anything with the SuperDOM ports? Those are new to me, and my basic understanding is that they're simply designed for boot drives?
I'm still building the system; I'm still waiting for parts to arrive, so I can't help you there yet. But other users here seem to have made this choice.
I did not opt for a SATA DOM device because it is super expensive!
Instead, I put an NVMe drive on the M.2 port for boot and another on a PCIe slot via an adapter for VMs.

You can find my full config here (I'll keep updating this until it's fully functional).
 

safado

Cadet
Joined
Dec 2, 2022
Messages
6
Be aware, if noise was a reason to go for a consumer case, that the case is not quiet: Drive noise comes straight out through the mesh top.
If noise is not a concern, rackmount servers offer many options.

X10SDV and X11SDV boards are indeed prime contenders at micro-ATX size. Many come with on-board 10 GbE.
Alternatively, an X11SPM with a (low-end) 1st/2nd gen. Xeon Scalable. This may not come out better than an X11SDV, though.
Either way, going for RDIMM is the best option if you want lots of RAM. Xeon E-2300, which is a consumer Core with ECC enabled, is still limited in capacity, and UDIMM is more expensive than RDIMM.
Thanks for the insight. Noise isn’t a concern, but another long, deep rackmount would be. Perhaps a shorter-depth option might work, but I’m aiming for a small footprint with this build.

Those SoC boards look very attractive.
 

safado

Cadet
Joined
Dec 2, 2022
Messages
6
I'm still building the system; I'm still waiting for parts to arrive, so I can't help you there yet. But other users here seem to have made this choice.
I did not opt for a SATA DOM device because it is super expensive!
Instead, I put an NVMe drive on the M.2 port for boot and another on a PCIe slot via an adapter for VMs.

You can find my full config here (I'll keep updating this until it's fully functional).
Thanks for the link!! That was a great read. Good luck with your build. Will be following to see how it turns out!
 