New Build Questions

blanesmith

Cadet
Joined
Feb 24, 2023
Messages
4
I need a bit of help with hardware selection (disks, cache, SLOG, L2ARC) for a build.

Our organization currently has a 27 TB file share on Windows Server, using 6 x 6 TB HGST Ultrastars in RAID 5.

We are running out of space and we would like to pursue a TrueNAS Scale solution.

We also have a server that was donated to us that we would like to use.

Supermicro CSE-826 chassis with (12) 3.5" bays and trays and dual 920 Watt PSUs.

Supermicro X10DRC-T4+ motherboard. https://www.supermicro.com/en/products/motherboard/X10DRC-T4+

Before buying CPUs, I was thinking of dual Xeon E5-2697 v4 or dual Xeon E5-2697A v4. Not sure if there is enough of a difference between them.

Before buying RAM, I wanted to understand how to properly calculate what I would need based on TrueNAS Scale requirements, etc. I was considering maxing it out with 24 x 32GB DDR4-2400 RDIMMs.
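My rough napkin math so far, in case it helps frame the RAM question. It assumes the commonly quoted rule of thumb of a base amount for the OS plus roughly 1 GB of ARC per TB of raw storage, plus whatever the VMs are given; the drive count, drive size, and VM total below are placeholders, not decisions.

```python
# Rough RAM estimate for a TrueNAS Scale box that also hosts VMs.
# Assumes the common rule of thumb: ~8 GB base for the OS and services,
# ~1 GB of ARC per TB of raw pool capacity, plus the RAM handed to VMs.
# All figures are placeholders, not measured requirements.

base_gb = 8                  # OS, middleware, services
raw_storage_tb = 12 * 8      # e.g. 12 bays of hypothetical 8 TB drives
arc_gb = raw_storage_tb * 1  # ~1 GB of ARC per TB of raw storage
vm_ram_gb = 72               # placeholder: total RAM allocated to VMs

total_gb = base_gb + arc_gb + vm_ram_gb
print(f"Estimated RAM: {total_gb} GB")  # 176 GB with these placeholders
```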

We would like to double our storage to 60 TB of usable space and also host VMs.

We currently host a WSUS, Print Server, Application Server (simple IT tools), and 2 IT Jump "boxes".

Any help from the community on this project would be greatly appreciated.

Thank you!
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Welcome!

We need more details on the use-case. Especially from a workload/performance perspective. You can probably imagine that it makes a considerable difference whether a VM hosts a Unifi Controller or Pi-Hole vs. the DB backend for an entire factory.

TrueNAS Scale is not as stable as Core and, since you write about an "organization", this may be an issue. Again, more background on the use-case would be helpful.

SLOG is not something that makes sense for every scenario, so can you tell us why you are considering it? For more background please read this:


For hosting VMs, I would consider using a dedicated pool with only SSDs.

For more background on various topics, please also check the "Recommended readings" in my signature.
 

blanesmith

Cadet
Joined
Feb 24, 2023
Messages
4
@ChrisRJ

Thank you for your response.

Here is a bit more about our organization. We are a church with a daytime staff of about 15. Most use the file share for office documents, etc. We also have a creative team of 4, and they will be transferring about 20 to 30 GB of video footage, mostly on Sundays.

As for SLOG, I read through the SCALE Hardware Guide and saw this mentioned. I was unsure if it was needed, but when we do projects, we do them with 3 to 5 years of longevity in mind and want to avoid any re-work if possible. So I wanted to see whether specifying one now in the build would make sense.

We are close to out of space with 27 TB; most of that is video storage and the creative arts team's files. Based on file sizes and continued usage, I am looking to double the capacity of the new NAS to ~60 TB.
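As a sanity check on that target, my own napkin math looks like this. The drive size and layout are placeholders (a single 12-wide RAIDZ2 vdev with roughly 20% free-space headroom), not something I have settled on.

```python
# Rough usable-capacity estimate for a 12-bay chassis.
# Placeholder assumptions: 12 x 8 TB drives in one RAIDZ2 vdev, with
# ~20% of the pool kept free as headroom. Real numbers will differ a
# bit due to TB/TiB conversion and ZFS metadata overhead.

drives = 12
drive_tb = 8        # hypothetical drive size
parity_drives = 2   # RAIDZ2
headroom = 0.20     # fraction of the pool left free

raw_tb = drives * drive_tb
after_parity_tb = (drives - parity_drives) * drive_tb
usable_tb = after_parity_tb * (1 - headroom)

print(f"Raw: {raw_tb} TB, after parity: {after_parity_tb} TB, "
      f"comfortably usable: ~{usable_tb:.0f} TB")  # ~64 TB
```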

Our VMs:

Windows 10 Enterprise (8GB RAM, 2 vCPU, 100GB vhdx) - Jump box for working on the network.

Windows 2016 Server (16GB RAM, 1 vCPU, 100GB vhdx) - Print Server with 2 printers.

Windows 2016 Server (16GB RAM, 2 vCPU, 100GB vhdx) - Secondary Domain Controller

Windows 2016 Server (16GB RAM, 2 vCPU, 100GB vhdx) - Runs Azure AD Connect, FM Audit (Toshiba Printer Utility for print quota billing), and Action 1 Deployment Utility

Windows 10 Enterprise (16GB RAM, 2 vCPU, 100GB vhdx) - Runs Plex Server - Would like to move this to a docker. Media is stored on file server share.

Hope this is helpful.

Thank you!
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
I would personally avoid putting everything into a single box and instead have one machine for the NAS and one for the VMs. It may be less efficient, but it reduces complexity, which means a smaller risk of something going wrong. That was also the reason I set up my own environment the way it is:
  • NAS with TrueNAS Core: Considered business critical, I mean the data of my business is on it
  • Small ESXi box (2 core Celeron 3900 with 16 GB RAM): Always-on VMs, not powerful but very power-efficient
  • 2 XCP-ng boxes with 8C/16T and 128 GB RAM each: Powered up on demand for project work that needs more punch

Either way, my recommendation would be to run the VMs off of SSDs only. The workloads do not seem to be hugely write-intensive, so a good(!) consumer SSD should be ok. Avoid the QLC ones, but go for TLC and be prepared to replace them after a few years. Chances are they last much longer, but still. @jgreco has been very successful with this approach and perhaps he can add something ...

The VMs should be backed up at least daily to the regular HDD pool. Since you don't seem to hold critical data on the VMs, that approach should be ok; perhaps the schedule needs adjustment.
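As one possible shape for that, here is a minimal sketch of a daily snapshot-and-replicate job to the HDD pool. The dataset names (ssd/vms, tank/backup/vms) are made up for the example, and on TrueNAS the built-in Periodic Snapshot and Replication tasks can do the same thing from the GUI without any scripting.

```python
# Minimal sketch of a daily "snapshot the VM dataset, copy it to the
# HDD pool" job. Dataset names are hypothetical placeholders.
import subprocess
from datetime import date

SRC = "ssd/vms"          # hypothetical SSD dataset holding the VM storage
DST = "tank/backup/vms"  # hypothetical target dataset on the HDD pool
snap = f"{SRC}@daily-{date.today():%Y%m%d}"

# Take a recursive snapshot of the VM dataset.
subprocess.run(["zfs", "snapshot", "-r", snap], check=True)

# Full replication stream shown for simplicity; after the first run a
# real job would send incrementals with "zfs send -i <previous snap>".
send = subprocess.Popen(["zfs", "send", "-R", snap], stdout=subprocess.PIPE)
subprocess.run(["zfs", "recv", "-F", DST], stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```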

SLOG is something you would not need when running the VMs off of SSDs. If you haven't done so already, you should really read the provided resources.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
@jgreco has been very successful with this approach and perhaps he can add something ...

Just try to make a good estimate of your actual write workload. I've been using carefully picked consumer-grade SSDs for at least a dozen years, with very few unexpected burnouts and also a very high percentage of predicted burnouts.

For example, back in ... was it 2015? The Intel 535 480GB was the hot stuff at about $170/each on Black Friday. This drive had an endurance rating of 40GB/day or 73TBW; when I pulled the hypervisor write stats, I was able to come up with a credible 100-300TBW endurance requirement. The price per GB had been in significant decline in 2015, so I made a bet that at worst I would replace the drives with a higher-endurance, lower-cost part in perhaps a year or two. I was deploying them as RAID 1 with a hot spare, so that was great. Basically I bought these expecting to burn them out. Most of them did eventually burn out, but a few lasted to the five-year warranty expiration -- for the warranty burnouts, Intel RMA supplied newly manufactured Intel 545s SSDs.
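The arithmetic behind that kind of estimate is simple enough to sketch if you want to run the same exercise; the numbers below are placeholders, not my actual stats.

```python
# Back-of-envelope SSD endurance estimate from observed write rates.
# All numbers are placeholders; plug in your hypervisor's real stats.

daily_writes_gb = 100        # measured average host writes per day
write_amplification = 1.5    # rough allowance for mirroring/FS overhead
planning_years = 5           # how long the drives should last

tbw_needed = daily_writes_gb * write_amplification * 365 * planning_years / 1000
print(f"Endurance needed: ~{tbw_needed:.0f} TBW")  # ~274 TBW here

# Compare against the drive's rating, e.g. the Intel 535's ~73 TBW.
```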
 

blanesmith

Cadet
Joined
Feb 24, 2023
Messages
4
I would personally avoid putting everything into a single box and instead have one machine for the NAS and one for the VMs. It may be less efficient, but it reduces complexity, which means a smaller risk of something going wrong. That was also the reason I set up my own environment the way it is:
  • NAS with TrueNAS Core: Considered business critical, I mean the data of my business is on it
  • Small ESXi box (2 core Celeron 3900 with 16 GB RAM): Always-on VMs, not powerful but very power-efficient
  • 2 XCP-ng boxes with 8C/16T and 128 GB RAM each: Powered up on demand for project work that needs more punch

Either way, my recommendation would be to run the VMs off of SSDs only. The workloads do not seem to be hugely write-intensive, so a good(!) consumer SSD should be ok. Avoid the QLC ones, but go for TLC and be prepared to replace them after a few years. Chances are they last much longer, but still. @jgreco has been very successful with this approach and perhaps he can add something ...

The VMs should be backed up at least daily to the regular HDD pool. Since you don't seem to hold critical data on the VMs, that approach should be ok; perhaps the schedule needs adjustment.

SLOG is something you would not need when running the VMs off of SSDs. If you haven't done so already, you should really read the provided resources.
Thank you for your help and I will definitely read the provided resources.
 

blanesmith

Cadet
Joined
Feb 24, 2023
Messages
4
Just try to make a good estimate of your actual write workload. I've been using carefully picked consumer-grade SSDs for at least a dozen years, with very few unexpected burnouts and also a very high percentage of predicted burnouts.

For example, back in ... was it 2015? The Intel 535 480GB was the hot stuff at about $170/each on Black Friday. This drive had an endurance rating of 40GB/day or 73TBW; when I pulled the hypervisor write stats, I was able to come up with a credible 100-300TBW endurance requirement. The price per GB had been in significant decline in 2015, so I made a bet that at worst I would replace the drives with a higher-endurance, lower-cost part in perhaps a year or two. I was deploying them as RAID 1 with a hot spare, so that was great. Basically I bought these expecting to burn them out. Most of them did eventually burn out, but a few lasted to the five-year warranty expiration -- for the warranty burnouts, Intel RMA supplied newly manufactured Intel 545s SSDs.
Thank you.
 