Planning to switch from Core to Scale - Should I use separate pools for apps and VMs?

Joined
Apr 24, 2023
Messages
9
Hi, first time posting. I am currently using Core but am tempted to switch to Scale.

I have one Kingston 120 GB SSD for boot and another 120 GB SSD for plugins (Emby/Syncthing), no VMs.

I can see myself running more VMs once I switch to Scale, and the current 120 GB SSD will not cut it. So I might be switching all the SSDs to NVMe.

Question is: should I get separate NVMe SSDs for apps and VMs? If that's the case, won't there be a minimum of 3 NVMe drives running (1 boot, 1 apps, 1 VM)? Also, is it a MUST to create mirrors for the apps and VM pools? (When I tested Scale in a VM, I tried to create a single-disk pool for apps and got a warning.) Is anyone using mirrored pools for apps/VMs?

Thx
 

unseen

Contributor
Joined
Aug 25, 2017
Messages
103
It depends on how much trouble and downtime you are prepared to put up with if a drive fails.

I have a similar setup in my current TrueNAS CORE system, except that both my boot and jail volumes are mirrored SATA SSDs. Having to dismantle the machine to replace a failed device is enough hassle for my taste. Having to restore the boot or jail devices from a backup is a pain too far - what if the backup turns out not to have some recent changes, or in the worst case, is not functional?

While NVMe SSDs are great, remember that each one will cost you four PCIe lanes and that using an add-in PCIe card to run the NVMe drives on requires that your motherboard and BIOS support bifurcation for the slot that you intend to use.
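
To put rough numbers on the lane cost, here's a quick back-of-the-envelope sketch (my own approximation - assuming roughly 0.985 GB/s of usable bandwidth per PCIe 3.0 lane and an x4 link per M.2; the drive and slot counts are just example combinations):

```python
# Back-of-the-envelope PCIe lane budget for direct-attached NVMe (no PCIe switch).
# Figures are approximate: roughly 0.985 GB/s of usable bandwidth per PCIe 3.0 lane.

GEN3_GBPS_PER_LANE = 0.985   # usable GB/s per PCIe 3.0 lane after encoding overhead
LANES_PER_NVME = 4           # each M.2 NVMe drive wants its own x4 link

def lanes_needed(num_drives: int) -> int:
    """Lanes required to give every drive a full x4 link via bifurcation."""
    return num_drives * LANES_PER_NVME

for drives, slot in [(2, 8), (4, 8), (4, 16)]:
    need = lanes_needed(drives)
    verdict = "fits, if the slot can bifurcate" if need <= slot else "does NOT fit without a PCIe switch"
    print(f"{drives} drives in an x{slot} slot: needs x{need} -> {verdict} "
          f"(~{LANES_PER_NVME * GEN3_GBPS_PER_LANE:.1f} GB/s per drive)")
```

In short: two M.2s fit nicely in a bifurcated x8 slot, but four of them need either an x16 slot split four ways or a card with a PCIe switch on it.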
 

victort

Guru
Joined
Dec 31, 2021
Messages
973
Best practice for jails is to have them on a mirrored pool and to mount your data from a separate RAIDZ-x pool. This might also be best for VMs, but I don't really use them, as I can run all the services I need in jails.

VMs do best on mirrors because of the speed.

Single disk (Stripe) is not advisable at all because if it fails, you’re done.
 
Joined
Apr 24, 2023
Messages
9
It depends on how much trouble and downtime you are prepared to put up with if a drive fails.

I have a similar setup in my current TrueNAS CORE system, except that both my boot and jail volumes are mirrored SATA SSDs. Having to dismantle the machine to replace a failed device is enough hassle for my taste. Having to restore the boot or jail devices from a backup is a pain too far - what if the backup turns out not to have some recent changes, or in the worst case, is not functional?

While NVMe SSDs are great, remember that each one will cost you four PCIe lanes and that using an add-in PCIe card to run the NVMe drives on requires that your motherboard and BIOS support bifurcation for the slot that you intend to use.
Thanks for your reply. You did point out something I didn't consider: PCIe lanes.

I was thinking about using this card, which can take 4 NVMe SSDs: https://www.supermicro.com/en/products/accessories/addon/AOC-SHG3-4M2P.php

But if you look at the spec, it says PCI-E x8. If fully occupied, wouldn't each NVMe SSD only have 2 lanes??

My mobo is X10-SRI-F, the spec: 2 PCI-E 3.0 x8, 1 PCI-E 3.0 x4 (in x8), 1 PCI-E 3.0 x16, 1 PCI-E 2.0 x2 (in x8), 1 PCI-E 2.0 x4 (in x8)

I already have an X540 card occupying one x8 slot, which leaves only one x8 slot free.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
But if you look at the spec, it says PCI-E x8. If fully occupied, wouldn't each NVMe SSD only have 2 lanes??

No, of course not. The purpose of including the pricey PLX switch on this card is to allow all the M.2's to share the x8. It means that you cannot use all of the potential capacity of the x4 M.2's simultaneously, but on the other hand, most of the time, you are not doing full speed parallel I/O to all your M.2's. So if M.2 #1 and M.2 #4 are both in use, they both get full bandwidth, or if #1 #3 #4 are in use, each gets about 66%.
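
If you want to sanity-check those percentages, here's a rough model of the sharing (approximate figures - assuming about 0.985 GB/s of usable bandwidth per PCIe 3.0 lane, with each busy M.2 limited by its own x4 link and by an equal share of the x8 uplink through the switch):

```python
# Approximate model of x4 M.2 drives sharing an x8 uplink behind a PLX switch.
GEN3_GBPS_PER_LANE = 0.985                  # usable GB/s per PCIe 3.0 lane
UPLINK = 8 * GEN3_GBPS_PER_LANE             # x8 slot uplink through the switch
DRIVE_LINK = 4 * GEN3_GBPS_PER_LANE         # each M.2 hangs off its own x4 link

def per_drive(active: int) -> float:
    """Sustained bandwidth per busy drive: the smaller of its x4 link
    and an equal share of the x8 uplink."""
    return min(DRIVE_LINK, UPLINK / active)

for n in (1, 2, 3, 4):
    share = per_drive(n)
    print(f"{n} busy drive(s): ~{share:.2f} GB/s each "
          f"({100 * share / DRIVE_LINK:.0f}% of a full x4 link)")
```

Three busy drives come out at roughly two thirds of a full x4 link each, which is where the "about 66%" figure comes from; four busy drives get about half.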
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Question is: should I get separate NVMe SSDs for apps and VMs? If that's the case, won't there be a minimum of 3 NVMe drives running (1 boot, 1 apps, 1 VM)? Also, is it a MUST to create mirrors for the apps and VM pools? (When I tested Scale in a VM, I tried to create a single-disk pool for apps and got a warning.) Is anyone using mirrored pools for apps/VMs?
I separate my data and VM pools for the single reason of performance. I used to have them all on the same pool and it works..... but performance was atrocious, and using the VMs made me want to shoot myself in the head, especially when I was doing some intensive file I/O in the VM (like installing some large software or copying large files). I migrated the VM pool to mirrored enterprise SATA SSDs and the VMs are actually usable, which restored my sanity.

I don't really use the apps in any production capacity, just in an experimental way. In my experience, those don't really have the same performance requirements, and so far it's been perfectly fine running them on the same pool as the data, but again, I'm not really using these in production, so take it with a grain of salt.
 
Last edited:
Joined
Apr 24, 2023
Messages
9
No, of course not. The purpose of including the pricey PLX switch on this card is to allow all the M.2's to share the x8. It means that you cannot use all of the potential capacity of the x4 M.2's simultaneously, but on the other hand, most of the time, you are not doing full speed parallel I/O to all your M.2's. So if M.2 #1 and M.2 #4 are both in use, they both get full bandwidth, or if #1 #3 #4 are in use, each gets about 66%.
Mmm...so basically each SSD is stealing bandwidth from the others....
Let's say I use the Supermicro AOC-SHG3-4M2P with 2 SSDs for a boot mirror and 2 SSDs for an apps/VM mirror. How much I/O does a boot pool actually need after the NAS is booted up? I suppose it is not much. If that's the case, would the apps/VM pool get all 8 lanes?
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Mmm...so basically each SSD is stealing bandwidth from the others....
Let's say I use the Supermicro AOC-SHG3-4M2P with 2 SSDs for a boot mirror and 2 SSDs for an apps/VM mirror. How much I/O does a boot pool actually need after the NAS is booted up? I suppose it is not much. If that's the case, would the apps/VM pool get all 8 lanes?
The boot pool doesn't require much and can even be put on a USB SSD if you want (some people do). I wouldn't use a USB stick though, since those tend to die fast in write-heavy environments (i.e. TrueNAS logging). I myself use an ancient 60 GB SATA SSD from the early days of SSDs (10 years ago), when they still made those ultra-small sizes.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Mmm...so basically each SSD is stealing bandwidth from the others....

No. In a typical system, you aren't going to be moving 30 GBytes/sec simultaneously to four SSDs. Those speed numbers you see for SSDs where you're moving "7450 MBytes/sec" are under highly optimized, fully sequential workloads that you are not likely to obtain in real life. You're much more likely to get about a third or a quarter of that.

Let's say I use the Supermicro AOC-SHG3-4M2P with 2 SSDs for a boot mirror and 2 SSDs for an apps/VM mirror. How much I/O does a boot pool actually need after the NAS is booted up? I suppose it is not much. If that's the case, would the apps/VM pool get all 8 lanes?

The boot pool needs very little. But, no, the apps/VM pool would not get all 8 lanes. That's not how this works. It's a PLX switch. Each device will get a share of the available bandwidth. Just like an ethernet switch doesn't guarantee you 1Gbps even if you have a 1Gbps port. It depends on what traffic is traversing the switch.
 

unseen

Contributor
Joined
Aug 25, 2017
Messages
103
Thanks for your reply. You did point out something I didn't consider: PCIe lanes.

I was thinking about using this card, which can take 4 NVMe SSDs: https://www.supermicro.com/en/products/accessories/addon/AOC-SHG3-4M2P.php

But if you look at the spec, it says PCI-E x8. If fully occupied, wouldn't each NVMe SSD only have 2 lanes??

My mobo is X10-SRI-F, the spec: 2 PCI-E 3.0 x8, 1 PCI-E 3.0 x4 (in x8), 1 PCI-E 3.0 x16, 1 PCI-E 2.0 x2 (in x8), 1 PCI-E 2.0 x4 (in x8)

I already have an X540 card occupying one x8 slot, which leaves only one x8 slot free.

That's an interesting way to get around the "one 4x channel per NVMe" requirement!

My NVMe card is much simpler (and cheaper) and holds only two NVMe devices.

Like "Whatteva", I'm lucky enough to have two 64GB SATA SSDs from when you could still buy drives that small. That pair is for my boot mirror and are connected to two motherboard SATA connectors.

I am using an ASRock Rack X470D4U motherboard, which has three PCIe slots with physical (electrical) sizes of x16(x16) x8(x4) and x16(x8). The first x16 slot can only be (x16) if the other x16(x8) slot is not used.

I am using the two x16 sized slots configured as (x4,x4) and (x8) to support the NVMe card in the first slot and my 9211-8i HBA in the second slot. All my spinning rust goes on the HBA so that I can fill all 8 hot-swap drive bays with disks if needed (I'm only using four at the moment with two disks on each HBA channel).

In the end though, you'll get the best performance and reliability by splitting up groups of redundant storage by function (boot pool, high speed pool, data pool) and connecting those groups to independent interfaces.
 
Joined
Apr 24, 2023
Messages
9
Thanks, everyone, for your input so far. Now I am struggling to choose between 2 Supermicro cards:
  1. AOC-SHG3-4M2P - 4 NVMe SSDs, PLX switch, no bifurcation problem
  2. AOC-SLG3-2M2 - 2 NVMe SSDs, BIOS needs bifurcation

My MB is X10-SRI-F, so I think it shouldn't have a problem with bifurcation.

But now I have another problem - the card needs to be placed next to my X540-T2 10GbE card.

The MB has 2 PCI-E x8 slots placed side by side, so both cards will be very close together. If you have a 10GbE card, you will know how HOT it is.

Also, if I choose the AOC-SHG3-4M2P, I am worried the PLX heat sink might touch the 10GbE card.

Any thoughts??
 

unseen

Contributor
Joined
Aug 25, 2017
Messages
103
But now I have another problem - the card needs to be placed next to my X540-T2 10GbE card.

The MB has 2 PCI-E x8 slots placed side by side, so both cards will be very close together. If you have a 10GbE card, you will know how HOT it is.

Also, if I choose the AOC-SHG3-4M2P, I am worried the PLX heat sink might touch the 10GbE card.

Any thoughts??

That would certainly worry me. Having the heat sink touch the back of the other card is not allowed - it could cause a short circuit! Even if there's a small gap, a very hot card next to another which can generate lots of heat sounds like a recipe for disaster.

I'm lucky enough that my two 8-lane-capable slots have a 4-lane slot between them. That allowed me to mount a 10mm x 40mm fan on the heat sink of my HBA card, and the NVMe card with its two drives is closest to the edge of the motherboard, where the 120mm exhaust fans above it seem to create enough turbulence to keep them under 35°C at all times.

Considering how much heat the HBA pushes out, I'd not feel good about having another high temperature card right next to it unless I had very good front to back air flow, or an extra internal fan blowing on both of them.
 
Joined
Apr 24, 2023
Messages
9
@unseen Thanks, so I guess I will get the AOC-SLG3-2M2 then (2x NVMe SSDs), no heat sink needed. I will probably zip-tie an 80mm fan on top of both cards to get some air going.....

BTW, today when I checked the spec of the X540-T2 NIC [1], I noticed it says: System Interface Type PCIe v2.1 (5.0 GT/s)

What is PCIe 2.1??? I thought it was either ver. 2 or 3...??

[1] https://ark.intel.com/content/www/u...thernet-converged-network-adapter-x540t2.html

Just for jokes and giggles, I even thought of using a PCIe riser ribbon cable to move the card to another slot. I would just need to change the bracket from full height to half height (so it is kinda hanging)......Do you think that will work? (:D)
 

unseen

Contributor
Joined
Aug 25, 2017
Messages
103
I think it's just a minor spec revision - it still amounts to PCIe 2, but that x8 is more than fast enough for your 10Gb Ethernet card.

Seeing as it is only PCIe 2, maybe the card will work on a PCIe extender. I'd try just about anything to avoid long-term, chronic overheating. It tends to cause those nasty little "once in a blue moon" problems that will have you cursing later on.
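
Just to put numbers on "more than fast enough" (rough figures - PCIe 2.0 runs at 5 GT/s with 8b/10b encoding, which works out to roughly 500 MB/s of usable bandwidth per lane in each direction):

```python
# Rough check: can a PCIe 2.0 x8 slot keep up with a dual-port 10GbE NIC?
PCIE2_MBPS_PER_LANE = 500                        # ~usable MB/s per PCIe 2.0 lane (5 GT/s, 8b/10b)
slot_gbit = 8 * PCIE2_MBPS_PER_LANE * 8 / 1000   # x8 slot, MB/s converted to Gbit/s

nic_gbit = 2 * 10                                # X540-T2: both 10GbE ports flat out

print(f"PCIe 2.0 x8 slot:   ~{slot_gbit:.0f} Gbit/s usable")
print(f"X540-T2 worst case:  {nic_gbit} Gbit/s")
print("plenty of headroom" if slot_gbit > nic_gbit else "the slot would be the bottleneck")
```

Roughly 32 Gbit/s of slot bandwidth against 20 Gbit/s from both ports flat out, so the NIC is never going to be choked by the slot.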
 
Joined
Apr 24, 2023
Messages
9
I think it's just a minor spec revision - it still amounts to PCIe 2, but that x8 is more than fast enough for your 10Gb Ethernet card.

Seeing as it is only PCIe 2, maybe the card will work on a PCIe extender. I'd try just about anything to avoid long-term, chronic overheating. It tends to cause those nasty little "once in a blue moon" problems that will have you cursing later on.
I just checked the manual again: only 2 of the PCIe 3.0 slots are x8 and they are next to each other; the PCIe 2.0 slots are either x2 or x4.....

No choice but to put 2 cards next to each other...........

I started this just to move from Core to Scale....now I am thinking about changing the mobo :P
 

unseen

Contributor
Joined
Aug 25, 2017
Messages
103
Well, as long as the cards don't physically touch and you can get a fan set up to give air to both of them, it would be worth a try.
The NVMe drives will report their temperature and you can always tape an insulated temperature sensor to the back of your Ethernet card to keep an eye on that.
I guess you don't really have many other options, but as long as you can add some extra airflow and keep an eye on the temperatures, you can react if there's not enough cooling.
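
If it helps, here's one way you could script a quick temperature check for the NVMe side. It's only a sketch: it assumes nvme-cli is installed and that your version's JSON smart-log output includes a "temperature" field reported in Kelvin (worth verifying on your system), and /dev/nvme0 is just a placeholder device path.

```python
# Sketch: read an NVMe drive's composite temperature by shelling out to nvme-cli.
# Assumption: the installed nvme-cli reports "temperature" (in Kelvin) in its
# JSON smart-log output - verify the field name on your version.
import json
import subprocess

def nvme_temp_celsius(device: str = "/dev/nvme0") -> float:
    """Return the drive's composite temperature in degrees Celsius."""
    out = subprocess.run(
        ["nvme", "smart-log", device, "--output-format=json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)["temperature"] - 273.15

if __name__ == "__main__":
    print(f"/dev/nvme0: {nvme_temp_celsius():.1f} °C")
```

Run it by hand while you're testing the airflow, or put it on a schedule and log the readings alongside whatever sensor you tape to the Ethernet card.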
 