The dashboard now counts all jails and VMs as a "service" as well, which does make sense.
You can:
- Enable SSH service and root login for it
- Log in via SSH and make the terminal as tall as your screen will support
- Run `top -a`
- Hit the letter `o` and then type in `size`
That'll give you a quick overview of everything taking up memory in your system.
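If you'd rather skip the interactive step, FreeBSD's top can also take the sort field directly on the command line with `-o`:

```
# same view in one step: full command lines, sorted by memory size
top -a -o size
```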
In my case, for example, Logitech Media Server likes to eat between 2 and 5 GiB by itself, followed by traefik (a jail) and a few instances of node (also jails).
If I shut all jails down, it still uses 6.1 GiB for services. A good chunk of that is in Python 3.8 (middlewared) and smbd. Middleware "eats" 2.2 GiB on my system, across a few instances of Python 3.8.
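A rough way to tally that up, assuming middlewared's workers all show up as python3.8 processes (the pattern is just a sketch, adjust to taste):

```
# sum the resident set size (RSS, in KiB) of all python3.8 processes;
# the [p] keeps the awk process itself from matching its own command line
ps ax -o rss,command | awk '/[p]ython3\.8/ { sum += $1 } END { printf "%.1f GiB\n", sum / 1024^2 }'
```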
Hmm. I just booted TrueNAS fresh, and "services" shows 4.4 GiB when all jails are down. top shows about the same per-process memory usage as when it's been running for a while, but also shows 2139M Active, 624M Inactive, 1494M Wired.
Which means the FreeBSD kernel starts munching on a bit more as time progresses. Or maybe Inactive goes up; I'll need to watch that.
- Active: Memory currently in use by a process.
- Inactive: Memory that has been freed but is still cached, since it may be used again. If more free memory is required, this memory can be cleared and become free. It is not cleared before it is needed because "free memory is wasted memory": it costs nothing to keep the old data around in case it is needed again.
- Wired: Memory in use by the kernel. This memory cannot be swapped out.
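Those three numbers come straight from the kernel's page counters, which you can query with sysctl (values are in pages, not bytes):

```
# the raw counters behind top's Active/Inactive/Wired summary line
sysctl vm.stats.vm.v_active_count vm.stats.vm.v_inactive_count vm.stats.vm.v_wire_count
# multiply by the page size to get bytes
sysctl vm.stats.vm.v_page_size
```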
Note "Inactive" appears to be counted by the dashboard, but is really available in a pinch. I can see an argument for showing it as part of "Services" in the dashboard, or not to do that.
Edit: Nope, I don't know how the dashboard arrives at its services number. It's not just Active + Inactive + Wired. It could be Active + Inactive + Wired - ARC; that gets me into the ballpark, keeping in mind that top shows Wired and ARC rounded to whole "G" values, which is not very precise.
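If you want more precision than top's rounded output, here's a little sketch that computes that guess from the raw sysctl counters. To be clear, the formula is only my speculation about what the dashboard might do:

```
#!/bin/sh
# ballpark "Services" guess: (Active + Inactive + Wired) - ARC
pagesz=$(sysctl -n vm.stats.vm.v_page_size)    # bytes per page
act=$(sysctl -n vm.stats.vm.v_active_count)    # pages
inact=$(sysctl -n vm.stats.vm.v_inactive_count)
wired=$(sysctl -n vm.stats.vm.v_wire_count)
arc=$(sysctl -n kstat.zfs.misc.arcstats.size)  # ARC size, already in bytes
echo "$pagesz $act $inact $wired $arc" | awk '{ printf "%.2f GiB\n", (($2 + $3 + $4) * $1 - $5) / 1024^3 }'
```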
On the gripping hand: You have 32 GiB and a small-ish pool. I'd be shocked if you were running into the limits of RAM with ARC. I've set arc.max to 24 GiB, for example, so I have room to play with VMs, and it doesn't "crimp" ARC in any way.
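For reference, on OpenZFS 2.x (TrueNAS CORE 12 and later) the cap is the vfs.zfs.arc.max sysctl; older releases spell it vfs.zfs.arc_max. On TrueNAS you'd normally persist it as a sysctl Tunable under System → Tunables rather than setting it from the shell:

```
# cap ARC at 24 GiB (value is in bytes: 24 * 1024^3)
sysctl vfs.zfs.arc.max=25769803776
```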