Might be a stupid question but here goes.

darkcloud784

Dabbler
Joined
Feb 28, 2019
Messages
25
I have a TrueNAS 12 BETA2 CORE server with 112GB of RAM. My zpools have a total of 55TiB of usable space. I've noticed that since updating to TrueNAS, ZFS has not been using all the extra RAM. Under normal circumstances I'd say this is a good thing, but considering ZFS normally benefits from more RAM, I assumed this might be a problem. In total, TrueNAS only seems to be using half my available RAM (56GB). This is with 3 jails running on the server and autotune enabled with vfs.zfs.arc_max set to 108171000000. Isn't ZFS supposed to use the rest of the RAM?
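For reference, a rough way to compare the live ARC size against the configured ceiling from a shell on the box (just a sketch; the kstat sysctl names below are what FreeBSD/TrueNAS CORE exposes on my system, adjust if yours differ):

Code:
# Sketch: compare the current ARC size against its high-water mark (values are in bytes).
sysctl kstat.zfs.misc.arcstats.size     # current ARC size
sysctl kstat.zfs.misc.arcstats.c_max    # ARC max (high water)
sysctl vfs.zfs.arc_max                  # the tunable as currently set (0 = auto)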
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
Isn't ZFS supposed to use the rest of the RAM?
ZFS will cache up to the limit that you've set... but remember that the cache is cleared with every reboot, so you need to access enough content to re-fill it before it will grow to "full size" again. Are you certain that you've moved enough (unique) data around to take you past the number you're seeing?
 

darkcloud784

Dabbler
Joined
Feb 28, 2019
Messages
25
ZFS will cache up to the limit that you've set... but remember that the cache is cleared with every reboot, so you need to access enough content to re-fill it before it will grow to "full size" again. Are you certain that you've moved enough (unique) data around to take you past the number you're seeing?

I cannot say for certain if this is the case, but I suppose my understanding of how ARC is allocated was incorrect. I thought ZFS would use ARC for building cache as well as indexing and metadata on all data, not just what has been moved.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Check your arc.max. There was a behavior in 12 at one point where arc.max would be set to no more than 50% of RAM, a behavior that came in with the Linux code. I'm not sure whether the fix is in BETA2; I haven't tested it. Run zfs-stats -A and look for "ARC Size", then "Max Size". When in doubt, set a vfs.zfs.arc.max loader tunable. The tunable understands "G", so you can set it to something like "106G" if you run no jails or VMs; otherwise set it a little lower to leave room for those.

Edit: vfs.zfs.arc_max is an 11.3 tunable, the corresponding 12.0 tunable is vfs.zfs.arc.max.
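A minimal sketch of the above, assuming shell access on the TrueNAS box (the 106G value is only an example, not a recommendation):

Code:
# Check where the ARC ceiling actually sits right now.
zfs-stats -A                            # look under "ARC Size" for "Max Size (High Water, c_max)"

# If it's pinned at ~50% of RAM, set the 12.0 loader tunable -- in the GUI that's
# System -> Tunables -> Add, Type "loader", Variable vfs.zfs.arc.max, Value e.g. 106G.
# The equivalent loader.conf syntax would be (example value; leave headroom for jails/VMs):
vfs.zfs.arc.max="106G"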
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I thought ZFS would use ARC for building cache as well as indexing and metadata on all data, not just what has been moved.
Yes, but it will only put data in ARC if it's been read at least once. The first read after a reboot/pool import always hits the disks, and with compression in play it's entirely possible you haven't requested more than 56G from your vdevs.

Check the additional tunables as @Yorick suggested, though; the move to the new system brought some differently named tunables.
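If you want to see the ARC fill up, one rough way is to read a dataset once after boot so ZFS actually has something to cache (sketch only; /mnt/tank/media is a made-up path, substitute one of your own datasets):

Code:
# Walk a dataset and read every file once; the ARC fills as the reads come back from disk.
find /mnt/tank/media -type f -exec cat {} + > /dev/null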
 

darkcloud784

Dabbler
Joined
Feb 28, 2019
Messages
25
Check your arc.max. There was a behavior in 12 at one point where arc.max would be set to no more than 50% of RAM, a behavior that came in with the Linux code. I'm not sure whether the fix is in BETA2; I haven't tested it. Run zfs-stats -A and look for "ARC Size", then "Max Size". When in doubt, set a vfs.zfs.arc.max loader tunable. The tunable understands "G", so you can set it to something like "106G" if you run no jails or VMs; otherwise set it a little lower to leave room for those.

Edit: vfs.zfs.arc_max is an 11.3 tunable, the corresponding 12.0 tunable is vfs.zfs.arc.max.


Looks like it should be set to 103GB, but the tunable shows 108 (unless I'm doing my math wrong).

Code:
ARC Misc:
        Deleted:                                54642509
        Recycle Misses:                         0
        Mutex Misses:                           1391
        Evict Skips:                            1391

ARC Size:
        Current Size (arcsize):         51.00%  52618.23M
        Target Size (Adaptive, c):      51.36%  52983.92M
        Min Size (Hard Limit, c_min):   3.47%   3581.96M
        Max Size (High Water, c_max):   ~28:1   103159.90M

ARC Size Breakdown:
        Recently Used Cache Size (p):   84.45%  44748.20M
        Freq. Used Cache Size (c-p):    15.54%  8235.71M

ARC Hash Breakdown:
        Elements Max:                           5743902
        Elements Current:               99.50%  5715183
        Collisions:                             10918676
        Chain Max:                              0
        Chains:                                 777463

ARC Eviction Statistics:
        Evicts Total:                           6591151047680
        Evicts Eligible for L2:         99.99%  6591129877504
        Evicts Ineligible for L2:       0.00%   21170176
        Evicts Cached to L2:                    871453353472

ARC Efficiency
        Cache Access Total:                     1041453441
        Cache Hit Ratio:                96.84%  1008583824
        Cache Miss Ratio:               3.15%   32869617
        Actual Hit Ratio:               95.52%  994817371

        Data Demand Efficiency:         98.96%
        Data Prefetch Efficiency:       32.24%

        CACHE HITS BY CACHE LIST:
          Anonymously Used:             1.11%   11257302
          Most Recently Used (mru):     6.31%   63689347
          Most Frequently Used (mfu):   92.32%  931128024
          MRU Ghost (mru_ghost):        0.16%   1686632
          MFU Ghost (mfu_ghost):        0.08%   822519

        CACHE HITS BY DATA TYPE:
          Demand Data:                  10.06%  101544625
          Prefetch Data:                1.46%   14774288
          Demand Metadata:              88.45%  892126512
          Prefetch Metadata:            0.01%   138399

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  3.21%   1058100
          Prefetch Data:                94.44%  31042837
          Demand Metadata:              1.66%   548367
          Prefetch Metadata:            0.67%   220313
------------------------------------------------------------


 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
zfs-stats shows a max of 103,159 MiB, which is about 100 GiB. Keep in mind computers count RAM and storage in steps of 1,024. Marketers count RAM in steps of 1,024 as well, but disk sizes in steps of 1,000.

Hence “geeks”, finally fed up, abandoned the long-standing tradition that one MB is 1,024 KB and one KB is 1,024 bytes, changed the sound a little and inserted a letter: if you mean the technical sizes, not the marketing ones, it’s now KiB, MiB, GiB, TiB, PiB, and so on. EiB? ;)
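For the numbers earlier in the thread (plain arithmetic, nothing else assumed): the tunable is set in bytes while zfs-stats reports MiB, so 108,171,000,000 and 103,159.90M describe the same limit.

Code:
echo "108171000000 / 1024^2" | bc -l    # ≈ 103159.9  -> the MiB figure zfs-stats shows
echo "108171000000 / 1024^3" | bc -l    # ≈ 100.7     -> the same limit expressed in GiB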
 