SOLVED 100 TB build, 128 GB RAM necessary?

Mike77

Contributor
Joined
Nov 15, 2014
Messages
193
Hi,

I remember that there used to be a rule of thumb that you needed 1 GB of memory per TB of storage. Now I was wondering if this still applies to a new machine with 100 TB.

I'd like to know, because I'll probably have to decide between a 64 GB system, a 128 GB system, or something larger.
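
Back-of-the-envelope, that old rule lands this build right between my two options:

```python
# Rough arithmetic for the old "1 GB of RAM per 1 TB of storage" rule.
storage_tb = 100                    # planned capacity of the new build
rule_of_thumb_gb = storage_tb * 1   # 1 GB RAM per TB storage

print(f"{storage_tb} TB -> ~{rule_of_thumb_gb} GB RAM by the old rule")
# 100 GB lands squarely between a 64 GB and a 128 GB build, hence the question.
```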
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I remember that there used to be a rule of thumb that you needed 1 GB of memory per TB of storage
It was always intended to be a very loose rule of thumb (as @jgreco, who IIRC created it, can attest), especially past around 16 GB of RAM. With larger amounts of RAM, the question really becomes, what do you intend to do with the system?
 

Mike77

Contributor
Joined
Nov 15, 2014
Messages
193
danb35, thanks for the quick reply.

To answer your question/remark: "Serve files."

And I guess this is one of those things where I can use more hardware and do more with it, or use less and just use it as a JBOD.

So it all depends on the hardware I have to put into it to build a large and fast file server.

It needs to be able to serve files to about 10 clients simultaneously at about 100 Mb per second each. So maybe I should use a SLOG (separate ZIL device) and an L2ARC, and therefore more memory.
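
Quick math on that target (assuming I mean megabits per client; if it's megabytes, multiply by 8):

```python
# Aggregate throughput for ~10 simultaneous clients at ~100 Mb/s each.
clients = 10
per_client_mbit = 100          # megabits per second, per client

total_mbit = clients * per_client_mbit
total_mbyte = total_mbit / 8   # bits -> bytes

print(f"Aggregate: {total_mbit} Mb/s (~{total_mbyte:.0f} MB/s)")
# ~1 Gb/s total, i.e. roughly one saturated gigabit NIC; that is a modest
# target for a pool of spinning disks with a reasonable amount of ARC.
```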

But what does a 100 TB ZFS system need to work without problems?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Serve files over SMB? It will likely "work without problems" with 64 GB, and probably even less. But more RAM means more caching, and yes, L2ARC would probably be helpful--SLOG not so much.
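
One caveat if you do add L2ARC: each record cached there costs a bit of ARC RAM for its header, so a huge L2ARC on a small-RAM box can backfire. A rough sketch of that overhead (the bytes-per-header figure is an assumption; it has varied across OpenZFS versions):

```python
# Ballpark the ARC RAM consumed by L2ARC headers.
# ASSUMPTION: ~96 bytes of ARC header per L2ARC record; the real figure
# varies by OpenZFS version (figures from roughly 70 to 180 bytes have
# been quoted over the years), so treat this as order-of-magnitude only.
l2arc_size_gb = 512    # hypothetical L2ARC device size
avg_record_kb = 128    # default recordsize; smaller records cost more RAM
header_bytes = 96

records = l2arc_size_gb * 1024**2 / avg_record_kb   # device KB / record KB
overhead_mb = records * header_bytes / 1024**2

print(f"~{overhead_mb:.0f} MB of ARC spent on L2ARC headers")
# 512 GB of 128K records -> a few hundred MB; the same device holding
# 8K records would cost 16x the RAM.
```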
 

Mike77

Contributor
Joined
Nov 15, 2014
Messages
193
danb35, thanks for your reply.

That gives me a lot more options (like an i3 system). I thought that I needed at least 128 GB. It's good to know that 64 GB or even less is enough for a simple JBOD.
 
Last edited:

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
I am running around 96 TB net capacity (around 25 TB used) in RAIDZ2 (8*16 TB) with 64 GB. Largely for big media files, source code, and office documents. So very low requirements in terms of IOPS.
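
For reference, that net figure is just the RAIDZ2 arithmetic:

```python
# Net capacity of a RAIDZ2 vdev: two disks' worth of parity off the top.
disks = 8
disk_tb = 16
parity = 2   # RAIDZ2

net_tb = (disks - parity) * disk_tb
print(f"{disks} x {disk_tb} TB RAIDZ2 -> {net_tb} TB net")
# -> 96 TB, matching the figure above; real usable space is somewhat
# less after metadata, padding, and slop space.
```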
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It was always intended to be a very loose rule of thumb (as @jgreco, who IIRC created it, can attest),

I certainly did not create the root of that rule; it came from earlier ZFS.

However, I substantially expanded and loosened it and have spent a huge amount of time explaining it over the years. The "rule" only exists to get people in the right ballpark. There's nothing specific about ZFS where it will start drowning puppies if you don't have 1GB of memory per 1TB, but if you try to run a 30TB pool with only 8GB of RAM, you'll find that there's a lot of memory contention between the ARC, which is desperately trying to keep pool metadata (free space and so on) in core, and the middleware, which is trying not to be swapped out, and performance tanks. (We also saw corruption and kernel panic issues early on with severe mismatches, especially on 4GB and 6GB systems.)

But if you give that 30TB of storage 32GB of RAM, what does the "30TB" mean? Does it mean 6 x 5TB HDD (30TB of raw space)? Does it mean 8 x 5TB HDD in RAIDZ2 (30TB of pool space)? Does it mean 30TB == 80% of a 38TB pool? Does it mean 30TB worth of stored data on a 100TB pool?

Because it could plausibly mean any of those things, but some of those will be more performant than others, and once you're out at 32GB of RAM, all of those are likely to be workable for at least some workloads. For enterprise use, where someone with a checkbook simply needed to know what to buy, I always encouraged them to use the raw disk interpretation, because that would give them the best experience, while budget hobbyists were told that they could take a less expensive option if they needed to. This overall description and explainer is definitely something I created.
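
To make the ambiguity concrete, here's roughly what the rule spits out under each reading (a sketch only; the 1GB-per-TB figure is the loose ballpark, not a requirement):

```python
# What "30TB" can mean, and the RAM each reading implies at 1 GB per TB.
GB_PER_TB = 1   # the loose ballpark under discussion, not a requirement

scenarios = {
    "6 x 5TB HDD, raw":             6 * 5,   # 30 TB of raw disk
    "8 x 5TB HDD in RAIDZ2, raw":   8 * 5,   # 30 TB pool, 40 TB raw
    "30TB == 80% of a 38TB pool":   38,      # sized by the pool
    "30TB stored on a 100TB pool":  100,     # sized by the whole pool
}

for reading, tb in scenarios.items():
    print(f"{reading:30s} -> ~{tb * GB_PER_TB} GB RAM")
# The raw-disk reading always produces the largest (safest) number,
# which is why that's the one I pushed on enterprise buyers.
```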
 

Mike77

Contributor
Joined
Nov 15, 2014
Messages
193
Thanks, guys,

It's clear to me now, as far as a future new 100 TB server goes.

But it's still a good question how low you can go on RAM. If I could use a 32 GB system to run two 6 x 10 TB pools as a backup-only server, it might even be worth buying a new Supermicro X10SL7-F to replace my current broken system.

That would be great. I'm still trying to repair it. That would be a use case for the 32 GB of ECC and the Xeon.
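
A quick sanity check on how far under the rule that would be:

```python
# Two 6 x 10 TB pools on a 32 GB board versus the 1 GB/TB ballpark.
pools = 2
disks_per_pool = 6
disk_tb = 10
ram_gb = 32   # the X10SL7-F tops out at 32 GB of ECC UDIMM

raw_tb = pools * disks_per_pool * disk_tb
print(f"{raw_tb} TB raw on {ram_gb} GB RAM -> {raw_tb / ram_gb:.2f} TB per GB")
# Almost 4x past the old rule, but for a backup-only box with few
# clients and mostly sequential writes it might still be workable.
```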

(Edit: all kinds of spelling mistakes and typos)
 
Last edited:

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well, we may(?) have left the days of RAM starvation leading to pool corruption behind us, but there's still a strong case to be made that being short of ARC, especially for metadata purposes, significantly degrades the performance of your typical pool.
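
If you want to see whether your ARC is keeping up, the kstat counters tell the story. A minimal sketch, assuming OpenZFS on Linux (on FreeBSD/CORE the same counters live under sysctl kstat.zfs instead):

```python
# Dump a few ARC counters from the OpenZFS kstats.
# ASSUMPTION: ZFS on Linux, where the kstats live in procfs; on
# FreeBSD/CORE the same counters are under sysctl kstat.zfs instead.
ARCSTATS = "/proc/spl/kstat/zfs/arcstats"

stats = {}
with open(ARCSTATS) as f:
    for line in f.readlines()[2:]:      # skip the two kstat header lines
        name, _kind, value = line.split()
        stats[name] = int(value)

hits, misses = stats["hits"], stats["misses"]
print(f"ARC size: {stats['size'] / 1024**3:.1f} GiB "
      f"(max {stats['c_max'] / 1024**3:.1f} GiB)")
print(f"Hit rate: {hits / (hits + misses):.1%}")
# A chronically poor hit rate on a busy pool is the modern symptom of
# the ARC starvation described above.
```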
 