Formula for size of L2ARC needed


Thomymaster

Contributor
Hi


I am searching for the formula on how to size the L2ARC, but I haven't found anything yet.
Another question: I know that the index for the L2ARC is stored in RAM. How much RAM do I then need (it is only a theoretical scenario) per GB of L2ARC?


Best,
Thomas
 

jgreco

Resident Grinch
5x to 10x the ARC size, generally. The more L2ARC you add, the more pressure there is on the ARC, so just going for massive amounts of L2ARC is problematic in practice.
 

Thomymaster

Contributor
Sorry, but why is the ARC pressured then? From what I know, the index of the L2ARC is stored in free RAM and not explicitly in the ARC (which is in RAM as well).

So for 1GB of L2ARC I need 5GB of RAM?
 

jgreco

Resident Grinch
Sorry, that's just all wrong. The index for the L2ARC is stored in the ARC, in the portion normally reserved for metadata (1/4 of the ARC by default, but tunable). Ideally you don't want to pressure the metadata area too much; you can increase its size, but then you're pressuring the regular ARC instead.

Per 5GB of L2ARC you're going to want about 1GB of ARC, though that is merely a loose guideline that throws together a bunch of assumptions, such as a pool with a fairly random distribution of block sizes and not having puttered with the ARC metadata limit. You do not want to pass 10GB of L2ARC per 1GB of ARC without tuning and knowing what you are doing.

Note that if you have less than 16GB of RAM your first thing should be to upgrade your RAM to 32GB, then you can probably toss a 120GB L2ARC device on and be just fine.
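
To make that arithmetic concrete, here is a back-of-the-envelope sketch in Python. The ~70-bytes-per-cached-record header cost and the example sizes are my own assumptions (the per-record overhead varies across ZFS versions), not figures from this thread:

# Back-of-the-envelope L2ARC sizing check. ASSUMPTIONS: ~70 bytes of ARC
# metadata per cached L2ARC record (version-dependent) and the default
# metadata cap of 1/4 of the ARC, per the post above.

def l2arc_index_bytes(l2arc_bytes, avg_record_bytes, header_bytes=70):
    """RAM consumed in the ARC metadata area by the L2ARC index."""
    return (l2arc_bytes // avg_record_bytes) * header_bytes

ram      = 32 * 2**30   # 32 GB system; the ARC can grow to most of this
arc_meta = ram // 4     # default metadata cap: 1/4 of the ARC
l2arc    = 120 * 2**30  # proposed 120 GB cache device

# Small records are the worst case, full 128K records the best case.
for recsize in (8 * 2**10, 128 * 2**10):
    idx = l2arc_index_bytes(l2arc, recsize)
    print(f"recordsize {recsize // 2**10:>3}K -> index ~{idx / 2**20:5.0f} MiB "
          f"({idx / arc_meta:.0%} of the default metadata cap)")

# jgreco's rule of thumb: stay at or below 5x-10x the ARC size.
assert l2arc <= 10 * ram, "over 10:1 L2ARC:ARC -- tune before doing this"

Plug in your own RAM and recordsize; if the index approaches the metadata cap, shrink the device or raise the cap (on FreeBSD that's the vfs.zfs.arc_meta_limit tunable).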
 

Thomymaster

Contributor
OK, thanks. Like I said, this was only a theoretical question; if I ever run into the need of attaching an L2ARC I'll use the formula :)
 

mka

Contributor
I've just upgraded to 32GiB of ECC RAM (if 16GiB ECC modules were available and verified I would have gone to 48GiB) and have one spare Crucial M4 128GB SSD that is collecting dust.

I just read on a different "best practice" blog that the ARC/L2ARC ratio should be 1:4, which would be just about right here. Is the 5x-10x ratio mentioned above just for the index data, not the overall size layout?

When L2ARC compression arrives for FreeBSD 10, will this formula change?
 

cyberjock

Inactive Account
mka: Yeah, depending on various factors you'll see 1:3 up to 1:10. A few things make me twitch when I read your post...

1. 32GB of RAM. As a general rule, you shouldn't be adding an L2ARC until you have 64GB of RAM. This is just an observation, but more often than not there are other reasons not to use an L2ARC when you have less than this amount of RAM.
2. Adding hardware "because it's collecting dust" is NOT a scientific way to decide you need to add an L2ARC. Not even close. Adding hardware because you can is a totally wrong way to approach ZFS, and some people have paid for it dearly.. with their pools. You can actually add an L2ARC and see performance decrease.
3. If you are a home user and don't have high random I/O (we're talking random reads from the pool, not streaming movies) where the same data is being read at least 5 times within a few hours, you will not benefit from an L2ARC. At all. The L2ARC is NOT a read-ahead cache. It caches frequently used data. That's it.

Remember, ZFS is storing your precious data, and you shouldn't be keen on adding more complexity to it unless you can actually benefit. Just from what I've read, you're not searching for reasons that you need to add an L2ARC. You're looking to make sure you can add it so that you can have it. Playing with ZFS for epeen is a dangerous game of Russian roulette with your data.

Good luck.
 

jgreco

Resident Grinch
Have you seen an improvement in read throughput yet? :)

Unless maybe you've got 10GbE, you will not see an improvement in read throughput. A single SATA disk should be able to saturate a 1GbE link effectively.
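
A quick sanity check on that claim (the disk rate below is an illustrative assumption, not a measurement):

# 1GbE tops out near its wire speed; a single modern SATA disk can stream
# faster than that, so sequential reads are already link-limited.
gbe_mib_s  = 1e9 / 8 / 2**20   # ~119 MiB/s of 1GbE payload, before overhead
disk_mib_s = 150               # assumed sequential rate of a single disk
print(f"1GbE ~{gbe_mib_s:.0f} MiB/s vs one SATA disk ~{disk_mib_s} MiB/s")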

L2ARC is basically about reducing latency, which in this case means disk seeks.

First, L2ARC tends to avoid caching long stretches of sequentially accessed data. Those can be pulled from a pool fairly efficiently so that makes sense.

Next, a single seek operation really shouldn't take long anyway, unless you're already severely in pain (it takes several seconds or more to open a short file). That's probably the situation where a user would actually notice. But a lot of the time, an L2ARC will start working and the effect is more or less gradual in nature. As the pool gets busier, the L2ARC shoulders more of the seek load. If you're measuring latency, you should see that. But throughput, no, that's probably the wrong question.
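
To illustrate the latency point with a toy model (the device latencies and hit ratios are numbers I'm assuming for illustration):

# Expected per-read latency as the L2ARC absorbs a share of ARC misses.
hdd_seek_ms = 8.0    # assumed 7200rpm random read
ssd_read_ms = 0.2    # assumed SATA SSD random read
for l2_hit in (0.0, 0.3, 0.6):
    avg_ms = l2_hit * ssd_read_ms + (1 - l2_hit) * hdd_seek_ms
    print(f"L2ARC absorbs {l2_hit:.0%} of misses -> ~{avg_ms:.1f} ms per read")

Throughput barely moves in a model like this, but the tail of slow, seek-bound reads shrinks, and that is what users actually feel.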

I just read on a different "best practice" blog that the ARC/L2ARC ratio should be 1:4, which would be just about right here. Is the 5x-10x ratio mentioned above just for the index data, not the overall size layout?

When L2ARC compression arrives for FreeBSD 10, will this formula change?

L2ARC sizing is affected by many factors. Different people have different rules of thumb based on pool defaults, intended use, etc. It is actually best to put in a small L2ARC, look at the effect on the workload and the ARC/L2ARC stats, then maybe do a second round and adjust again. But knowing that users don't want to do that, most people agree that 1:4 or 1:5 ought to be "safe" as long as you have sufficient RAM (32GB+). cyberjock will pipe in momentarily with his opinion that you should have a minimum of 64GB. And that's okay too, because different uses, different experiences, etc. I've got a 60GB L2ARC on a 16GB MicroServer N36L that was doing dedup.
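
For that "measure, then adjust" loop, a minimal sketch like this works on a FreeBSD/FreeNAS host. It assumes the usual counter names under kstat.zfs.misc.arcstats; the exact counter set varies by version, so verify them on your box first:

import subprocess

def kstat(name):
    # Read one ZFS counter via sysctl(8); assumes FreeBSD naming.
    out = subprocess.check_output(
        ["sysctl", "-n", f"kstat.zfs.misc.arcstats.{name}"])
    return int(out.decode())

arc_hits, arc_misses = kstat("hits"), kstat("misses")
l2_hits, l2_misses = kstat("l2_hits"), kstat("l2_misses")

arc_ratio = arc_hits / (arc_hits + arc_misses)
l2_total = l2_hits + l2_misses
l2_ratio = l2_hits / l2_total if l2_total else 0.0
print(f"ARC hit ratio:   {arc_ratio:.1%}")
print(f"L2ARC hit ratio: {l2_ratio:.1%}  (share of ARC misses rescued)")

If the L2ARC hit ratio stays near zero under the real workload, a bigger cache device won't help; put the money into RAM instead.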
 

mka

Contributor
Have you seen an improvement in read throughput yet? :)

What do you mean? I haven't added an L2ARC device yet, I'm just considering it. I haven't even installed the additional 16GB of ECC RAM I ordered; I will probably open the package today or tomorrow. :)

The main use case I'm considering this for is my photo library. I'm a (part-time) photographer and am currently testing with my main photo library residing directly on a FreeNAS network share, with hourly snapshots kept for 2 weeks and daily off-site backups. Before, I kept the files local and backed up daily to FreeNAS.

400,000 (mostly RAW) photos and 3TB of data, and the tag and search operations do stress that device. The remaining 10TB won't benefit much from an L2ARC anyway, since it's not frequently accessed, in contrast to the photo files.
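
Rough math for that working set (reusing the assumed ~70-byte header from earlier, plus the 128K recordsize cap I'm assuming as the pool default):

photos      = 400_000
data_bytes  = 3 * 2**40            # ~3 TB of photos
l2arc_bytes = 128 * 2**30          # the spare Crucial M4 128GB

avg_record = data_bytes // photos  # big RAW files -> full-size records
recsize = min(avg_record, 128 * 2**10)
index = (l2arc_bytes // recsize) * 70
print(f"avg file ~{avg_record / 2**20:.0f} MiB -> "
      f"index ~{index / 2**20:.0f} MiB of ARC metadata")

Large files mean full-size records, so the index cost is tiny even on 32GB of RAM; the real question is whether the tag/search reads repeat often enough to stay cached.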

I currently cannot upgrade beyond 32GB of RAM, and I have not decided about an L2ARC device yet... or even about keeping my library on that network share instead of working locally. That's why I'm asking: I currently have no other use for that SSD and am trying to find the best solution. I know it's not a read-ahead cache, but this is a possible gap to fill, maybe enough to speed up the workflow so the setup makes sense in the first place, or I'll drop the idea altogether. Because currently the operational feel of the local drive is still superior.

Unless maybe you've got 10GbE, you will not see an improvement in read throughput.
I'm actually considering this... but probably not in the next 6 months. ATM I have too little experience with it and the cards are still quite expensive.
 

cyberjock

Inactive Account
Your local drive will ALWAYS feel superior for large numbers of small files. That has to do with the limitations of network file protocols. It sucks, but "that's a fact, Jack". I have a directory full of my unorganized pictures from when I was overseas. I haven't organized it because trying to deal with that directory over a network share just sucks.
 