Adding a L2ARC without nuking the pool

Status
Not open for further replies.

ZFS Noob

Contributor
Joined
Nov 27, 2013
Messages
129
So, I installed FreeNAS convinced I could switch from iSCSI to NFS and get decent performance, tested it, and discovered that I was very wrong unless I want to do lots and lots of tuning and testing. Which I'm not willing to do right now, though I probably will later.

So I've pulled the SLOG out of the pool and restarted the NFS performance testing (based on the estimated completion time, the SSD SLOG was making zero impact on sync write performance, interestingly). Now I'd like to add the SSD back to the pool as an L2ARC. Based on a previous discussion, the consensus seems to be that a 120 GB L2ARC in a machine with 72 GB of RAM is reasonable.

So I go to Storage, select the pool, enter the ZFS Volume Manager, select the SSD, inform the server that I'd like to add it as an L2ARC, then notice the big red warning on the button that says "existing data will be cleared."

This means that the data on the SSD will be wiped out in its migration from SLOG to L2ARC device, not that pool data will be impacted, right?

I'm just trying to make sure I understand what big red warnings mean before I push big red buttons...
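For reference, the command-line equivalent of that SLOG-to-L2ARC switch is a short sketch like the following; the pool name (tank) and device name (da6) are placeholders, and on FreeNAS the GUI is generally preferred so that devices get gptid labels:

```shell
# Remove the SSD from its dedicated log (SLOG) role.
# Pool name (tank) and device name (da6) are placeholders.
zpool remove tank da6

# Re-add the same SSD as an L2ARC ("cache") device. The
# "existing data will be cleared" warning refers only to
# this device: cache vdevs are wiped when added; pool data
# is not touched.
zpool add tank cache da6

# Confirm the layout: da6 should now appear under "cache".
zpool status tank
```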
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
This means that the data on the SSD will be wiped out in its migration from SLOG to L2ARC device, not that pool data will be impacted, right?

Assuming you didn't pick any other drives, yes.
 

yottabit

Contributor
Joined
Apr 15, 2012
Messages
192
I have an older 120 GB SSD sitting here doing nothing at the moment. It's a great drive and works perfectly, so I'm thinking of using it as L2ARC for my main pool. Looks like adding it is easy enough.

But if I decide I no longer want it there, can it be easily removed as well? (i.e., without recreating the pool, etc.)
 

ZFS Noob

Contributor
Joined
Nov 27, 2013
Messages
129
I've done that a few times now on the current release version of FreeNAS, and the pool is still there...
 

ZFS Noob

Contributor
Joined
Nov 27, 2013
Messages
129
Actually, I've removed a ZIL-dedicated drive as well, without destroying the pool.
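The CLI form of that removal, as a sketch (pool and device names are placeholders):

```shell
# Detaching a dedicated log vdev has been supported since
# ZFS pool version 19. ZFS commits any outstanding log
# records to the pool before releasing the device, so no
# offline step is needed.
zpool remove tank da6

# The device should no longer be listed under "logs".
zpool status tank
```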
 

yottabit

Contributor
Joined
Apr 15, 2012
Messages
192
Wow, that's very interesting... did you have to offline the pool first? Everything I've read seems to indicate that the ZIL becomes a member of the pool and cannot be simply removed because you could have outstanding log writes pending... which is why everyone recommends your ZIL be a mirror...
 

ZFS Noob

Contributor
Joined
Nov 27, 2013
Messages
129
The pool was reasonably idle when I did it, yes, but virtual machines were still running.

My understanding is different from yours:
  • ZFS holds all of the writes it's going to make in memory (along with other data), and flushes them to the pool periodically as a transaction group.
  • Writes that must land on stable storage are sync writes, and they can be written to a separate ZIL device (SLOG) for speed reasons.
  • Regardless, the only time the ZIL is read is after an unexpected reboot, when pending writes are replayed from the ZIL into the pool.
  • In most cases, the ZIL is never read at all.
So yeah, I might have been vulnerable to data loss if my machine had died before the next transaction was written a few seconds later, but I didn't worry about it.
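Which writes take the ZIL path at all is controlled per dataset by the sync property; a quick sketch (the dataset name is illustrative):

```shell
# Inspect how sync requests are honored for a dataset:
#   standard = honor application fsync/O_SYNC requests
#   always   = treat every write as a sync write
#   disabled = acknowledge sync writes immediately, which is
#              unsafe against exactly that power-loss window
zfs get sync tank/vms

# Example: force all writes through the ZIL
zfs set sync=always tank/vms
```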
 