Multi-Tier Storage NAS Configuration?

patrickjp93

Dabbler
Joined
Jan 3, 2020
Messages
48
I have a pool of 8 users who work together as a software development operation. At any one time, they don't really use more than 30GB of space, but they have a lot of shared libraries and assets that they'd like to host in an on-site Nexus or Artifactory deployment, preferably on this NAS.

Their build environment would ideally be on flash storage while the shared assets sit on HDDs. Current production profile builds on spinning rust are taking 10 minutes or more, and a POC on a Samsung 850 EVO reduced this to just 48 seconds. However, they'd want the codebase history (git commits) to live on less expensive storage.

So essentially it's a write-through cache setup, but the difficulty comes in slicing up the cache. It would be programmatically simple to give each user an individual 64GB physical SSD, but they don't have the budget for a big 4U rack server. They're looking at six 1TB HDDs for cool storage and, ideally, two 256GB SSDs.

So I guess my question is this: can you configure the L2ARC in FreeNAS to do this slicing on a per-user basis, and if so, which section of the documentation would be applicable?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
So I guess my question is this: can you configure the L2ARC in FreeNAS to do this slicing on a per-user basis, or which section of the documentation would be applicable?
This is absolutely not possible at this time. When I proposed such a concept (without the per-user options), it had little support, and it would take changes to FreeBSD and OpenZFS to make it happen.

You may find that you get the performance you want by using SLOG and a lot of RAM.
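
As a rough sketch of what that looks like (the pool name tank and device names da6/da7 are placeholders, not your actual hardware), and bearing in mind a SLOG only accelerates synchronous writes such as NFS or database traffic:

# Mirrored SLOG so sync writes are acknowledged from flash
zpool add tank log mirror da6 da7

# Or a shared L2ARC read cache for the whole pool
zpool add tank cache da6

# Verify the resulting layout
zpool status tank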

 

JaimieV

Guru
Joined
Oct 12, 2012
Messages
742
1TB SSDs are currently under £100 (Samsung 860 QVO, Crucial BX500). Why not just go SSD throughout?
 

JaimieV

Guru
Joined
Oct 12, 2012
Messages
742
I wouldn't go with QLC,
Good catch - I didn't check what QVO stood for. Non-QLC (TLC) SSDs are only ~25% more. The upgrade price from 1TB HDDs is negligible vs the time saved in programmer wages either way, plus you wouldn't need the two extra 250GB SSDs. Just go for it Patrick.
 

patrickjp93

Dabbler
Joined
Jan 3, 2020
Messages
48
Good catch - I didn't check what QVO stood for. Non-QLC (TLC) SSDs are only ~25% more. The upgrade price from 1TB HDDs is negligible vs the time saved in programmer wages either way, plus you wouldn't need the two extra 250GB SSDs. Just go for it Patrick.
Sorry my reply was so delayed. The problem is they need their individual active work areas locked away out of reach of each other, which is easy to do with Citrix, VMware, Windows Server VMs and NFS over GlusterFS on a Linux host, and it's a ridiculously common corporate solution for big shared network storage. Have the FreeBSD and ZFS maintainers not figured this out (or decided it's just not worthwhile)? That's honestly mind-blowing.

They have an on-site BitBucket and an on-site Nexus they'd like to effectively consolidate into one storage array alongside their active work area. In active use they probably only touch 10-100GB at a time, but they do have several TB of legacy data they have to keep around.

Either way, this seems to be barking up the wrong tree. I think I'm going to have to point them to UnRaid and BTRFS if they want to keep this on-site. Otherwise, I think remote VMs and object storage, something like S3 or Manta, would be the best fit for them.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
1TB SSDs are currently under £100 (Samsung 860 QVO, Crucial BX500). Why not just go SSD throughout?
My system has 140TB... I don't have room for 140 SSDs.
 

klatoszy

Dabbler
Joined
Feb 13, 2020
Messages
13
You can simply have two pools, one SSD-only and one HDD-only, and then distribute your data between them according to your needs.
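
For example (purely illustrative device names, assuming the six HDDs show up as da0-da5 and the two SSDs as ada0/ada1):

# Fast pool on mirrored SSDs for the active build area
zpool create fast mirror ada0 ada1
zfs create -o compression=lz4 fast/builds

# Cold pool on the HDDs for artifacts and git history
zpool create cold raidz2 da0 da1 da2 da3 da4 da5
zfs create cold/artifacts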

OpenZFS already has a feature called Allocation Classes. There is a good read about it here:
ZFS special-vdev

Hopefully this will also be included in FreeNAS in the not-too-distant future.
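
For reference, in upstream OpenZFS it looks roughly like this (again with placeholder device names, and not available in FreeNAS as of this writing):

# One pool: HDDs for bulk data, a mirrored SSD "special" vdev for metadata and small blocks
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 special mirror ada0 ada1

# Optionally steer small records of a dataset onto the special vdev
zfs set special_small_blocks=32K tank/builds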
 

JaimieV

Guru
Joined
Oct 12, 2012
Messages
742
My system has 140TB... I don't have room for 140 SSDs.
Well sure, this was replying to Patrick.

For whom: Individual work areas are entirely feasible on FreeNAS, using either separate shares or separate ownerships of folders in the same share, the same way any other NAS/SAN would manage it. With ZFS you also get some extra protections: you can not only gate areas by permissions but also set up each person's space as a separate dataset with its own quota, compression settings and other features, to get the most out of different working patterns.
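
Something like this, say (user names and the quota value are placeholders, tune to taste):

# One dataset per developer, each with its own quota and compression
zfs create tank/users
zfs create -o quota=64G -o compression=lz4 tank/users/alice
zfs create -o quota=64G -o compression=off tank/users/bob   # e.g. mostly pre-compressed assets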

One of the standard use cases is for FreeNAS to host each user's home directory securely. The same features can be used to give each person an unshared workspace, plus shared group workspace(s).

Going by your first post, you seem to be under the misapprehension that not having a user-compartmentalised cache means you can't have user-compartmentalised workspaces? That's absolutely not the case. The users have no access to anything in the cache except their own files passing through it, so it's safe for all users. There's plenty of "what is ARC/L2ARC/SLOG" info here on the site if you want confirmation. Only the OS has access to the content in the caches.
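
If you want to see for yourself that the caches are pool-wide counters rather than anything user-visible, the ARC/L2ARC stats are exposed via sysctl on FreeBSD (double-check the exact OID names on your build):

sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
sysctl kstat.zfs.misc.arcstats.l2_hits kstat.zfs.misc.arcstats.l2_misses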
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Sorry my reply was so delayed. The problem is they need their individual active work areas locked away out of reach of each other, which is easy to do with Citrix, VMware, Windows Server VMs and NFS over GlusterFS on a Linux host, and it's a ridiculously common corporate solution for big shared network storage. Have the FreeBSD and ZFS maintainers not figured this out (or decided it's just not worthwhile)? That's honestly mind-blowing.
Where are you getting the idea that ZFS isn't capable of dividing up storage this way? Separate shares or even datasets can be carved out of the same pool and share that pool's L2ARC safely.
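
As a sketch (pool, dataset, user and device names are all placeholders):

# Datasets carved from one pool, locked down per user...
zfs create tank/work/alice
chown alice /mnt/tank/work/alice
chmod 700 /mnt/tank/work/alice

# ...while a single cache device serves the whole pool
zpool add tank cache ada2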
 