AFS (Andrew File System) on FreeNAS?

DylanJustice

Cadet
Joined
Mar 2, 2014
Messages
1
Have there been any success stories serving AFS from FreeNAS?

I've been searching, but getting posts by people who are clearly talking about AFP, not AFS.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
I don't believe AFS is implemented in FreeNAS, even though it appears to be implemented in FreeBSD 9.x.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
If someone does run it from a jail, are you still locked into ZFS for the main file system? I'm not sure why anyone would want to mix file systems like that unless they just needed to import data. But then again, we all have our quirks.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
You couldn't run AFS in a jail. The jail uses the file system of the underlying UFS or ZFS pool. If you want to run a different file system, you'd have to create a zvol on the pool, export it as an iSCSI device, and use that in the jail. Performance would probably be just horrible, though.
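For the curious, a minimal sketch of that approach on plain FreeBSD, assuming a pool named tank and hypothetical target names (the FreeNAS UI wraps the same ctld machinery):

Code:
# Create a zvol to back the iSCSI extent ("tank" and the size are assumptions)
zfs create -V 100G tank/afsvol

# /etc/ctl.conf -- export the zvol through the FreeBSD iSCSI target daemon
portal-group pg0 {
    discovery-auth-group no-authentication
    listen 0.0.0.0
}
target iqn.2014-03.com.example:afsvol {
    auth-group no-authentication
    portal-group pg0
    lun 0 {
        path /dev/zvol/tank/afsvol
    }
}

# Enable and start ctld
sysrc ctld_enable=YES
service ctld start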
 

aufalien

Patron
Joined
Jul 25, 2013
Messages
374
You couldn't run AFS in a jail. ... Performance would probably be just horrible, though.

NM, scratch that, I reread your post.
 

dlavigne

Guest
I'm not sure about the jail aspect, but AFS is a network file system, so the underlying local file system doesn't matter. AFS has been run over ZFS on Solaris, and ZFS datasets are well suited to AFS's mount points.
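As a minimal sketch of that pairing (the pool name tank is an assumption), each AFS server partition can be its own dataset mounted at the conventional /vicepX location:

Code:
zfs create -o mountpoint=/vicepa tank/vicepa
zfs create -o mountpoint=/vicepb tank/vicepb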
 

James Doyle

Dabbler
Joined
Dec 17, 2016
Messages
11
The core strengths of AFS include Kerberized RPCs between clients and fileservers (security), client-side caching, the ability to distribute fileserver instances securely in geographically and topologically useful ways, a location-independent view of the file namespace, and the ability to distribute read-only replicas. This is why, despite some of AFS's strange and unattractive quirks, it is still king at big government laboratories, universities, and global, multi-office financial firms!

The side effects of using it with ZFS and FreeNAS:
  • Requires a kernel module to support the Kerberos-enabled RPCs and the cache manager.
  • AFS server-side filesets may not be directly accessed or exported over CIFS, NFS, or AFP.
  • Any requirement to export AFS filespace to NFS, CIFS, or AFP needs a filesystem gateway, which is a performance hit and possibly an opportunity to lose control of data security. On the upside, it's possible to deploy any number of filesystem gateway instances to mitigate some of the performance risk.
  • No ability to use ZFS snapshots to support AFS volume replicas or volume backups. Many AFS volumes reside in a single ZFS zvol on the server, which rules out these features.
  • Despite this, ZFS would permit some AFS server disk-access acceleration (through striping, cache, and ZIL; see the sketch after this list) and strengthen the uptime of an AFS fileserver. On Linux, however, you'd already be using hardware RAID to gain the same characteristics.
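As a rough illustration of that last bullet, a striped pool of mirrors with separate log (ZIL) and cache (L2ARC) devices might be created like this (disk names are hypothetical):

Code:
zpool create tank mirror da0 da1 mirror da2 da3 log da4 cache da5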
All that said, you'd gain little advantage from using FreeNAS to host an AFS cell... If you are on AFS, all of your client machines run AFS clients and forsook CIFS, AFP, and NFS long ago. AFS users would benefit more from a small appliance VM image to run the AFS databases and servers; hardware RAID would give them the disk performance and uptime they need.
 

dich

Cadet
Joined
Aug 7, 2017
Messages
8
I wonder if anyone has actually succeeded in doing this, and if the performance was really THAT bad?
 

James Doyle

Dabbler
Joined
Dec 17, 2016
Messages
11
Define the constraints of "THAT" bad. AFS scales remarkably well even on low-end server hardware because of aggressive client-side caching and the native ability to replicate volumes across a footprint of fileservers and have clients select the closest server. It's typical to have dozens upon dozens of fileserver machines and just three database servers for a cell of 100,000+ clients. Performance is mainly experienced on the client, with its local disk cache. AFS on FreeNAS would probably perform exceedingly well; it's a matter of determining whether the overhead induced by bhyve VMs is a tolerable use of resources. Otherwise, you'd have AFS appliance servers running elsewhere, using FreeNAS over iSCSI on dedicated storage networks...

AFS file (and database) servers on FreeNAS are absolutely doable. Create a bhyve VM with Linux, install the OpenAFS database and fileserver RPMs, and proceed. The AFS /vicepX partitions for the fileserver will be backed by raw ZFS zvols mapped into the VM as block devices; Linux lays its filesystem down on each mapped zvol, and AFS manages the structure within it.
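A minimal sketch of that wiring, assuming a pool named tank and that the zvol appears inside the Linux guest as /dev/vdb (guest device names vary with the VM configuration):

Code:
# On FreeNAS: create a zvol to attach to the bhyve VM as a raw disk
zfs create -V 500G tank/vicepa

# Inside the Linux guest: format and mount it as an AFS fileserver partition
mkfs.ext4 /dev/vdb
mkdir /vicepa
echo '/dev/vdb /vicepa ext4 defaults 0 2' >> /etc/fstab
mount /vicepa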

AFS administration is a world and skill set unto itself. It would not be practical to integrate it into the FreeNAS UI, as the procedures for maintaining a cell depend very much on its geographical layout (where you place replica databases and fileserver machines is network- and site-specific). Further, AFS requires that you nurture and manage its Protection, Volume Location, and Backup databases.

An AFS fileserver partition would be mapped to a zvol. However, an AFS fileserver partition contains hundreds to possibly thousands of AFS volumes, and those AFS volumes, not the partition zvol, are the granularity of backup. To back up an AFS volume, it must be quiesced; i.e., you can only stably back up and restore a BK or RO volume, not a RW volume, unless that volume is offline. Using ZFS snapshots against an active AFS fileserver partition is asking for trouble. So forget about using ZFS snapshots on the /vicepX partitions.
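A minimal sketch of the quiesce step, using a hypothetical volume name:

Code:
# Create or refresh the BK clone -- a stable, point-in-time copy of the RW volume
vos backup home.jdoe

# Or create BK clones for every volume whose name matches a prefix
vos backupsys -prefix home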

You'd instead create a ZFS dataset to receive the AFS volume dump streams (i.e., the output of vos dump or the backup suite). That receiver dataset can be snapshotted and shipped off-box for storage, but not the /vicepX partitions. This is the only plausible value-add of FreeNAS for AFS: stable backing storage and storing/forwarding AFS volume dumps as part of the standard AFS backup process.
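A minimal sketch of that receiver arrangement, with hypothetical names throughout:

Code:
# A regular dataset (not a zvol) to receive the dump streams
zfs create -o mountpoint=/afsdumps tank/afsdumps

# Dump the quiesced BK volume into the receiver dataset
vos dump -id home.jdoe.backup -file /afsdumps/home.jdoe.dump

# Snapshot the receiver and ship it off-box
zfs snapshot tank/afsdumps@2017-08-07
zfs send tank/afsdumps@2017-08-07 | ssh vaulthost zfs receive backup/afsdumps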

In any large AFS cell, you need processes and automation to regularly push replicas where needed (vos addsite / vos release), automation to create the BK volumes, and automation to dump and export the BK volumes to a place where those dumps will be safely carted off to backup vaulting, etc. It would be a fool's errand to try to put a UI on this, as the needs and processes vary widely from user to user. For small cells, perhaps a UI with automation for a recommended backup process would help, but it would never suffice for a large, production cell.
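A minimal sketch of the replica half of that automation, with hypothetical server, partition, and volume names:

Code:
# Define a read-only replica site for a volume (one-time)
vos addsite -server fs2.example.com -partition /vicepa -id software

# Push the current RW contents out to all RO sites (run on a schedule)
vos release software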


-- Jim
 

dich

Cadet
Joined
Aug 7, 2017
Messages
8
Define the constraints of "THAT" bad. ... stuff ... it would never suffice for a large, production cell.

Hi Jim,

Thanks for the very helpful and detailed reply!
As you said, AFS often runs on old, underpowered hardware, as in our case. We also have fast FreeNAS and TrueNAS servers that we are reluctant to convert into AFS-only machines, hence the idea of adding another volume to the current cell, served from a FreeNAS VM/jail, to see if it does a better job. It's understandable that ZFS snapshotting won't work properly inside a zvol, but we have been backing up AFS via TSM anyway; we could keep that going and, if the new volume does a good job, finally consider phasing out the old hardware!
If this doesn't work, we may have to phase out AFS altogether...

Best!
Eduard
 

James Doyle

Dabbler
Joined
Dec 17, 2016
Messages
11
In your case, you have to weigh the performance overhead of a bhyve VM running the fileserver against a separate fileserver appliance machine that accesses the FreeNAS storage pool over iSCSI. In addition, you have to consider how much memory the AFS fileserver VM needs to hold state on behalf of its client connections (AFS callback promises) versus how much physical memory the ZFS pool needs to remain performant. It's possible that the bhyve VM could rob Peter to pay Paul, as they say. :)
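One knob for that balancing act is capping the ZFS ARC so the hypervisor side has guaranteed headroom; a minimal sketch (the 16 GiB figure is an arbitrary assumption):

Code:
# /boot/loader.conf -- cap the ARC at 16 GiB, leaving RAM for the bhyve guest
vfs.zfs.arc_max="17179869184"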

I think your best solution is a small, dedicated physical AFS fileserver machine with sufficient dedicated memory. The AFS fileserver would have a dual-port 10GbE (or even 1GbE) interface card. One port goes to a dedicated storage network shared only with the FreeNAS infrastructure; the other connects to the switch/router/network where all the AFS client connections ingress. Your storage traffic is isolated from your client traffic using subnets. You'd have done the same thing with a Fibre Channel SAN, but since this is FreeNAS, you'll probably run iSCSI over Ethernet like most people. This AFS appliance would need a small SSD boot disk with the OS, the OpenAFS server runtime binaries, and your keytabs. The boot disk can be imaged and set aside so that you have a disaster-recovery boot image. This is an inexpensive solution (compared to the sunk cost of the FreeNAS storage pool you've already invested in), and it would perform very well.

You could further optimize this and have dedicated fileserver machines that ONLY host RO volumes, if that is possible. Because RO volumes are replicas of the RW volume, you could use inexpensive, dedicated disk for these RO-only fileservers: disk failures under RO volumes in an AFS cell can be remediated very quickly with VLDB changes. Now you only need to focus the expensive resources on RW volume traffic.
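A minimal sketch of that traffic split on a FreeBSD-based appliance, with hypothetical interface names and subnets:

Code:
# /etc/rc.conf -- one port on the storage network, one facing the AFS clients
ifconfig_ix0="inet 10.10.10.5 netmask 255.255.255.0"    # iSCSI/storage subnet
ifconfig_ix1="inet 192.168.1.5 netmask 255.255.255.0"   # AFS client subnet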

Phasing out AFS would introduce other impacts. AFS is remarkably resilient to network partitions and failures if you set it up properly (multiple, network-diverse database servers; multiple fileservers; replicated RO volumes where they make sense). It would be hard to reproduce this kind of resiliency with NFS unless you invest in a seriously expensive vendor SAN.
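For reference, the database-server diversity mentioned here boils down to listing several (ideally network-diverse) servers in the cell's CellServDB; a minimal sketch with hypothetical hosts:

Code:
>example.com            # Example cell
10.0.1.10               # afsdb1.example.com
10.20.1.10              # afsdb2.example.com (different subnet/site)
172.16.1.10             # afsdb3.example.com (different subnet/site)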

-- Jim
 

The Hobbyist

Cadet
Joined
Jun 19, 2017
Messages
9
The core strengths of AFS include ... stuff ... disk performance and uptime they need.


I just logged in to write a thank-you for the extensive write-up and detailed information. This is like a master-course overview of how to visualize the AFS use-case domain, and something that would take years of experience to pull together if not for your post.

Thanks again.
 