Multiple-system, single share setup?

orsonb

Dabbler
Joined
Nov 13, 2012
Messages
14
I have 30 drives across 3 FreeNAS systems. I want to present them to my local LAN as a single share (NFS, Samba, AFP, etc.; I don't care which).
I'm currently using 2 of the systems to share separately without issues.
I am in the testing phase with the third system before setting it up like the first 2.
Financially, the next step is to replace all the 3TB drives with 6TB or larger drives. This could take a while.

Ideally I want to make all of them part of the same share.
I see 2 obvious configurations for this:
1) export each drive as an iSCSI device, with one system assembling them all into a single raidz3 pool and sharing it.
2) have each system export its own raidz3 pool as an iSCSI device, with one of the systems striping across them and sharing the result.
I am not experienced enough to know all the tradeoffs or issues with either of these configurations, or if there is a better configuration that I should be using.
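To make option 1 concrete, here is roughly what I picture doing from whichever box ends up as the front end. The portal addresses, target names, and device names below are placeholders; I haven't actually built any of this, which is partly why I'm asking.

# On each back-end box, every raw disk would be exported as an iSCSI target
# (Sharing -> Block (iSCSI) in the FreeNAS GUI).
# On the front end (assuming a FreeBSD 10-based build like 9.10, where the
# iscsictl initiator is available and iscsid is running), log in to each
# target and build one pool from the resulting da* devices:
iscsictl -A -p 192.168.1.11 -t iqn.2005-10.org.freenas.ctl:disk01
iscsictl -A -p 192.168.1.11 -t iqn.2005-10.org.freenas.ctl:disk02
# ...repeat for the remaining disks on both back-end boxes...
zpool create bigpool raidz3 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19 da20
# Option 2 would be the same idea, except each box exports one zvol backed by
# its local raidz3 pool and the front end stripes across the three iSCSI
# devices instead of building raidz3 over individual disks.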

As part of this process I would also like to upgrade the older systems to the latest build.

Can someone please explain the pros and cons of each?

9.3.1-STABLE, i3-3220T CPU @ 2.80GHz w/ 16GB, 2 SATA expansion cards, 11x3TB RAIDZ3
9.3-STABLE, AMD E-350, ASUS E35M1-M Pro w/ 16GB, 2 SATA expansion cards, 11x4TB RAIDZ3
9.10-STABLE, AMD E1-2100, ECS KBN-I/2100 w/ 16GB, 2 SATA expansion cards, 8x6TB RAIDZ3
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
I don't know that I can help with your actual question, but I am academically curious as to how well your FreeNAS 9.3 runs on the AMD E-350?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I have 30 drives across 3 FreeNAS systems. I want to present them to my local LAN as a single share
You can't do that. The protocols are simply not designed for it.

You can cheat by having a single front-end system with storage somehow divided over multiple back-ends, with iSCSI. Of course, at that point, it's easier and cheaper to have a single, beefier server.
 

orsonb

Dabbler
Joined
Nov 13, 2012
Messages
14
I don't know that I can help with your actual question, but I am academically curious as to how well your FreeNAS 9.3 runs on the AMD E-350?
I haven't had any problems with the processor/motherboard. The system rarely gets above 25% on the processor, even when running a scrub while serving data. It easily saturates the Gigabit Ethernet port.
 

orsonb

Dabbler
Joined
Nov 13, 2012
Messages
14
You can't do that. The protocols are simply not designed for it.

You can cheat by having a single front-end system with storage somehow divided over multiple back-ends, with iSCSI. Of course, at that point, it's easier and cheaper to have a single, beefier server.
I'm not asking to use the final networking protocol(s) to create the pool. I'm asking for the right way to share the drives between the systems so that one of them can create a pool that is then shared through the networking protocol I actually want.
And aren't you contradicting yourself by saying it is not possible, but you can cheat by using iSCSI. Which is what I was asking about using?
How is spending more money easier and cheaper than trying to do what I have asked with the hardware I already have? If it can't be done, I keep doing what I'm already doing and spend nothing. If it can, I get an easier-to-use share and still spend nothing. Either way, I'm not spending money.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
but you can cheat by using iSCSI
...as the storage back-end for a single front-end that deals with the client protocol. And it would be slow and inefficient, since iSCSI is in the mix. And it would still require some trickery from the admin to manage.

It's not a good solution, plain and simple. Especially with RAIDZ for the actual storage.

I'm asking for the right way to share the drives between the systems so that one of them can create a pool that is then shared through the networking protocol I actually want.
There is none. That is the domain of extremely large distributed systems - the stuff generally called "the cloud" these days.

Which is what I was asking about using?
I'll be honest: I stopped reading before I got to that part, in good part because this is not something that is viable.

I see 2 obvious configurations for this:
1) export each drive as an iSCSI device, with one system assembling them all into a single raidz3 pool and sharing it.
2) have each system export its own raidz3 pool as an iSCSI device, with one of the systems striping across them and sharing the result.
Holy crap, you want to run ZFS on iSCSI "disks"? That is a genuine Very Bad Idea™. If you want to hack something like this together, you'll want something simpler and faster, like GEOM or some such thing.
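For the sake of argument, the "simpler and faster" hack would look something like the sketch below, with entirely hypothetical device names. This is what I mean, not what I recommend:

# Load the stripe module, stripe the iSCSI-backed disks with gstripe instead
# of layering ZFS on them, then put a plain filesystem on top:
kldload -n geom_stripe
gstripe label -v remote_stripe /dev/da10 /dev/da11 /dev/da12
newfs -U /dev/stripe/remote_stripe
mount /dev/stripe/remote_stripe /mnt/remote_stripe

You give up ZFS's end-to-end checksumming and self-healing by doing that, which is yet another reason the whole approach isn't worth pursuing.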


The overarching problem here is that you're trying to hammer ZFS and FreeNAS into a configuration which they were simply not designed for. They may seem to fit at first, but the hammering process will almost certainly break things and the fit won't be up to snuff.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
trying to hammer ZFS and FreeNAS
Eric, please try not to use the word "hammer" when we are talking about filesystems. Just for general principles.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Eric, please try not to use the word "hammer" when we are talking about filesystems. Just for general principles.
Squeeze ZFS and FreeNAS?
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
If the 3 systems are in the same area, I'd look into rigging some external SAS cabling and just using systems #2 and 3 as JBODs.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
If the 3 systems are in the same area, I'd look into rigging some external SAS cabling and just using systems #2 and 3 as JBODs.
That's doable, but requires SAS expanders in the "remote" boxes, in practice, since SATA signaling is limited to 1m. A passive solution won't work.
 

Turgin

Dabbler
Joined
Feb 20, 2016
Messages
43
It may not be exactly what you are looking for but you could use Windows Distributed File System (DFS) to achieve a similar result. You could probably even use a VirtualBox VM to do it.

Not sure how reliable it would be, but in theory, it should work.
 

orsonb

Dabbler
Joined
Nov 13, 2012
Messages
14
It may not be exactly what you are looking for but you could use Windows Distributed File System (DFS) to achieve a similar result. You could probably even use a VirtualBox VM to do it.

Not sure how reliable it would be, but in theory, it should work.
Thank you for the suggestion, but I have enough trouble with Windows at work that I don't need to add it at home.
 

orsonb

Dabbler
Joined
Nov 13, 2012
Messages
14
If the 3 systems are in the same area, I'd look into rigging some external SAS cabling and just using systems #2 and 3 as JBODs.
Thank you for the suggestion. I tried that when I was setting up the second system, and had a horrible time finding reliable hardware. To the point that I just gave up on SAS altogether.
 

NightNetworks

Explorer
Joined
Sep 6, 2015
Messages
61
Two things come to mind, but I am not sure either would work or how good the performance would be.

Option A:
Windows Server Distributed File System. I think this would be the most likely option to work. However, there is a lot of setup involved, and it can be a pretty big pain in the butt to troubleshoot down the road, especially if I read right that this is a home setup.

Option B:
You might be able to use symbolic links, but I am not sure...
 
Last edited:

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553
Eric, please try not to use the word "hammer" when we are talking about filesystems. Just for general principles.
Distributed File System? It's HAMMER time!


And it will end as well as MC Hammer. Sure, it might spectacularly explode, but maybe in the end you'll come to God and it will all work out.
 
Last edited:

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553
Two things come to mind, but I am not sure either would work or how good the performance would be.

Option A:
Windows Server Distributed File System. I think this would be the most likely option to work. However, there is a lot of setup involved, and it can be a pretty big pain in the butt to troubleshoot down the road, especially if I read right that this is a home setup.

As far as I can see, Samba can do that with some caveats: https://wiki.samba.org/index.php/Distributed_File_System_(DFS)

But of course this won't work for other protocols.
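Roughly speaking, you would mark one share on one box as the DFS root and point msdfs links at the shares on the other boxes. A minimal sketch with made-up host and share names:

# smb.conf needs "host msdfs = yes" in [global] and "msdfs root = yes" on the
# root share (say [dfsroot] with path = /mnt/tank/dfsroot). The links are
# ordinary symlinks whose target uses the msdfs: prefix:
ln -s 'msdfs:freenas2\media' /mnt/tank/dfsroot/media2
ln -s 'msdfs:freenas3\media' /mnt/tank/dfsroot/media3
# A Windows client opening \\freenas1\dfsroot\media2 is transparently
# referred to \\freenas2\media.

On FreeNAS you would probably have to shoehorn those settings in through the auxiliary parameters fields, which adds to the caveats.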

Option B:
You might be able to use symbolic links, but I am not sure...
This won't work.

Overall, this is an odd requirement. I can perhaps understand trying to group all the Samba shares together, but the others? Seems like you're just creating extra work for yourself, in addition to performance overhead and possible stability issues.
 

NightNetworks

Explorer
Joined
Sep 6, 2015
Messages
61
I agree with Anodos... Is there a particular reason you are trying to do this? Knowing that might help with coming up with a better idea.
 

maglin

Patron
Joined
Jun 20, 2015
Messages
299
Purchase 2 used E16 chassis. They are SAS2 and hold 25 drives each. Turn the second one into a JBOD box and run external SAS to it. You may have had a bad experience with SAS, but I have a feeling it was more to do with incompatible hardware. SAS is the way to go for your scenario.

Just thinking of attempting to run a pool across 3 boxes connected over 1GbE gives me chills. Oh, the week-long scrubs and all that power usage. If you can make do with 25 drives, then a single enclosure will definitely cut costs by more than half.

I don't really see any other way that is cost-effective. Well, keeping 3 pools and 3 shares is the most cost-effective option, as you already have the hardware.


 

orsonb

Dabbler
Joined
Nov 13, 2012
Messages
14
While I appreciate the sentiment behind everyone telling me to spend more money, I'm not going to do that.

What I am trying to do is get the existing hardware to handle the arduous task of deciding which existing server should hold which piece of data. Why should I care, or even know, which server a particular piece of data is on?
I thought of 2 ways of doing this using what is available in the FreeNAS UI. I expected someone to point out that the first option was overly constrained by the network. What I didn't expect was a blatant disregard of the second option.
After reading some of the responses here, and when I finally had time again, I read the FreeNAS documentation on iSCSI. iSCSI is exactly what I thought it was and ZFS includes specific features for working with it, despite claims to the contrary here.

As far as the Samba option mentioned by Anodos, those are some pretty big caveats. I will have to think hard about whether that is usable.

Can anyone give me a better alternative that does not require spending money?
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
You seem unhappy with the advice you have received. That is regrettable.

I can assure you, Anodos, and Ericloewe, and depasseg, are subject matter experts; it is quite unlikely that their advice is not functionally the best advice. When people are unhappy with our advice, they might either engage iXsystems itself and investigate commercial options, or they might engage another forum (e.g., the guys on Reddit in the "freenas"-related fora) where advice is more "wild west" and comes, in many cases, from people who have not had...success...integrating with the FreeNAS forum community. :) That being said, you might get advice for your situation there that you are happier with.

Most of the advice you will get here will be of the extremely conservative, money-spending type. We are notoriously bad at giving advice to people who are trying to make sub-ideal hardware work, who have highly constrained financial philosophies, or whose NAS has Rube Goldberg-esque properties.

What you are trying to set up is quite a bit off the beaten path for a FreeNAS appliance, that's all.

Again, it is regrettable that you are displeased with the suggestions and advice you've gotten. Good luck. Perhaps someone will chime in.
 