creating a vSAN environment

thegreek1

Dabbler
Joined
Sep 20, 2020
Messages
21
Hi,

Now that I have the hardware working on my p700 workstation, I was wondering how to use NFS/iSCSI to mount a remote file system to increase the storage capacity. My p700 workstation is maxed out in its 4 available HDD slots, and I believe I can add 1 more internal drive (2 if I remove the CDROM). How would you guys go about mounting a remote NFS/iSCSI virtual drive to create more storage?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
How would you guys go about mounting a remote NFS/iSCSI virtual drive to create more storage?

On the TrueNAS host? You don't. It's not designed for this. ZFS isn't designed for this. ZFS will generate crushing amounts of I/O out towards pool devices unless you have something like 10GbE or 25GbE direct to the shelf, and even then, it is not supported. The only attachment technologies supported by TrueNAS are SAS and SATA.

You want more storage, go out and get yourself a nice SAS disk shelf. You can add 12, 24, 36, 60, or 90 hard drives that way without much trouble.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
There are smaller, cheaper options, like this one. I have no knowledge of its reliability or anything else; just pointing out an option for your own research.
Don't use their RocketRAID hardware RAID cards; those aren't supported by TrueNAS (SCALE or CORE). Best to stick with LSI-based cards.

There are other small options, but most would still need an LSI-type SAS card with 4 or 8 lanes and an external connector.
 

thegreek1

Dabbler
Joined
Sep 20, 2020
Messages
21
On the TrueNAS host? You don't. It's not designed for this. ZFS isn't designed for this. ZFS will generate crushing amounts of I/O out towards pool devices unless you have something like 10GbE or 25GbE direct to the shelf, and even then, it is not supported. The only attachment technologies supported by TrueNAS are SAS and SATA.

You want more storage, go out and get yourself a nice SAS disk shelf. You can add 12, 24, 36, 60, or 90 hard drives that way without much trouble.
The p700 has 96GB of RAM but only allows for a max of 4 HDDs. My other main box is a DL380 with 12x LFF bays and 256GB of RAM, which I use for various things including testing various apps. I was hoping to dedicate 3-4 of its HDDs as additional storage / vault. The server comes with its own 10Gb card, and all the drives in it are SAS. Creating the fiber channel link between the two would be easy, and I have a switch that can support those speeds.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Creating the fiber channel link between the two would be easy, and I have a switch that can support those speeds.

TrueNAS also does not support fiber channel, at least not officially, and it is going to go very badly to try to use it for backend storage, because, just as with iSCSI, fiber channel generally doesn't have the needed speed. You really need something like 3Gbps per drive times the number of drives, so if you have a 12x LFF FC shelf then you need at least 32Gbps of FC HBA connectivity to support those drives.
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
TrueNAS also does not support fiber channel, at least not officially, and it is going to go very badly to try to use it for backend storage, because, just as with iSCSI, fiber channel generally doesn't have the needed speed. You really need something like 3Gbps per drive times the number of drives, so if you have a 12x LFF FC shelf then you need at least 32Gbps of FC HBA connectivity to support those drives.
While I agree with you in principle, I’m sure you both know and have had to support this… there was a time when a lot of commercially available SANs used that methodology to connect their disk shelves. So it’s not a crazy question to ask.

But alas that time is no more and your logic is sound. I just wanted to trigger the grinch because I’m sure there’s a good war story there.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I’m sure you both know and have had to support this…

Not with ZFS. I wouldn't try.

If you're really jonesing for some good old days of storage, I do have some really nice Mylex DAC960SX standalone SCSI array controllers from the 1990's (thousands of dollars each) and I can probably scrape together a variety of SCSI drives and we could make some nice high performance old style storage.

For the most part, I stayed away from fiber channel due to its relatively low performance. In the Usenet world, we mostly needed high bandwidth I/O, and for quite some years that was just multibus SCSI. Around the early 2000's, technology changed, pressures for less expensive storage increased, and so my shop was one of the first that produced a FreeBSD-based 24-LFF-in-4U storage server. We used this to provide scalable access to Usenet articles by message-ID, which suddenly meant that your spool could be 24 drives, or 264 drives (full rack), or even 792 drives (three full racks), and eventually went even beyond that. This was the beginning of the Usenet "retention wars" that saw massive investment in retention, eventually to a year or more at some providers.

Now I do have a point to make here and it's that this was all relatively high performance I/O because each backend server was directly connected to its disks, and the frontends connected to the backend via ethernet, essentially making for a scalable fabric where any frontend could easily get any article from any backend.

This absolutely effin' killed the guy in Atlanta... oh what was his name... Dwight at WebUsenet. He was running a big expensive SAN solution and had hit the performance wall. Some other folks had other cobbled together solutions, but no one had really done the work to do a distributed hash based system until I did it for Diablo, and this launched off the retention wars in the early-mid 2000's.

I much preferred to custom design entire systems based on inexpensive technology. FC was always expensive and also generally bad technology.
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
If you're really jonesing for some good old days of storage, I do have some really nice Mylex DAC960SX standalone SCSI array controllers from the 1990's (thousands of dollars each) and I can probably scrape together a variety of SCSI drives and we could make some nice high performance old style storage.
Ha that's cool! Would love to see it some day!
Now I do have a point to make here and it's that this was all relatively high performance I/O because each backend server was directly connected to its disks, and the frontends connected to the backend via ethernet, essentially making for a scalable fabric where any frontend could easily get any article from any backend.
That's actually fascinating. What was the latency overhead between local and remote storage back in those days? Do you think LeftHand and EqualLogic and some of those other earlier clustering "scale-out" solutions were inspired by that deployment model?

This absolutely effin' killed the guy in Atlanta... oh what was his name... Dwight at WebUsenet. He was running a big expensive SAN solution and had hit the performance wall. Some other folks had other cobbled together solutions, but no one had really done the work to do a distributed hash based system until I did it for Diablo, and this launched off the retention wars in the early-mid 2000's.
So, more or less, the wall being high concurrency that led to high contention, lots of random I/O and platter heads flipping every which way? All the while, the bottleneck was the FC connection to the shelves? :P
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
That's actually fascinating. What was the latency overhead between local and remote storage back in those days? Do you think LeftHand and EqualLogic and some of those other earlier clustering "scale-out" solutions were inspired by that deployment model?

I did give one presentation on the general design, and it is clear that many of the European Usenet providers implemented this or things similar to this. The bit that's interesting to me is that this was really an early implementation of object based storage, before anyone had really identified that as a thing.

So, more or less, the wall being high concurrency that led to high contention, lots of random I/O and platter heads flipping every which way?

Well, there were numerous things killing performance. At the time, it was common for providers to use some external DAS solution; Newshosting was based on external Infortrend DAS units (3U, 16 drives) with a RAID controller. Now on one hand that's great and all, but with a 64KB stripe size, and the average Usenet article being anywhere from 500KB to 1MB in size, reading a single article meant that a bunch of drives had to seek for that retrieval, and this sucked. The SAN-based solutions used by other providers were also typically RAID5-based, usually with a better stripe size like 256KB, so this wasn't killing them quite as badly.

All the while, the bottleneck was the FC connection to the shelves? :P

That's unclear. As you know, bandwidth from a shelf decreases dramatically as the percentage of random I/O (meaning "requiring seeks" for everyone else in the audience here) increases. I'm sure some SAN vendors got rich from selling their crap to Usenet providers and I'm sure some providers may even have desperately paid for high bandwidth FC.

So let me outline this for you a bit. I rewrote Diablo (a provider-class Usenet transit and service package that we maintain here), which takes an incoming NNTP article stream and stores the articles on its spool drives. A spool server would have 24 independent UFS/FFS filesystems storing these, which meant that an article retrieval would only ever involve a single drive, so you could have up to 24 concurrent retrievals going on without blocking (of course, statistics mean it never quite worked out that awesomely, but half that was common).

You would then have 11 of these systems in a rack, stacked. You would then need to know where an article was going to be found, so I wrote a hashing algorithm and modified Diablo's spool access code: take a Message-ID, run it through MD5, and then take portions of the resulting MD5. Hash(Message-ID) would return an integer, in this example an int between 0 and 10, so that a front end would be able to predict that Message-ID <123456@freenas.org> would be on spool host 6. So you just ask spool host 6 for the article, and spool host 6 retrieves it from one of its 24 FFS filesystems, probably using only a single seek or maybe two.
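
A minimal sketch of that placement idea (not Diablo's actual code; the host count, digest slice, and function name here are illustrative):

# Minimal sketch of hash-based article placement, assuming MD5 of the
# Message-ID and 11 spool hosts; digest slice and names are illustrative.
import hashlib

NUM_SPOOL_HOSTS = 11  # e.g. 11 spool servers stacked in a rack

def spool_host_for(message_id: str, num_hosts: int = NUM_SPOOL_HOSTS) -> int:
    digest = hashlib.md5(message_id.encode()).digest()
    # Take a portion of the digest (here the first 4 bytes) as an integer
    # and reduce it to a host index in the range 0..num_hosts-1.
    return int.from_bytes(digest[:4], "big") % num_hosts

# Any frontend can predict where an article lives without asking around:
print(spool_host_for("<123456@freenas.org>"))  # prints some host index 0..10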

Now if you're paying attention, you'll notice that without RAID of some sort, a disk failure would render parts of the overall spool irretrievable. We had several ways of dealing with that. One was that there's so much spool activity that you really needed two or three sets of spools for a large provider like Newshosting anyway, so you just run multiple spools (a redundant array of inexpensive servers!), and you also use a DIFFERENT portion of the MD5 bitfield for the hash on the second spool, so that a failure of a disk or a spool server doesn't result in a hot spot developing on the second spool. That way, <123456@freenas.org> hashes to spool host 6 on the first spool set, but to spool host 10 on the second spool set, and to spool host 3 on the third.
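
A sketch of that multi-spool-set variant under the same assumptions: each spool set hashes a different slice of the MD5 digest, so a dead disk or server in one set doesn't map to the same hot spot in the others (the offsets below are made up for illustration):

# Sketch: each redundant spool set uses a DIFFERENT slice of the MD5 digest,
# so a failure in one set doesn't create a hot spot in another.
import hashlib

SET_DIGEST_OFFSETS = [0, 4, 8]  # byte offset into the digest per spool set (illustrative)

def spool_host_for_set(message_id: str, spool_set: int, num_hosts: int = 11) -> int:
    digest = hashlib.md5(message_id.encode()).digest()
    off = SET_DIGEST_OFFSETS[spool_set]
    return int.from_bytes(digest[off:off + 4], "big") % num_hosts

mid = "<123456@freenas.org>"
for s in range(len(SET_DIGEST_OFFSETS)):
    # The same article lands on (usually) different hosts in each spool set.
    print(f"spool set {s}: host {spool_host_for_set(mid, s)}")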

Eliminating seeks by knowing where you are likely to find stuff was one of the key technical elements that lit off the Usenet retention wars. Another was the move to inexpensive SATA mass storage, and then just an overall design that allowed for massive scalability. It's funny, because I sometimes run across "discussion" threads on web forums or Reddit where people talk about what they think happened, and much of it is just people talking out their arse about things they know nothing about.

The Retention Wars were great. For some years, providers watched each other like hawks and would try to out-retention each other. Because it was assumed to be costly to add retention, if you had 1100 days of retention and your competitor expanded to 1400, the competitor sort of had an expectation that it would take you several hundred days to fill out that 300 days of additional retention if you added it. But one of my little NNTP extensions was MAXAGE, which would "hide" articles older than MAXAGE, and one of my clients used this to competitor-deflating advantage by simply having all that storage ready to go and matching them the day after their retention increase announcement, just by twiddling MAXAGE 300 days higher. This caused much consternation on some of the discussion forums as to how they had magically increased retention overnight, since everyone THOUGHT it was well understood how expanding retention worked.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
My coworker Wolfgang Zenker designed an email (POP3) storage system with a very similar architecture for German Telekom, then named T-Online.
 