25 gig NIC recommendations?

jamiejunk

Contributor
Joined
Jan 13, 2013
Messages
134
Does anyone have any recommendations or experience with 25 gig NICs?

I've tried searching the forums and didn't find anything. There's also nothing faster than 10 gig listed in the hardware guide.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680

10G and 40G are closely related because of the "Q" ("quad") in QSFP: 40G is just four 10G lanes. So cards like the Chelsio T580 are expected to work because the T520 is supported; it's the same driver.

That's not directly helpful to your question, but bear with me for a moment.

What's happened in the networking world is that there's been a lot of failure to make the jump to the next order of magnitude. We went from 10M to 100M to 1G to 10G over about a decade back in the '90s (ish), but then sufficiency set in: copper topped out at 1G for nearly two decades, even with 10G being sort of available this last decade, and manufacturers are now trying to push 2.5G/5G because they came up with a "compelling" argument for it. On the data center side of things, a similar thing happened with 10G/40G (the 40 being the "quad" 4x10G), and there wasn't a big push towards anything faster. 10/40 was king for about a decade.

iXsystems has relied heavily on Chelsio in the past. I don't know what they're currently shipping, but I would note that there is a Chelsio T6225-CR card which uses the same driver as the T520/T580. It also comes in a 100G variant, which could come in handy if you needed multiple 25G ports.

The baseline has finally started to move again with the evolution from PCIe 2 to 3 and 4. Intel's XXV710-based cards were released about five years ago, but we didn't see a "quad-ification" of that product line, probably as a practical matter of PCIe performance. I would *guess* that the XXV710 is the next best path forward if you just want basic 25GbE.

However, since the cheap used gear hasn't made it to 25G yet, I don't have firsthand info on compatibility here. This is all sort of theoretical.
 

jamiejunk

Contributor
Joined
Jan 13, 2013
Messages
134
Thanks jgreco. We're upgrading our backbone switches and I'm in a time crunch to get things ordered. E-Rate will fund 60% of the cost of our data center switches, and I need to have what we're getting nailed down by Monday.

I didn't have much time to prepare because I didn't know until a few days ago that we could afford the 40% we'd be on the hook for. So we have the funds to replace the ToR switch in our server rack at a HUGE cost savings. We've been using Juniper stuff forever and really like it. From what I've been reading online, people were saying 25 gig to the servers and 40 or 100 gig to interconnect switches.

Seemed to make sense. It's not that we NEED 25 gig yet, but I'm still looking to build that all-NVMe TrueNAS box for VM storage, and I'm hoping it will be able to more than fill the 10 gig connections if needed. So 25 gig seemed logical.
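Rough back-of-the-envelope I'm going by (the drive count and the ~3 GB/s per drive below are just assumed round numbers, not benchmarks):

```python
# Toy sanity check: can an all-NVMe pool outrun a 10G link?
# All of these figures are assumptions for illustration, not measurements.

GBITS_PER_GBYTE = 8                      # 1 GB/s of disk throughput ~= 8 Gb/s on the wire

nvme_drives = 8                          # assumed pool size
per_drive_gbps = 3.0 * GBITS_PER_GBYTE   # assume ~3 GB/s sequential per drive

pool_gbps = nvme_drives * per_drive_gbps # ideal aggregate, ignoring ZFS/protocol overhead

for link_gbps in (10, 25, 50):
    verdict = "saturated" if pool_gbps >= link_gbps else "headroom left"
    print(f"{link_gbps:>3} Gbps link: pool could push ~{pool_gbps:.0f} Gbps -> {verdict}")
```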
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Fun, fun.

It isn't terrible to plan for the future. This isn't like the '90s, where we went through the awful 10M-100M-1G-10G cycle in a scant dozen years. Modern network planning has reached a point where 1G for endpoints has been sufficient for years, and while vendors are anxious to upsell, and it's nice to have 2.5G/5G to endpoints, average utilization doesn't seem to be growing that fast. The number of potential endpoints, with the proliferation of Internet-connected devices, is the biggest network planning challenge I'm guessing you're facing...?

Communication between servers is a different thing, though. You can blow through capacity very quickly depending on what you are doing. If you are deploying 25G to servers, be very careful about inter-rack decisions; 40G isn't even 2x25G. If your hypervisors and storage are all in the same rack, perhaps that's fine. But if you're doing storage in one rack and hypervisors in another, 100G or even points north start lookin' mighty fine.
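To put numbers on that (the host count here is invented purely to show the ratio):

```python
# Toy oversubscription math for an inter-rack link.
# Host counts and speeds below are made-up examples, not a recommendation.

hypervisors = 4        # assumed hosts in the compute rack
nic_gbps = 25          # 25G toward storage per host
uplink_gbps = 40       # single 40G link between racks

offered_gbps = hypervisors * nic_gbps
ratio = offered_gbps / uplink_gbps

print(f"Offered load: {offered_gbps} Gbps over a {uplink_gbps} Gbps uplink "
      f"-> {ratio:.1f}:1 oversubscription")
# Even two hosts (2 x 25G = 50G) already exceed a 40G inter-rack link;
# a 100G uplink keeps the ratio sane.
```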

The advent of 100G server cards in the last few years shows just how far things have evolved. It's important to bear in mind that these are all based on 25G/100G (QSFP28), so I'd definitely suggest avoiding anything limited to 10G/40G.

Don't forget that getting a dual-port 25G card effectively gives you 50G of LACP capacity, and stuff like that. :smile:
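One caveat on that: LACP balances per flow, so a pile of flows can add up to ~50G across the two members, but any single flow still tops out at one 25G link. A toy illustration (the hash below is just Python's hash(), not any switch's actual algorithm):

```python
# Toy model of how a dual-port 25G LAGG distributes traffic.
# Real NICs/switches hash on L2/L3/L4 header fields; Python's hash() here
# is only a stand-in to show that each flow lands on exactly one member.

LINK_GBPS = 25
members = [0.0, 0.0]                 # offered load per member link, in Gbps

flows = [("10.0.0.1", "10.0.0.2", 5000 + i) for i in range(20)]  # made-up flows
per_flow_gbps = 5.0                                              # assumed demand each

for flow in flows:
    idx = hash(flow) % len(members)  # a flow always maps to the same single member
    members[idx] += per_flow_gbps

for i, offered in enumerate(members):
    print(f"member {i}: {offered:.0f} Gbps offered, "
          f"{min(offered, LINK_GBPS):.0f} Gbps deliverable (capped at {LINK_GBPS})")

print(f"aggregate ceiling: {len(members) * LINK_GBPS} Gbps; "
      f"single-flow ceiling: {LINK_GBPS} Gbps")
```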
 