10GbE NIC for FreeNAS: Intel XXV710-DA2 or Chelsio T520-SO-CR

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
I'm a novice trying to remedy potential bottlenecks in a FreeNAS system, and it has already taken far too much time.
Previously, I bought whichever 10Gb SFP+ cards I thought were compatible, but I now realize there's more to it.
One suggestion was even to check whether my 'CPU was adequate' ... which seemed bizarre at first.
I now appreciate the CPU suggestion... which is why LSO & LRO (if affordable) seem wise: they offload work from the CPU and reduce variables...

Given my situation ... which is the 'superior' choice and better-value NIC:

• Runs cooler under the same workload (and, ostensibly, uses fewer watts).
I believe both have LSO and LRO, though Intel uses different nomenclature on their product info
(Large Receive Offload & Large Send Offload; Intel generally lists LSO as 'TCP Segmentation Offload') ...
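From what I've read, FreeBSD's ifconfig will show which offloads a card's driver actually enabled, and you can toggle them while testing. A minimal sketch; 'cxl0' is the Chelsio T5 driver's interface name as I understand it, and is only an example:

Code:
# Show the offload options the driver negotiated (TSO here = Intel's LSO)
ifconfig cxl0 | grep options
# Temporarily disable LRO and TSO to rule them out while benchmarking
ifconfig cxl0 -lro -tso
# Re-enable them afterwards
ifconfig cxl0 lro tso4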

If both NICs were the same price, which would you purchase..?
Is there another NIC that you'd recommend over either of the two..?

I intend to use DAC cables ...
The Intel model I was looking at is an HPE OEM card... with that in mind,

Would either require special considerations for the 'transceiver' ...?
(Do DACs refer to the 'terminators' as transceivers, also..? )

If either are more flexible as far as transceiver compatibility, that could be a difference that saves me $$ ... overall.

Thanks
 
Elliot Dierksen

Joined
Dec 29, 2014
Messages
1,135
Following the recommendations of @jgreco and others, I would warn you away from DAC cables. If both sides of the link aren't from the same manufacturer, a DAC cable may not work. I also find that fiber optic cables are much easier to route in racks and for cable management. As far as the NICs go, I can say that the T520s have worked great for me. I know others have had good luck with the Intel cards.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
@Elliot Dierksen makes a good point. But if you're interested in reducing power consumption, DAC cables are the way to go.

You can always order a DAC from the FibreStore -- email them and tell them what you want; they will make it:

I ordered a DAC from them several years ago, specifying a Dell PowerConnect 5524P switch on one end and an Intel X520-DA2 NIC on the other. They fixed me up.

Some switches are more fussy than others about DAC cables, and transceivers, too, if you go the fiber route. My Brocade/Ruckus switches don't care -- they will accept any transceiver or DAC cable. Cisco, HP, etc., may not be so forgiving.
 
Elliot Dierksen

Joined
Dec 29, 2014
Messages
1,135
My Brocade/Ruckus switches don't care -- they will accept any transceiver or DAC cable. Cisco, HP, etc., may not be so forgiving
That is certainly my experience as well. In Cisco-land, you can get around that with the undocumented command
Code:
service unsupported-transceiver
I never found anything comparable to that in HP-land, including the H3C models that are re-branded as HP.
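For anyone who hasn't used it, the sequence on a Catalyst goes roughly like this (from memory, so treat it as a sketch; the errdisable line is an extra step I believe some platforms need, and may not apply to yours):

Code:
switch# configure terminal
switch(config)# service unsupported-transceiver
switch(config)# no errdisable detect cause gbic-invalid
switch(config)# end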
 

blanchet

Guru
Joined
Apr 17, 2018
Messages
516
I have had these 3 types of NICs for a couple of years in different FreeNAS servers:
  • Chelsio Communications Inc T520-SO Unified Wire Ethernet Controller
  • Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)
  • Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01) (Intel X520)
With the latest version of FreeNAS, all these NICs run very well with no visible differences, because in my case the bottleneck is always the spinning disks.
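If you want to confirm the network itself isn't the limit, test it with no disks involved (a sketch, assuming iperf3 is installed on both ends; the address is a placeholder):

Code:
# On the FreeNAS server
iperf3 -s
# On the client, run a 30-second test against the server's address
iperf3 -c 192.168.1.10 -t 30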

Other arguments to consider:
 

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
First things first ... what's a good price to pay for Chelsio cards... or at least a reasonable one?
 

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
Wonderful info! Thank you all very much.

Re: DAC compatibility ... I'm using a consumer brand
D-Link DXS-1210-12SC 12-Port 10G SFP+ ... with 2x 1GbE RJ-45 ports ...

Is any aspect of compatibility a function of the cable's specs or manufacturer..?
Is it the NIC that requires certain components for compatibility..?
Does the switch influence or impose aspects of compatibility..?

Can the connection between a switch and 2 NICs be affected by the transceiver or cable and the switch..? As in ...
Can NIC-2 be affected by the transceiver used by NIC-1 if there's a switch between them..?

What I'm asking is ... what all 'cares' about the transceiver used..? The NIC..? Or the switch..? Or both..?
What if you went NIC-to-NIC, bypassing the switch? Does the transceiver matter then..?

And ... how would I know if a cable or transceiver were rejected..?
Would the port not light up..?
Or do manufacturers make diagnosing this difficult with ambiguous or poor performance..?


I'm worried that the way I treat equipment (I routinely need to move things to diagnose or swap things out) would
make ultra-sensitive components (like fiber cables) a very bad choice for me.

I'd wind up violating the minimum bend radius or pulling too hard on a cable if something didn't reach.
 

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
That is certainly my experience as well. In Cisco-land, you can get around that with the undocumented command
Code:
service unsupported-transceiver
I never found anything comparable to that in HP-land, including the H3C models that are re-branded as HP.


AH !!! I see!!!! So it's the SWITCH that imposes the transceiver compatibility restrictions...?? NOT the NIC..?

As mentioned, I have a
D-Link DXS-1210-12SC 12-Port 10G SFP+ ... with 2x 1GbE RJ-45 ports ...

It's worked with every card (unless that's why I can't get 10Gb working on my FreeNAS).
I think the DAC cables I used were Cisco ... but I couldn't find a label on them.
There are stickers, but I don't think decoding them would reveal a model or anything.
...with Chelsio, Myricom and ATTO hardware.

IF it's based on the switch..? I'm assuming consumer gear makes it way less of an issue.

The only reason I would sell this switch and replace it is if IT could be why I'm getting such crappy throughput.

I'm assuming you guys are all networking experts ... (I'm not even a beginner) ... how big of a problem is it if I'm using a Layer 3 switch ... for 10GbE and my Airport Routers for routing (Extreme and an Express) ... but using only one subnet despite connecting both to the FreeNAS server..?


I do NOT KNOW how ... And even if I managed to do so, it'd still be an L3 switch which can't issue IPs anyway, right..?


Thanks again for everyone's help. Truly.

The WORST kind of tech problem ... is when it "kinda works" ... because you can't just replace parts to find a solution.

I'm trying not to drop this problem on you guys. I purchased the Chelsio cards ... and maybe they'll just "solve" things.
If NOT ... okay, fine, I'll start providing all the symptoms. But I don't want to waste people's time if it's solved by just getting decent cards.
Still ... if you want some of those details... the bottom paragraph is a partial summary of symptoms, but again -- hopefully spending [some] money (not much, I hope) fixes it.


I've been so confused about where to start on this that THREE YEARS have ticked by with me throwing my arms up in the air...
I have at times succeeded in getting 10Gb to work, but it still couldn't cross 200MB/s ... and now I can't even get it to work.
I think it's because some of my gear is old and some is incompatible. Something did work at one point (I'm thinking either an ATTO card or a different Chelsio card ... most likely another Chelsio I have). Right now, even though it says the 10Gb IS connected, if I disconnect my 1GbE it "stops" working -- so I suspect the 10Gb isn't really working at all. But as I said, even when it did work, it was terrible... which is why I'm trying to solve this.
 
Elliot Dierksen

Joined
Dec 29, 2014
Messages
1,135
AH !!! I see!!!! So it's the SWITCH that imposes the transceiver compatibility restrictions...?? NOT the NIC..?
Yes, exactly. There may be ways to get around it on the card, but I am not aware of any. The Chelsio cards have worked very well for me, but they were quite picky about the optics at 10G. Things appear much more tolerant at 40G, which surprises me. I don't have much experience with 40G, which is why I wanted to mess with it in my home/lab network. 10G is very mainstream now.
IF it's based on the switch..? I'm assuming consumer gear makes it way less of an issue.
I don't know that I would make that assumption. I was helping a friend with his Ubiquiti switch, and it was quite particular about the DAC cables.
The WORST kind of tech problem ... is when it "kinda works" ... because you can't just replace parts to find a solution.
Indeed. I fought with the 40G stuff in my network for months before finally being able to isolate that one of the NIC's wasn't working quite right. It mostly worked, which was absolutely maddening!
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
AH !!! I see!!!! So it's the SWITCH that imposes the transceiver compatibility restrictions...?? NOT the NIC..?
It's both. Your transceivers need to match up on both ends of the cable: the switch end and the NIC end. Some vendors impose 'lock-in', meaning their gear only works with a few transceivers: their own, or their own plus perhaps their business partners'. But your switch needs to work with the transceiver you plug into it, and your NIC needs to work with the transceiver you plug into it.

This is why you sometimes need different transceivers on each end of the cable.
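On the FreeNAS side, a rejected module or cable usually just shows up as a dead link. A sketch of what to look for ('ix0' is an Intel X520 interface name, used here only as an example):

Code:
# Check the link state of the 10Gb interface
ifconfig ix0 | grep status
# 'status: no carrier' -> no link (bad/rejected cable or transceiver, or the far end is down)
# 'status: active'     -> link is up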
I'm assuming you guys are all networking experts ... (I'm not even a beginner) ... how big of a problem is it if I'm using a Layer 3 switch ... for 10GbE and my Airport Routers for routing (Extreme and an Express) ... but using only one subnet despite connecting both to the FreeNAS server..?

I do NOT KNOW how ... And even if I managed to do so, it'd still be an L3 switch which can't issue IPs anyway, right..?
Routing takes place at layer 3, where you send packets to a different subnet. Traditional layer 2 switching just sends packets to a switch port based on the destination MAC address. Chances are you're using your D-Link DXS-1210-12SC as a layer 2 switch.

How have you configured your Airport Extreme and Airport Express devices? Is one of them your internet gateway and the other simply a wireless access point? Or do you have a dual-WAN setup? In other words: two different ISPs?

You said you connect both to your FreeNAS server, which implies that it has two IP addresses. What was your goal with this setup?

Connecting the two Airport devices to your FreeNAS server is probably the cause of your network problems. It should only need one network connection.
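With more than one connection plugged in, it's also worth confirming which interface your traffic actually uses (a sketch; the address is a placeholder):

Code:
# On FreeNAS: show the routing table and which interface holds the default route
netstat -rn
# From your client: confirm which of the server's addresses you're actually hitting
ping 192.168.1.10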

I've been so confused about where to start on this that THREE YEARS have ticked by with me throwing my arms up in the air...
I have at times succeeded in getting 10Gb to work, but it still couldn't cross 200MB/s ... and now I can't even get it to work.
Sadly, upgrading to 10Gb doesn't mean that you're going to get transfer rates anywhere close to 10Gb line rates -- even on other machines equipped with 10Gb. You'll find out that your bottleneck has simply moved from your network to (most likely) your disk speed. Still, you should easily saturate multiple 1Gb connections, and with decent hardware you should get something on the order of 400MB/s or greater on devices connected with 10Gb.
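If you want to see where the bottleneck sits, test the pool locally first, with no network in the path. A rough sketch (the dataset path is a placeholder; note that writing zeros to a compressed dataset will inflate the numbers, so use a test dataset with compression off, or copy real data):

Code:
# Sequential write straight to the pool
dd if=/dev/zero of=/mnt/tank/test/ddfile bs=1M count=20000
# Sequential read back (re-run after a reboot if you want to defeat the ARC cache)
dd if=/mnt/tank/test/ddfile of=/dev/null bs=1M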

The real benefit of the higher bandwidth is that you can share it among multiple simultaneous connections. You and your family can all connect via wireless or 1Gb LAN connections and still get decent transfer rates. The data pipe is bigger.
 
Last edited:

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
...This is why you sometimes need different transceivers on each end of the cable.

Wow, really..? like, different mfr transceivers on each end..? That I def. didn't know.

Routing takes place at layer 3, where you send packets to a different subnet. Traditional layer 2 switching just sends packets to a switch port based on the destination MAC address. Chances are you're using your D-Link DXS-1210-12SC as a layer 2 switch.

That's what I meant to say ... layer 2. Off by one error. Thanks ... as I said, I suck at networking.
(Yes, I know, that only applies to programmers, but still.) :)


How have you configured your ...devices? ...one as internet gateway & the other a wireless access point? Or dual-WAN? (...two ISPs?)

Definitely a single ISP, one router and one repeater, which is how those two products are typically used. Here's the setup:


You said you connect both to your FreeNAS server, which implies that it has two IP addresses. What was your goal with this setup?

Please see attached pictures of Airport setup (which will be changing very soon).

Actually it has 3: there are two Gig-E ports on the Dell server (one of which, WEIRDLY, only gives me 100Mb, and that connection is lost if I disconnect the 1Gb connection anyway). The third is the 10Gb, which I need to get working properly since I do data recovery. If I recover a RAID setup, it gets very clunky trying to connect 8 drives to my machine with the expensive hardware (it's close to $20,000 for the PC-3000 Express card with RAID add-ons, and when they release an NVMe add-on I expect it will cost more than $20,000).

I'd like to make dd images of the known-good working drives to my NAS (not repaired drives, which may have gone offline out of sync -- depending on the parity level and when the repaired drive originally dropped out of the set -- but that's not germane to this question). Obviously these configurations do NOT "merely" consist of 3-6 drives... they could be 30 drives! Being able to host the images for my recovery software on something 'quick' could cut the job time by 10 or more days. That's 10+ days of precariously strewn drives, expensive resources consumed, and services delayed. And that's independent of the fact that most of my data is on NVMe storage. The ONLY spinning drives I use for ANYTHING are arrays of 8x 7200rpm drives, and that's it. I do now have 2x x16 HBAs, each designed to host 4x NVMe x4 drives, and 4x 4TB NVMe drives, which I may replace with 8TB NVMe drives -- picking up 8 of them as I find good prices -- but that will be the 2nd project I begin after I get this array of 8 drives worked out.

As far as whether 8x 7200rpm drives can 'saturate 10GbE' ... 2 things: 1. They wouldn't have to; they need only exceed 1GbE to warrant using a much larger pipe, no..? And 2. I had started to concede that maybe spinning drives at decent levels of parity will be super slow... to which everyone, and I mean EVERYONE, reiterated that 200-300MB/s means there's something WRONG with my setup. Especially if it's 1 or 2 transfers -- whether media (personal use), a sparse image (personal use), or a dd image (business use) -- as all of those are seen by drives and controllers as a single file, irrespective of the data they're comprised of.

People with 4x 7200rpm drives say they get 600+ MB/s ... and 8x gets 800+ MB/s in RAIDZ2 arrays. And while even 300MB/s would warrant switching to a faster network, I also move data between peers on NVMe drives (seriously, I have about 15 drives that are 1-2TB NVMe, and about 30-40 between 128GB and 512GB, AHCI and NVMe, from MacBook Pros at my retail store -- upgrading drives for clients, etc.). I NEVER use spinning drives except as targets for client data, as clients don't share my philosophy.

Ultimately, I DO expect (or at LEAST... hope) to get ~800MB/s on my RAIDZ2 of 8x 10TB SAS-2 7200rpm drives... and I have 4x Dell T320 machines to set up for myself, or for customers who've expressed interest in them, if I figure out how to get them working properly. As for my NVMe..? That will depend on what turns out to be required to keep up; I may switch those machines to SFP28 (and buy a switch accordingly -- used telecom gear, as I expect that will get close without the high wattage of older-generation switches or NICs).

Set up in RAID-1 just to test the 4x NVMe drives on my HBA (HighPoint SSD7120), I got 5.5GB/s... which, as each drive individually gets about 3GB/s, doesn't even seem that high. And with 8 of them..? I'd imagine they'd have no problem saturating QSFP+ ... no?


Connecting 2x Airport devices w FreeNAS is probably the cause of your network problems. It should only need one network connection.

I'm assuming this may have been predicated on a given which isn't actually a given ... but also, I am planning to replace my AP Ex with another 802.11ac router which has 8x GbE connections ... and I was going to ask you about that as well, as you clearly have the expertise and have been very generous with it:

I was considering the Netgear R9000 X10 ... which has 8 ports and an SFP+ port. With all of those ports I obviously won't need to extend my router (which was part of why I had extended it). And I'm assuming that SFP+ port was designed with the presumption that the router would handle issuing IPs and a layer 2 switch (thank you again for correcting my nomenclature) would handle port/MAC forwarding.



..upgrading to 10Gb ≠ transfer rates ~10Gb line rates...your bottleneck has moved from the network to disk speed...but, you should saturate 1GbE up to 400+ MB/s or greater via 10Gb...& multiple simultaneous connections 1Gb LAN, & still get decent transfer rates due to a bigger pipe.

Re: spinning disks, assuming they're the basis of the bottleneck in the system we were speaking of above:
Does your synopsis of how I should revise my expectations still hold..?
If so, are there other steps (without having to spend ridiculous money) which would yield something closer to what others say their networks get..?

I'm VERY VERY grateful for all of your help. Truly. I hope none of my wording seems smug or arrogant. And if anything did, it's unintentional and unfortunate. :) I know how ignorant I am in this realm. :)
 

Attachments

  • 1597947499549.png (3.3 KB)
  • Extreme.png (619.3 KB)
  • Extender.png (578.3 KB)
  • Express - Networking off.png (648.8 KB)

kspare

Guru
Joined
Feb 19, 2015
Messages
508
One advantage of DAC is also the potential for lower latency: there is no electrical-to-optical conversion taking place.

We strictly run DAC cables; we just use Cisco cables for our 40gb cards and Fiber Store 10gb cables for all our 10gb cards...

DAC cables are as thin as a Cat5 cable....

Not sure why people are giving them such a bad rap.
 

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
Chelsio cards have worked very well for me, (they're quite picky about 10G optics)
40G appears more tolerant (which surprises me).

Apparently the cables I have are already correct:
all my NICs & the switch are all Cisco compatible.
Truly though ... AMAZING level of service.
They asked their engineers my questions!! lol.
I will definitely order from them any time I have an excuse to in the future.

I was originally tempted by SFP28 NICs ... as I'm working towards an NVMe FreeNAS setup...

I think the deal I got on 2x Chelsio T520-SO-CR @ $120 ea was too good to pass up.
More importantly, I hope it helps whittle down the causes of my bottleneck(s). (I HOPE!)...


Took months to isolate the issue in a 40G network that mostly worked; (absolutely maddening)!

Yes, it's freaking RIDICULOUS! lol. Why the hell!?




I was swamped yesterday, I'm sorry I couldn't reply until today. Thank you SO very much...

(FYI, when I tried deleting the message it'd open/close/open/close...
Forcing a race of how quickly I could move my mouse to hit the 2nd delete button. lol.)
 

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
One advantage of DAC is also the potential for lower latency: there is no electrical-to-optical conversion taking place.

We strictly run DAC cables; we just use Cisco cables for our 40gb cards and Fiber Store 10gb cables for all our 10gb cards...

DAC cables are as thin as a Cat5 cable....

Not sure why people are giving them such a bad rap.

GREAT info. I would've thought just the opposite, but now that you say why, it makes total sense. Thank you.
(You've inoculated me against another of my fallacious perceptions. :) )
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
Apparently the cables I have are already correct:
all my NICs & the switch are all Cisco compatible.
Truly though ... AMAZING level of service.
They asked their engineers my questions!! lol.
I will definitely order from them any time I have an excuse to in the future.


I've found my 10gb cards to be more compatible. My Chelsio cards and my Cisco Nexus 40gb switch only worked with Cisco cables.
 

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
I've found my 10gb cards to be more compatible. My Chelsio cards and my Cisco Nexus 40gb switch only worked with Cisco cables.

I think I was unclear: the compatibility claim wasn't my opinion but rather the response of the employees at the FibreStore ... who replied saying Cisco cables would be optimal for ALL of the NICs on the list I provided them ... which included:

Chelsio (old and new)
Myricom
ATTO

... as far as my opinion went, I was the one worried they weren't optimal. :)
Is that what you thought I'd said, or was I right in thinking I was ambiguous about that..?
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
I think I was unclear: the compatibility claim wasn't my opinion but rather the response of the employees at the FibreStore ... who replied saying Cisco cables would be optimal for ALL of the NICs on the list I provided them ... which included:

Chelsio (old and new)
Myricom
ATTO

... as far as my opinion went, I was the one worried they weren't optimal. :)
Is that what you thought I'd said, or was I right in thinking I was ambiguous about that..?
I'm just commenting on my experience as well lol.

I couldn't use FS Cisco-compatible cables on my 40gb gear... I had to use genuine Cisco cables.... kinda odd. But I could use FS 10gb fan-out cables from a 40gb port..... go figure... basically making 4x 10gb cables from 1 40gb port.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
... Sadly, upgrading to 10Gb doesn't mean that you're going to get transfer rates anywhere close to 10Gb line rates -- even on other machines equipped with 10Gb....
Allow me to quibble a little. I wholly agree that moving to a 10GbE data pipe will likely uncover the next slowest link in the chain from the data on the NAS disk to your computer.

However, in my experience, the lower latency associated with 10GbE is noticeable for tasks like rsync, where a hot L2ARC can significantly boost metadata throughput and hence significantly shorten the time it takes for my NAS content to get backed up. This metadata exchanges at pretty low rates (in the 10's of MB/s, if that), so it's not a question of pipe width. Perhaps rsync is just inefficient in how much metadata it requests?

With the advent of FreeNAS 12.x, special VDEVs have the potential to significantly boost system responsiveness, making 10GbE attractive even in a single-VDEV pool. Ditto persistent L2ARCs. Granted, as usual, the use case matters, and not every application will benefit from special VDEVs.
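(For anyone who wants to try the persistent L2ARC once they're on 12.x: as I understand it, it's governed by an OpenZFS tunable. A sketch below; the tunable name is worth double-checking against your release:)

Code:
# Check whether L2ARC persistence (rebuild after reboot) is enabled
sysctl vfs.zfs.l2arc.rebuild_enabled
# Enable it
sysctl vfs.zfs.l2arc.rebuild_enabled=1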
 

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
I'm just commenting on my experience as well lol.

I couldn't use FS Cisco-compatible cables on my 40gb gear... I had to use genuine Cisco cables.... kinda odd. But I could use FS 10gb fan-out cables from a 40gb port..... go figure... basically making 4x 10gb cables from 1 40gb port.

Oh oh oh. I see. Cisco-compatible vs. Cisco-OEM...

You know there's a tool (pretty cheap) that lets you RE-WRITE the vendor coding on transceivers..?
DEF cheaper than what those cables can get up to.
 