U.2 HBA Recommendations

Joined
May 5, 2021
Messages
2
I didn't see anything on here about whether or not Highpoint (or similar) HBAs were supported.

I'm looking for recommendations for one so I can use the drives I've got.

Secondly, what is the recommended HBA for U.2 NVMe drives?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
In general, very few non-LSI based HBAs are suitable for TrueNAS.

How many U.2 NVMe drives are you planning to use?

Some of the latest LSI HBAs are Tri-Mode: SATA, SAS & NVMe. U.2 NVMe drive bays are generally wired for 4 lanes of PCIe, and the Tri-Mode adapters can multiplex (aka slow down) NVMe drives to support more of them than the PCIe slot has bandwidth for, like 4 x4-lane NVMe drives on an 8-lane LSI Tri-Mode controller.
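To put rough numbers on that (assuming PCIe 3.0, about 0.985 GB/s per lane after encoding overhead): four x4 drives could collectively move roughly 4 × 4 × 0.985 ≈ 15.8 GB/s, while the controller's x8 host connection tops out around 8 × 0.985 ≈ 7.9 GB/s, so when all four drives run flat out they share that uplink at roughly 2:1 oversubscription.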

If you just want to support 2-4 (you list "U.2 NVMe drives", plural), then a simple PCIe card with connectors for cables to your U.2 NVMe drive backplane will work.

If you need lots and don't have the PCIe slots for them, you probably need something more significant. For example, this beasty supports up to 8 x4, 16 x2, or 32 x1 NVMe SSD connections.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I didn't see anything on here about whether or not Highpoint (or similar) HBAs were supported.
I mean, the first four hits I get on Google for "TrueNAS Highpoint" range from "yeah, this is a lot of trouble" to "Nope."
Secondly, what is the recommended HBA for U.2 NVMe drives?
None. Sure, you could buy LSI Tri-Mode HBAs and Tri-Mode expanders, if you wanted SAS performance from NVMe drives with USB reliability, all at prices not even clueless enterprise customers would pay. I doubt that sounds appealing to you. Then again, Tri-Mode only really works with U.3 drives, since on U.2 the PCIe lanes are wired separately from SAS/SATA.

Fortunately, PCIe brings a lot of sanity to SSDs, so you might want to provide more details so someone can suggest something more specific.
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
I'm going to get flamed here. But I swear, I am not crazy!
Everything I have read about Tri-Mode adapters says that they are quirky at best. This thread has some very good information on it:

I've had very good luck with this particular vendor and manufacturer on AliExpress for this purpose

They sell it with cables too, if you don't have a backplane to support it

Get a card like that one, which doesn't require PCIe bifurcation but just natively switches PCIe instead of going through a Tri-Mode adapter.
Read here: they are DRIVERLESS, which makes them just appear as another PCIe device and keeps them closer to the CPU.
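If anyone wants to sanity-check that claim on their own box: a switch-based card and the drives behind it should just show up in the ordinary PCIe tree, with no vendor driver involved. A minimal check on Linux, assuming pciutils and nvme-cli are installed:

Code:
# show the PCIe topology as a tree: a switch card appears as a set of bridges
# (e.g. a PLX/Broadcom PEX upstream port) with NVMe controllers on its downstream ports
lspci -tv

# the drives themselves enumerate as plain NVMe controllers and namespaces
lspci -nn | grep -i 'non-volatile'
sudo nvme list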

Going above PCI-E Gen 3 will require some more nuance.
 

DenisInternet

Dabbler
Joined
Jun 14, 2022
Messages
28
I'm going to get flamed here. But I swear, I am not crazy!
Everything I have read about Tri-Mode adapters says that they are quirky at best. This thread has some very good information on it:

I've had very good luck with this particular vendor and manufacturer on AliExpress for this purpose

They sell it with cables too, if you don't have a backplane to support it

Get a card like that one, which doesn't require PCIe bifurcation but just natively switches PCIe instead of going through a Tri-Mode adapter.
Read here: they are DRIVERLESS, which makes them just appear as another PCIe device and keeps them closer to the CPU.

Going above PCI-E Gen 3 will require some more nuance.
Sorry for replying to an old post, but does the PLX chip on one of these cards bottleneck the NVMe speeds?
I am currently using a 9620-16i with 4 NVMe drives; the card is quite expensive new, but it works well. I did notice that the HBA creates a layer between the NVMe drives and the CPU, which is not ideal, and I am not sure whether this affects performance. If the cards you mentioned are reliable and don't bottleneck speeds, that would be a huge saving.
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
Sorry for replying to an old post, but does the PLX chip on one of these cards bottleneck the NVMe speeds?
I am currently using a 9620-16i with 4 NVMe drives; the card is quite expensive new, but it works well. I did notice that the HBA creates a layer between the NVMe drives and the CPU, which is not ideal, and I am not sure whether this affects performance. If the cards you mentioned are reliable and don't bottleneck speeds, that would be a huge saving.
The PLX chip has plenty of lanes.
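For a rough bandwidth sanity check (assuming a Gen 3 x16 uplink into the switch, about 0.985 GB/s per lane): the uplink carries roughly 15.8 GB/s and a Gen 3 x4 drive tops out around 3.9 GB/s, so four drives can run at essentially full speed simultaneously; only with eight or more drives all streaming at once does the uplink become a roughly 2:1 bottleneck. Either way, each drive still negotiates its full x4 link to the switch.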
 

njhilliard

Cadet
Joined
Mar 16, 2024
Messages
5
I have used DiViling and Linkreal U.2 PCIe 3.0 x16 cards with 8 separate MiniSAS hookups to U.2 drives. Most cables that go directly to your U.2 drives are a problem and make my system freeze or lock up. I have found that using an Icy Dock backplane (either the 6-bay or the 4-bay 5.25" unit) and Slimline SAS-to-MiniSAS cables from the HBA card works.

I have tried the LSI/Broadcom 9600-24i PCIe 4.0 card, and it was recognized with the Linux lspci command; it's just that none of the U.2 drives were recognized. Don't know if this is a PCIe 3.0 motherboard problem with a PCIe 4.0 HBA card or what. Would love some opinions on this. I thought they should be backwards compatible.
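A few things worth checking from the Linux side, as a sketch only: I'm assuming the 9600-24i is driven by the mpi3mr driver (as the 9600 series normally is) and that drives behind a Tri-Mode HBA surface as ordinary block devices rather than under nvme list.

Code:
lspci -nn | grep -iE 'broadcom|lsi'        # is the HBA enumerated at all?
sudo lspci -s <bus:dev.fn> -vv | grep -iE 'lnkcap|lnksta'
    # negotiated link width/speed; a Gen 4 card in a Gen 3 slot should simply
    # train down to 8 GT/s rather than lose its drives
sudo dmesg | grep -i mpi3mr                # did the driver load and report any attached devices?
lsblk                                      # drives behind the HBA should appear as block devices here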

On the other hand, the LSI/Broadcom 9305-24i HBA card is recognized and works great with SATA drives (using cheap Leven SSDs and some nicer Samsung SATA SSDs).
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Most cables that go directly to your U.2 drives are a problem and make my system freeze or lock up
Sounds like you need retimers in there, rather than a purely passive setup. Or a PCIe switch, depending on what you're trying to do, exactly.
tried the LSI/Broadcom 9600-24i
Pure masochism. Don't.
it's just that none of the U.2 drives were recognized.
You need the correct cables (expensive and hard to get right) and you need the card to be supported - it's in the "TBD" bucket for now.
Don't know if this is a PCIe 3.0 motherboard problem with a PCIe 4.0 HBA card
No, at least it's not likely.
 

njhilliard

Cadet
Joined
Mar 16, 2024
Messages
5
Eric,
This is what I am trying to do. I got lucky enough to buy 8 15.36 TB Intel U.2 PCIe 3.0 drives and 2 30.72 TB U.2 PCIe 4.0 drives, used from enterprise servers, with about half of their life left on them. I'll never read/write enough to use up all of their life unless they just outright malfunction. But since I have them, I'm trying to hook them up in one server.

So I have a Thermaltake Z71 case with some Icy Dock U.2 backplane 5.25" bays, and I have tried direct connection to a Linkreal 8-port U.2 to PCI Express x16 SFF-8639 NVMe SSD adapter (SFF-8643 Mini-SAS HD 36-pin connectors, PLX8749 chipset) over MiniSAS. Initially it worked running some drives directly over MiniSAS (SFF-8643) to SFF-8639 cabling. The ones which work without locking my PC up are the Icy Dock-connected ones; the ones on direct cabling (i.e. not in the Icy Dock but mounted in the case) freeze up my PC running Kubuntu. It really didn't do that while running Proxmox. I think some of my cables are bad, so the next attempt is to put them all in the Icy Dock. Does that make sense?
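One way to test the bad-cable theory, just as a sketch (the exact messages vary by kernel and distro): marginal PCIe cabling usually leaves link errors or NVMe controller resets in the kernel log around the time of a freeze, and often shows up as a downgraded link.

Code:
sudo dmesg | grep -iE 'aer|pcie bus error|nvme.*(timeout|reset)'
    # corrected/uncorrected link errors or NVMe resets point at cabling or
    # signal integrity rather than a bad drive
sudo lspci -vv | grep -iE 'non-volatile|lnksta'
    # lists NVMe controller headers and LnkSta lines; check that each NVMe
    # device negotiated its full x4 width and expected speed, since a flaky
    # run often trains down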
 

njhilliard

Cadet
Joined
Mar 16, 2024
Messages
5
So I'll be getting three 8i Slimline SAS-to-MiniSAS cables to hook them up through the Icy Docks. Still waiting on the new cables to get here. And by the way, it doesn't matter who you buy through (Amazon, Newegg, eBay); they all come from China and take at least 2-3 weeks to arrive.
 

njhilliard

Cadet
Joined
Mar 16, 2024
Messages
5
I have fun tinkering with this stuff, watching Level1Techs, Craft Computing, etc. I have figured out more of what doesn't work than what actually does work.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
The IcyDock cages are known to be a bit on the dodgy side, although I can't really understand why they're as dodgy as people report them to be.
 

nabsltd

Contributor
Joined
Jul 1, 2022
Messages
133
The IcyDock cages are known to be a bit on the dodgy side, although I can't really understand why they're as dodgy as people report them to be.
It's the attempt at Tri-Mode support that causes problems.

I have the MB699VP-B and I just connect the SFF-8643 cables from my motherboard or add-in cards straight to the back, and it works great. Drives run a bit hot, but I replaced the fans with 20 mm thick ones (I picked Noctua, but any 20 mm thick fan will do much better than the 10 mm thick fans the dock ships with). The V2 version of the same dock uses OCuLink instead, and it works fine if you run OCuLink without any conversion cables (just as mine has issues if you try to convert from OCuLink to SFF-8643).

It's the V3 version of the dock that adds Tri-Mode support, and that's when everything pretty much breaks.

The problem is that when shopping for deals, "MB699VP-B" doesn't tell you what you will get, and people head towards the "V3" because they think it is improved in some way over the previous versions. So, in addition to Tri-Mode being a bad thing, Icy Dock isn't doing buyers any favors with their naming convention.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Most cables that go directly to your U.2 drives are a problem and make my system freeze or lock up. I have found that using an Icy Dock backplane (either the 6-bay or the 4-bay 5.25" unit) and Slimline SAS-to-MiniSAS cables from the HBA card works.
Mind that SAS/SATA expects 100 Ohm cables but PCIe expects 85 Ohm cables, so you indeed need to find the right cables.

I have a 16*2.5" SATA IcyDock with 4 SFF-8643 inputs. On my first tries some bays—always the fourth port in a set—were "missing". Troubleshooting revealed the problem followed some, but not all, MiniSAS HD cables between the dock and 9305-16i HBA; replacing the offenders with designated 100 Ohm cables solved the issue.
I can imagine a frustrated user putting the blame on the dock rather than on poor overall labelling of cables and SFF ports which potentially have dual use.
 