USB drives for booting

Louis2

Contributor
Joined
Sep 7, 2019
Messages
177
I would like to emphasize the underlying problem that is the reason behind this discussion.

It is so incredibly stupid that you need to spend a whole disk, and the related disk interface, on the boot device, when even a small disk is nowadays at least 10 times bigger than what is needed for the boot partition!!

And yes, I agree it is not optimal to use a USB disk as the boot device, but please take the reason for that away!!!
 

Redcoat

MVP
Joined
Feb 18, 2014
Messages
2,925
I would like to emphasize the underlying problem that is the reason behind this discussion.

It is so incredibly stupid that you need to spend a whole disk, and the related disk interface, on the boot device, when even a small disk is nowadays at least 10 times bigger than what is needed for the boot partition!!

And yes, I agree it is not optimal to use a USB disk as the boot device, but please take the reason for that away!!!
This discussion doesn't have anything to do with the boot device - it relates only to USB-attached data pool disks (see the subject/title).
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
It is so incredibly stupid that you need to spend a whole disk, and the related disk interface, on the boot device.
I fundamentally disagree. It would be incredibly stupid to waste time and massively* increase the maintenance burden on everyone to support a niche use case of no value to paying customers and limited value to the wider community. A basic NVMe SSD costs less than 35 bucks with no special deals, frequently dipping below 30 bucks with deals.

Anyone who really wants this is free to hack their system to their liking (and some people do!), because it is their system.

Edit:
* The massive increase is not from code maintenance, but from recovering from the many additional foot-shooting incidents that would start showing up here.
 
Last edited:

rvassar

Guru
Joined
May 2, 2018
Messages
972
A basic NVMe SSD costs less than 35 bucks with no special deals, frequently dipping below 30 bucks with deals.

Actually, less than half that. The short 2230s are about the same price as a quality thumb drive.


As far as I know you can still boot from USB thumb drives. I don't do it because I kept burning them out. Even mirrored, I got sick of the maintenance & $$ lost. But I was using cheap thumb drives. I may revisit it when I move to Scale, to free up ports, but I'll be sure to use better quality devices.

But again, the guidance against USB is for datastore drives, and is clearly stated in the title of this thread.
 

Louis2

Contributor
Joined
Sep 7, 2019
Messages
177
Whatever, the problem is NOT COST. The problem is that a normal PC has a very limited number of hard disk interfaces. Probably:
- two NVMe only
- four SATA
So giving one interface away for something stupid is a pity, and it drives you even more in the direction of external USB devices (which can be NVMe devices :wink: )
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
It is so incredibly stupid that you need to spend a whole disk, and the related disk interface, on the boot device, when even a small disk is nowadays at least 10 times bigger than what is needed for the boot partition!!
I echo what @Ericloewe already wrote. The argument about wasting disk capacity would be true for someone who earns 50 cents per hour. But even at minimum wage we are talking about 2-3 hours of work time, and at professional rates it comes down to 15 minutes or less. Adding the complexity inherent in checking partition tables, taking free space into account, etc. is obviously a really stupid idea relative to that amount of savings.

If someone wants to do this as a personal research project that is of course fine. And I have done comparable things, even quite recently. But that was a conscious decision and it is an entirely different discussion.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Whatever, the problem is NOT COST. The problem is that a normal PC has a very limited number of hard disk interfaces. Probably:
- two NVMe only
- four SATA
So giving one interface away for something stupid is a pity, and it drives you even more in the direction of external USB devices (which can be NVMe devices :wink: )
If that is the number of connections your motherboard has and it is not enough for you, it seems to me that you purchased the wrong motherboard.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
Yes, why buy consumer-grade motherboards when you can get a new Supermicro X10SDV-2C-7TP4F with 18 SATA ports, 2 SATA DOMs, built-in SFP+, etc. for just $551? Plenty fast for HDD SOHO storage, with two PCIe 3.0 x8 slots if you need even more capacity. Bulletproof by design, low power, what is not to like?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Two cores is on the low end of things, but I'm sure other models with upgraded CPUs are within reach on the used market.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
On the low end if you use SSDs. But the benefit of this board vs. my higher-core D-1537 is that the max clock speed of the D-1508 is higher, so it's a better fit for SMB, which is single-threaded.

Given my luck with attempts at VMs, jails, etc. the D-1508 would have been a much better fit for me than the D-1537. The D-1537 may yet redeem itself with Plex, if and when I decide to implement it. Everything else, like BlueIris on Windows, ZoneMinder, was a bust for me.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
And yes, I agree it is not optimal to use a USB disk as the boot device, but please take the reason for that away!!!

The reason that this is not optimal is something beyond the control of anyone here on these forums, at iX, etc. It's that USB boot devices typically lack a decent controller with wear leveling, and that other catastrophically bad design choices cause further problems. So unless you have some magical way for us to "take the reason for that away", it isn't going to happen.

It's just like how some of us fondly recall the days when UNIX kernels fit in less than 1 MB of RAM, but that is no longer feasible due to the growth in code size. Sometimes things just aren't practical even if we'd like them to be.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Whatever, the problem is NOT COST.
No, the problem is that you've joined this thread to begin to argue, in a quite inflammatory manner, something that has nothing whatsoever to do with this thread. And despite having been told that, you persist.

Edit: If you really feel the need to argue this point, this resource at least addresses the question:
 
Last edited:

rvassar

Guru
Joined
May 2, 2018
Messages
972
Whatever, the problem is NOT COST. The problem is that a normal PC has a very limited number of hard disk interfaces. Probably:
- two NVMe only
- four SATA
So giving one interface away for something stupid is a pity, and it drives you even more in the direction of external USB devices (which can be NVMe devices :wink: )

This is a motherboard choice made by the people that engineer the "normal PC" motherboards. Most of us here use server motherboards with SAS/SATA HBA controllers because they provide the ports & resources we need.

With that in mind, you seem to have a couple misconceptions:

1. "harddisk interfaces" should be storage device interfaces. There are several to chose from. ZFS originated on parallel SCSI & FC/AL attached SCSI, most people now days use SATA, SAS, NVMe, and yes even USB. USB is not recommended for storage pools here. Booting from USB used to be recommended until just recently. This storage pool recommendation has solid reasoning and a couple decades of practical experience (aka bad luck) behind it. It's not going to change because you want it to. If you need more ports, get an PCIe HBA. You can even get external SAS ports and toss the drives in a separate case, just like you expect with USB. All SAS HBA's will talk to SATA drives. There's some gotcha regarding mixing and cable lengths, but that's all covered in one of the guides.

2. NVMe devices are PCIe-attached devices. That's it: a specification and some PCIe lanes (see the sketch after this list). If you want more NVMe devices, grab an x16 paddle board and stick four of them in your GPU slot. If your board doesn't support PCIe bifurcation, well... you bought the wrong motherboard. Most M.2 sticks need four PCIe lanes. There are other NVMe form factors, and even PCIe switches that will allow you to add more devices. But again, your motherboard has to support the features required to implement switched PCIe. A consumer chipset likely won't allow it; it's only been available in enterprise kit for a couple of years.

3. When you stick an NVMe M.2 stick in a USB enclosure, you're adding another device that implements its own PCIe bus and presents it as USB-attached storage, with all the problems that come with USB-attached storage.
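
To make point 2 concrete, here is a minimal sketch of what "NVMe is just PCIe" looks like from the OS side. It assumes a Linux box (e.g. SCALE) with Python 3 and the standard sysfs layout; nothing in it is TrueNAS-specific, and the paths are just the usual kernel conventions:

```python
#!/usr/bin/env python3
"""Minimal sketch: resolve each NVMe controller to its PCIe function.

Assumes a Linux system (e.g. SCALE) with the standard sysfs layout;
nothing here is TrueNAS-specific.
"""
import glob
import os

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme[0-9]*")):
    # The 'device' symlink points into the PCI hierarchy,
    # e.g. .../pci0000:00/0000:00:1d.0/0000:3c:00.0
    pci_path = os.path.realpath(os.path.join(ctrl, "device"))
    pci_addr = os.path.basename(pci_path)  # e.g. "0000:3c:00.0"

    model = "unknown model"
    model_attr = os.path.join(ctrl, "model")
    if os.path.isfile(model_attr):
        with open(model_attr) as f:
            model = f.read().strip()

    print(f"{os.path.basename(ctrl)} -> PCIe function {pci_addr} ({model})")
```

Each controller resolves to a plain PCIe function, exactly as a GPU or an HBA would.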
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Booting from USB used to be recommended until just recently.
We started recommending against USB sticks as boot devices with the release of FreeNAS 9.3, which made the boot device a live ZFS pool. FreeNAS 9.3 was released in 2014.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
But again, your motherboard has to support the features required to implement switched PCIe.
PCIe switches are a very basic part of the specification, in part due to the legacy of conventional PCI, which already had bridges (to join two PCI busses). Switches, at the protocol level, are not fundamentally different. And the systems present multiple busses anyway, especially PCIe systems, so this is one of those areas of the stack that sees a lot more exercise than one might think at first glance.

It is very unlikely that a system that supports PCIe would have trouble dealing with a switch.
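
As a rough illustration of how ordinary bridges and switches are, here's a small sketch (Linux sysfs assumed, Python 3, nothing platform-specific) that counts the PCI-to-PCI bridge functions already present in a system; class code 0x0604 is the bridge class that root ports and PCIe switch ports report:

```python
#!/usr/bin/env python3
"""Sketch: count the PCI-to-PCI bridge functions already in a system.

Assumes Linux sysfs; class code 0x0604xx is the bridge class that
root ports and PCIe switch ports report.
"""
import glob

bridges = []
for class_attr in glob.glob("/sys/bus/pci/devices/*/class"):
    with open(class_attr) as f:
        dev_class = f.read().strip()  # e.g. "0x060400"
    if dev_class.startswith("0x0604"):
        # The device address is the parent directory name, e.g. "0000:00:1c.0"
        bridges.append(class_attr.split("/")[-2])

print(f"{len(bridges)} PCI/PCIe bridge function(s):")
for addr in sorted(bridges):
    print("  ", addr)
```

Even a modest desktop usually turns up several, because the root ports themselves are bridges.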

When you stick an NVMe M.2 stick in a USB enclosure, you're adding another device that implements its own PCIe bus and presents it as USB-attached storage, with all the problems that come with USB-attached storage
I have a different take. With the advent of USB 3.x and UASP, the host side of the equation is a lot less sucky than back in the day. It's not PCIe sort of fast, but it is SATA sort of fast. Thus, the limiting factor is the crap quality of USB devices, so a reputable USB/NVMe bridge (and those do exist, building on the legacy of the good USB/SATA bridges that started showing up a few years ago) paired with a reputable SSD can actually make for a very interesting solution. The key limitations then are cost and form factor, rather than deeper technical issues.
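
For anyone curious whether a given enclosure actually negotiated UASP rather than falling back to the legacy BOT protocol, here's a rough sketch, assuming a Linux host with usbutils installed and Python 3, that simply checks which kernel driver each mass-storage interface bound to:

```python
#!/usr/bin/env python3
"""Sketch: report whether USB mass-storage interfaces bound to the
UASP driver ('uas') or to the legacy 'usb-storage' (BOT) driver.

Assumes a Linux host with usbutils installed ('lsusb').
"""
import subprocess

topo = subprocess.run(
    ["lsusb", "-t"], capture_output=True, text=True, check=True
).stdout

for line in topo.splitlines():
    if "Class=Mass Storage" not in line:
        continue  # skip hubs, HID devices, etc.
    if "Driver=uas" in line:
        print("UASP        :", line.strip())
    else:
        print("usb-storage :", line.strip())
```

If a device shows up under the legacy usb-storage driver, you lose command queueing and the SATA-ish speeds mentioned above.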
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
We started recommending against USB sticks as boot devices with the release of FreeNAS 9.3, which made the boot device a live ZFS pool. FreeNAS 9.3 was released in 2014.
The shift was definitely a result of 9.3's ZFS root filesystem, but I think it took a few months to a year or two for the deep suckiness to sink in.
In my case, I upgraded to 9.3 with a pair of new Toshiba USB 2.0 16 GB flash drives. All was good at first, but a year in, updates took 40+ minutes and were getting unbearable.
So I'd say we haven't really recommended them since around 2016, which, fun fact, is closer to the release of FreeNAS 8 than to the present day.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I think it took a few months to a year or two for the deep suckiness to sink in.
OTOH, even pre-9.3, it was becoming obvious that USB sticks sucked as boot devices. But whether 2014 or 2016, it certainly wasn't "recently" that they were un-recommended.
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
OTOH, even pre-9.3, it was becoming obvious that USB sticks sucked as boot devices. But whether 2014 or 2016, it certainly wasn't "recently" that they were un-recommended.

I showed up in 2018, and it was still accepted practice that I quickly abandoned. But I'm old... 5 or 6 years ago is "recently". o_O
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
but I think it took a few months to a year or two for the deep suckiness to sink in.

I believe part of the issue was that they didn't enable noatime. The original boot strategy was a dual-image booter for flash that kept the UFS partitions read-only, which is largely impervious to flash write-endurance issues. But when they flipped to ZFS boot, I believe they failed to understand that a file opened for READ will get an inode atime update unless some other conflicting feature blocks it.

Most folks who have used UNIX systems are used to hard disks and don't particularly register the fact that reading a file also implies an atime update write. For most users of conventional UNIX this wasn't a problem, as HDDs had virtually unlimited endurance. However, over in the USENET server world we were seeing I/O rates of thousands of requests per second on midsize servers, with roughly half of that being atime writes. Once I identified this, I wrote a patch for FreeBSD to disable atime updates, which prompted the filesystem folks to do a more proper implementation that allowed it to be disabled with an fstab flag. That was sometime back in the mid '90s.

Anyway, the underlying issue is that developers generally don't understand all the implications inherent in a complex system, and when the switch was made to ZFS, it was assumed that the read-only flag and the multiple-boot-partition strategy were no longer needed. However, the read-only flag was doing double duty by protecting the flash from superfluous atime updates. I believe the first few versions of ZFS-boot-based FreeNAS went out the door without atime being disabled on root, which would have been devastating to the endurance of USB thumb-drive-style boot devices.
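
If anyone wants to see the effect for themselves, here is a tiny sketch that shows a plain read triggering an atime update (and hence a metadata write). Assumptions: any POSIX-ish system with Python 3; the path is just an example; and relatime or noatime mount options will change the outcome. On ZFS, "zfs get atime <dataset>" shows whether the behaviour is enabled for a given dataset.

```python
#!/usr/bin/env python3
"""Sketch: show that merely reading a file can cause an atime update
(i.e. a metadata write). Outcome depends on the mount options
(atime / relatime / noatime); the path below is just an example.
"""
import os
import time

path = "/tmp/atime-demo.txt"  # hypothetical test file
with open(path, "w") as f:
    f.write("hello\n")

before = os.stat(path).st_atime
time.sleep(2)  # make any timestamp change observable

with open(path) as f:  # a plain read -- no writes issued by us
    f.read()

after = os.stat(path).st_atime
if after > before:
    print("read updated atime -> the filesystem issued a metadata write")
else:
    print("atime unchanged -> noatime/relatime (or similar) is in effect")
```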
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
PCIe switches are a very basic part of the specification, in part due to the legacy of conventional PCI, which already had bridges (to join two PCI busses). Switches, at the protocol level, are not fundamentally different. And the systems present multiple busses anyway, especially PCIe systems, so this is one of those areas of the stack that sees a lot more exercise than one might think at first glance.

It is very unlikely that a system that supports PCIe would have trouble dealing with a switch.

Agreed, but I'm thinking you're going to have trouble with support on a retail motherboard. It won't be a hardware problem; a retail motherboard BIOS is almost certainly going to choke on it. Maybe it works if it's not in the boot path. My experience here definitely trends toward tightly integrated solutions.

I have been watching the eBay listings for the Sun/Oracle x8 switches... I've no experience with them, but they pop up every time I search, and they're under $50 these days. Probably need to start another thread...

I have a different take. With the advent of USB 3.x and UASP, the host side of the equation is a lot less sucky than back in the day. It's not PCIe sort of fast, but it is SATA sort of fast. Thus, the limiting factor is the crap quality of USB devices, so a reputable USB/NVMe bridge (and those do exist, building on the legacy of the good USB/SATA bridges that started showing up a few years ago) paired with a reputable SSD can actually make for a very interesting solution. The key limitations then are cost and form factor, rather than deeper technical issues.

It's still USB. You're still dealing with a bus that has to handle power overload, and things getting unexpectedly disconnected either by power constraints or human hands. SAS/SATA hot plug happens too, but not (usually) in entire chains. (*cough* dropped floor tile...)
 