Raid 10 setup help please

doox00

Dabbler
Joined
Feb 15, 2022
Messages
13
Hey all, I am new to TrueNAS and Linux in general. I set up a new server with Proxmox and am running TrueNAS SCALE in a VM. I have passthrough for my LSI HBA controller (IT mode) with six 16TB HDDs attached. I would like to set these up in RAID 10, but I am not seeing that option when I go to create a new pool. It lists all six disks but no RAID 10 option. How do I go about setting these up in RAID 10?

Thank you!
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
ZFS doesn't have RAID 10. That is a RAID technology, and ZFS is not RAID.
For the majority of uses raidz2 is recommended, but reading the fairly large pile of docs is also hugely recommended.
ZFS is an enterprise solution, and it assumes you have things like support, admins, and backups.
The closest ZFS equivalent to "RAID 10" is a pool of multiple mirror vdevs.
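If it helps to see it outside the UI, the rough CLI equivalent of that layout is below. This is only a sketch: the pool name "tank" and the sda-sdf device names are placeholders, and on TrueNAS you should build the pool through the web UI (which creates the same mirror-vdev layout for you) rather than by hand.

    # "RAID 10"-like layout: three 2-way mirror vdevs, striped together by ZFS
    zpool create tank \
        mirror /dev/sda /dev/sdb \
        mirror /dev/sdc /dev/sdd \
        mirror /dev/sde /dev/sdf

    # confirm the layout
    zpool status tank

In the pool creation screen, that means three data vdevs of 2 disks each with the mirror layout.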
My signature should have a list of some of the core reading material for running TrueNAS safely.
 

doox00

Dabbler
Joined
Feb 15, 2022
Messages
13
Hey, thanks for the reply. Is this what you were referring to? Would this give performance similar to striped drives? Thank you!

[Attached screenshot: Screenshot 2022-02-15 233139.jpg (proposed pool layout with three 2-way mirror vdevs)]
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
A ZFS pool is always striped across every vdev.
This will give you 3 striped vdevs, with each vdev being a 2-way mirror.
You can lose any 1 drive, or up to 3 drives as long as each is in a *different* vdev; if 2 drives in the same vdev are lost, the pool dies.
You will also lose half the space to redundancy.
Depending on what you are planning to store, raidz2 can give better performance characteristics AND more storage, as well as better overall redundancy (a raidz2 of 6 drives can lose *any* 2 drives).
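For comparison, a raidz2 of the same six disks would look roughly like this from the CLI (again only a sketch with placeholder pool and device names; in the TrueNAS UI you would simply pick the RAIDZ2 layout instead):

    # single raidz2 vdev of six disks: any 2 drives can fail
    zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

With six 16TB drives the space math is roughly: three 2-way mirrors give about 3 x 16TB = 48TB of raw capacity, while raidz2 gives about 4 x 16TB = 64TB, both before ZFS overhead.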
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
raidz2 can give better performance characteristics

RAIDZ2 generally performs more poorly, with one notable exception. Because a RAIDZ2 is typically composed of a large number of drives, 6-12, it may be able to outrun a similar mirror pool on write speeds.

Mirror vdevs are faster in almost every other way. Because each vdev is independent of the others, you have a much larger number of simultaneous IOPS that can be processed. For example, with a 12 drive pool arranged as six two-way mirrors, you can be doing six separate write operations to different disk regions simultaneously, or twelve read operations to different disk regions (because each side of a mirror can be servicing different read requests). The same twelve drives configured as a single RAIDZ2 vdev are limited to one operation at a time, or possibly two or three if the operations happen to be on very small blocks.
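If you ever want to watch that per-vdev parallelism for yourself, zpool iostat can break the numbers out by vdev while a workload is running (the pool name here is whatever you called yours):

    # per-vdev read/write ops and bandwidth, refreshed every 5 seconds
    zpool iostat -v tank 5

With a pool of mirrors you will see the operations spread across mirror-0, mirror-1, and so on.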

better overall redundancy

Not really true. While it is true that RAIDZ2 is more redundant than a mirror pair, it is important to remember that three-way mirrors are definitely a thing, and they give you the ability to lose up to two drives in any vdev. In a 12 disk pool, this really does mean that there is a scenario where you can lose EIGHT drives and still not lose the pool, though of course one component in each vdev needs to survive. You do, of course, only get a third of the raw space.

To the OP:

In general, while ZFS has some similarities to conventional RAID levels, there are also some significant differences. It may be a better idea to describe what your workload is, and get some more specific advice as to what would work best.

RAIDZ is optimized towards large sequentially accessed files and a small number of simultaneous users (ideally just one). It gives you great space efficiency and redundancy options, including RAIDZ3.

Mirrors are optimized towards random access and large numbers of simultaneous accesses. You actually do not get the full advantage of mirror pools until you have several (many?) simultaneous accesses going on. It is also somewhat easier to grow the size of a mirror pool, for various pool and vdev composition reasons.
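A quick way to see that difference for yourself is to benchmark with more than one concurrent job against a test directory on the pool. The fio invocations below are only a sketch, assuming fio is available on the system: the /mnt/tank/fiotest path is a placeholder, the test files should be deleted afterwards, and re-reading data you just wrote will largely be served from ARC (RAM), so treat the numbers as rough.

    # one sequential reader vs. eight concurrent random readers
    fio --name=seq1  --directory=/mnt/tank/fiotest --rw=read     --bs=1M   --size=4G --numjobs=1 --group_reporting
    fio --name=rand8 --directory=/mnt/tank/fiotest --rw=randread --bs=128k --size=4G --numjobs=8 --group_reporting

A pool of mirrors tends to hold up much better on the second test than a single RAIDZ vdev does.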

We're happy to help educate you if you describe what you want to do.
 

doox00

Dabbler
Joined
Feb 15, 2022
Messages
13
Hey there, thanks for the explanations. This is my home server and will hold movies/TV/music (the bulk of it) plus all my personal files, docs, etc. I previously had a Windows Server 2019 box with eight 4TB WD Blue SSDs attached to an LSI HBA (IT mode), but the performance was absolutely awful: transfers would run at a decent speed for a few seconds, then drop to 0 for 3 to 10 seconds, continue for a little bit, then repeat. I was using Storage Spaces with a RAID 5 config (I had two RAID 5 arrays, each with 4 SSDs). With the new server I just built (EPYC 7402P CPU, 256GB RAM, Supermicro PCIe 4.0 motherboard, etc.) I did not want to experience such bad performance again, so I decided to try Proxmox/TrueNAS. I needed more space, so I bought eight 16TB WD Gold drives: six for the network share and two to use for backups. I am going to wipe and sell the eight 4TB SSDs (only about a year old). I also have two 1TB NVMe drives in the server that are mirrored for my VMs, and Proxmox is installed on two 120GB SSDs that are also mirrored.

I have multiple PCs on my network and many other devices like phones, IP cameras, tablets, HTPCs, etc., so I wanted a fast server that will not get bogged down on the file sharing side of things. I ended up creating the 3 separate vdevs in the pool as pictured above last night and started moving my backup over to it. So far so good: I have a single 16TB drive attached to the TrueNAS VM that holds all my data, and it has been copying at close to its max read speed since last night (~250 MB/s) without any slowdowns. It is at ~200 MB/s right now for some reason; I am setting up Sonarr, Radarr, Lidarr, etc. again, so it could be doing something like re-importing my libraries. I dunno, lol.
 

doox00

Dabbler
Joined
Feb 15, 2022
Messages
13
Actually, the transfer is at about 100 MB/s now while still copying large video files, so it should be closer to 250 I would think. Any ideas on how to see why it is performing more slowly now?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I did not want to experience such bad performance so decided to try Proxmox/Truenas.

So why did you saddle yourself with Proxmox?

As often discussed, Proxmox is not really known for working well with FreeNAS/TrueNAS. Or, at least, as one of the people who ends up chatting with would-be virtualizers on these forums, it's the one that people come in reporting problems with.

There's a well-documented path to successful virtualization of FreeNAS and TrueNAS.


It's to use ESXi, with PCIe passthru. And just because the article is nearly ten years old does not mean that the information therein is stale.

But here's the other thing. Scale runs KVM just like Proxmox. Can you explain why it is that you don't just run Scale on the bare metal, and then run some KVM VM's for whatever workloads you planned to use Proxmox for? I would really love it if someone would clue me in why so many people are coming in here with this un-recommended hypervisor and then loading Scale in as a VM. Where did you get this misbegotten idea from? I am truly interested in knowing, because if I could bludgeon someone over the head to get it to stop, it'd save me having this discussion several times a week. ;-)

Proxmox themselves describe their PCIe passthru functionality as experimental, it's only been around since about 2018, and I seem to run across forum participants with Proxmox problems frequently. To be fair, we had a bunch of that in the early days of ESXi 4 with Westmere and Nehalem, but those appear to have been actual hardware or mainboard deficiencies, and lots of us have been virtualizing under ESXi for a decade or more. That works fine if you follow the formula.
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
I would really love it if someone would clue me in why so many people are coming in here with this un-recommended hypervisor and then loading Scale in as a VM.

Keep in mind... Proxmox allows clustering. Not that you'd likely use that for TrueNAS. But ESXi's free license doesn't allow API write access, you can't vMotion stuff from node to node, etc... A lot of homelab types are looking to hone job skills, simulate real production scenarios, etc... Proxmox has its nose under the tent by being just slightly more "free" than VMware's "free".
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
*cries* but...your data! think about the data? who will save...the data?

I tried to virtualize a backup server. I hated it; it was such a nuisance. Any time I needed to do anything with ESXi, like updates, or trying to get any kind of experiment to work, or fixing something, or changing hardware, anything, I had to bring down the backup server, and I found just that so annoying (admittedly, that was partly because at the time it took like 5 minutes to disable any replication, or I would get emails every hour the backup was down). I can't even imagine how much that would suck with iffy Proxmox on a main storage server...

I think people hear about cloud and virtualisation and think it's a panacea that solves all problems and that they can just glue everything together, like the "internet gateways" ISPs provide with their internet service. Yes, such a box technically does router/AP/switch/modem/webserver/fileserver/DDNS client/etc. in one device, but it tends to do all of it mediocrely at best, and it is nowhere near as complex, with as many things that can go wrong, as a virtualized TrueNAS.
 

doox00

Dabbler
Joined
Feb 15, 2022
Messages
13
I was debating whether I wanted to run Unraid or Proxmox; I was not even sure which NAS server I was going to run while I was trying to figure this out. I really liked Unraid: it has a nice UI and is beginner friendly, but it has a cost of 90 a year (for the number of disks I was going to run), which is not that big of a deal really, and it also has no ZFS, which seemed like a negative. There is a lot of Proxmox love out there; it has a bit more of a learning curve, but it supports ZFS (which it seems everyone praises and recommends). I used to run ESXi 6.7 on my old Dell R730 server, which I am no longer using. It ran without issues, though it was a bit of a learning curve to set up. My mistake there was again running Windows Server in a VM for the file sharing, and I had slow performance with file transfers. I was a little scared to run a Linux-based file share server, as I am not all that familiar with Linux and was scared of what I would do if something broke. I still have that fear now, but I figured I need to deal with it and just do it, and when/if a problem arises I will have to figure it out. I plan to keep regular backups of my files, Proxmox, and TrueNAS.

As far as running TrueNAS in a VM, I did not realize or think it would be a problem. I saw some YouTubers (decent-sized channels) doing it, so I figured I would give it a try. I had no issues with HBA passthrough; it all seems to be working okay.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Keep in mind... Proxmox allows clustering. Not that you'd likely use that for TrueNAS.

Okay, but this guy here, with a 24-core host and 256GB of RAM, I don't think he's clustering. Well, maybe not.

But ESXi's free license doesn't allow API write access, and you can't vMotion stuff from node to node, etc...

I suppose. But on the other hand, the home lab guys are usually the same ones who have experienced the "joy" of running three instances of ESXi and a VCSA on a laptop under VMware Workstation, so they're also usually not the ones complaining about performance. I am not saying you're wrong, but I'm not sold.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I think people hear about cloud and virtualisation and think it's a panacea that solves all problems and that they can just glue everything together...

Cloud's just a convenient way to make your problem into someone else's problem. Then you have a scapegoat when it all goes sideways.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
Cloud's just a convenient way to make your problem into someone else's problem. Then you have a scapegoat when it all goes sideways.
Sigh. Yeah. The company I work for does this, often, and is in the process of sending all backups to the magical "cloud blob".
I'm really unsure how that's gonna go... but I get paid to try to keep backups from failing, so *shrugs*.
Watching enterprise s*** fail because they cheaped out makes me not very tolerant of my own s*** also failing.
 

doox00

Dabbler
Joined
Feb 15, 2022
Messages
13
I have all my data restored to my new TrueNAS install and am testing the performance of large file transfers. Writing over the network is very good, getting 550-600 MB/s write speeds, but reading is only about 250 MB/s (which is about the max of a single drive). Is this normal? I figured I would be getting around ~500 MB/s read speeds as well. This is over a 10G network connection. I have the pool set up as in the screenshot above (3 mirrored vdevs in the pool).
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
On a Proxmox virtualized host? I'd think that could be fine. Latency effects tend to pile up on reads because of the request-answer-request-answer interactions, with the NAS needing to go out to HDD, talk on the net, causing all sorts of interrupt and timeslice foo. Remember, the NAS *responds* to the client, it doesn't necessarily know what the client is going to do. Writes naturally go faster because clients can shovel data at the NAS without all the little delays.
 

doox00

Dabbler
Joined
Feb 15, 2022
Messages
13
Okay, but I figured it would be faster than this. Are there any recommended steps I could take to tune or speed it up somehow?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I would compare it to the NAS on bare metal. I'm not really sure just how much you might be getting hurt by virtualization overhead and it would be really interesting to find out.
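One way to narrow it down without reinstalling anything is to take the network and SMB out of the picture first: read a large file locally in the TrueNAS shell and see what the pool itself can deliver. This is only a sketch and the path is a placeholder; also note that a file you have just written or read will largely be served from ARC (RAM), so pick something cold.

    # local sequential read straight off the pool, bypassing SMB and the network
    dd if=/mnt/tank/some-big-video-file.mkv of=/dev/null bs=1M status=progress

If that local read runs at 500-600 MB/s, the pool is fine and the loss is somewhere in the network/SMB/virtualization stack; if it also sits around 250 MB/s, the pool or the passthrough path is the place to look.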
 

doox00

Dabbler
Joined
Feb 15, 2022
Messages
13
Well, I'm not exactly sure which change helped, but now I have ~600 MB/s reads and writes. I added another virtual network interface to the TrueNAS VM via Proxmox, upped the RAM to 96GB (from 64GB), and gave it 12 cores (it had 8 before). Now I am getting 500-600+ MB/s read speeds as well as writes. Maybe the reboot alone helped? I had not rebooted TrueNAS since adding the new pool; I'm not sure whether that would make a difference or not.
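If it was the RAM bump, I am guessing the larger ARC (ZFS's read cache in RAM) would explain the better reads. From what I have read, something like this in the TrueNAS shell shows the ARC size and hit rate, assuming the tool is present:

    # summary of ARC size and hit/miss statistics
    arc_summary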
 

doox00

Dabbler
Joined
Feb 15, 2022
Messages
13
OK, next thing: I have a single 16TB HDD I added to TrueNAS that held the backup of my data while I copied it to the new pool. I want to remove that pool and drive from the TrueNAS server. I also want to wipe the data on that drive (so it is not recoverable). Do I just Export/Disconnect that pool via the TrueNAS UI? There is a "destroy data" option; will that actually make the data unrecoverable? Thanks again for all the help.
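If that destroy option only removes the pool rather than overwriting the data, I assume I could zero the whole disk from the shell afterwards with something like the line below. The sdX device name is a placeholder; I would triple-check which device the drive actually is before running anything like this so I don't wipe the wrong disk.

    # overwrite the entire drive with zeros (this will take many hours on a 16TB disk)
    dd if=/dev/zero of=/dev/sdX bs=1M status=progress

Is that the right idea, or is there a better way from the UI?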
 