2 x Mirror - 2 Wide Question

tool_462

Cadet
Joined
Dec 15, 2022
Messages
9
I am playing around with 4 x Intel P4510 1TB u.2 drives for future usage as a VM/app target.

I am curious if the current setup (picture attached) is doing what I think it is.

My interpretation is that one mirrored pair is striped with the other mirrored pair. I can easily saturate 10 Gbps on reads, but I haven't run any CLI disk benchmarks yet, so I haven't compared read/write/IOPS between different configs.

Is my interpretation correct and will this setup facilitate a read performance increase?
 

Attachments

  • Screenshot_20231127-202854.png
    232.6 KB · Views: 330

linus12

Explorer
Joined
Oct 12, 2018
Messages
65
It is my understanding, based on my use of mirrored vdevs in my system and the information you have provided, that there is no striping in this situation.

While my screens look very much like this, I would suggest you post the "Manage Devices" screen for a complete picture. In my case it shows the following:
1701159986393.png

which makes it very clear how the drives are managed through the VDevs.

In my case I have 2 VDevs. Each VDev consists of two drives, mirrored. So Drive sde is mirrored to Drive sdf (i.e. they are copies of one another); Drive sdg is mirrored to Drive sdh (again, they are copies). Combined in my pool, they give me 14.3 TB of usable space. A file (or part of a file) is only contained on one VDev, and thus is written ONLY to a disk and its mirror (in this case).

So if a file is written to Disk sde, it is only contained on Disk sde and Disk sdf (its mirror). Nothing is written to Disk sdg or sdh.

IF it is a very large file, and ZFS decides to write the first half of the file to the VDev consisting of Disks sde and sdf, and the second half to the VDev consisting of Disks sdg and sdh, then the first half resides only on Disk sde (with a copy on Disk sdf) and the second half resides only on Disk sdg (with a copy on Disk sdh).
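As a rough illustration of that allocation behaviour, here is a toy Python model. It is NOT ZFS's real allocator (which weights vdevs by free space, among other things); the round-robin placement and the two-block "file" are made up just to show that each block lands on one vdev and is duplicated onto both disks of that vdev only:

```python
# Toy model of a pool with two 2-way mirror vdevs (names match the post above).
vdevs = [
    {"sde": [], "sdf": []},  # mirror vdev 0
    {"sdg": [], "sdh": []},  # mirror vdev 1
]

def write_block(block_id, data):
    # Round-robin stand-in for ZFS's allocation policy.
    vdev = vdevs[block_id % len(vdevs)]
    for disk in vdev.values():  # mirror: the block goes to every disk in the vdev
        disk.append((block_id, data))

# A "large file" split into four blocks: its halves end up on different vdevs.
for i, chunk in enumerate(["aa", "bb", "cc", "dd"]):
    write_block(i, chunk)

print(vdevs[0]["sde"] == vdevs[0]["sdf"])  # True: sde and sdf are identical
print(vdevs[1]["sdg"] == vdevs[1]["sdh"])  # True: sdg and sdh are identical
print([b for b, _ in vdevs[0]["sde"]])     # [0, 2]: only this vdev's blocks
```

Reads get faster because any block can be served by either disk of its mirror, and different blocks of a large file can be read from both vdevs at once.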

If any of the experts wish to chime in, especially if I have screwed up the explanation, please do so. But I'm 90% sure this is how it works, again assuming your VDevs are set up similar to mine.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
[...] We’ll configure six 2-way mirror vdevs. ZFS will stripe the data across all 6 of the vdevs. We can use the work we did in the striped vdev section to determine how the pool as a whole will behave. [...]

:smile:
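To put rough numbers on why striping across mirror vdevs helps reads, here is a back-of-envelope sketch. The 500 MB/s per-disk figure is a made-up placeholder, not a measured value; real results depend on recordsize, queue depth, sync settings, and so on:

```python
# Back-of-envelope throughput scaling for a pool of N 2-way mirror vdevs.
def pool_throughput(n_mirror_vdevs, disk_mbps=500):
    # A write to a mirror vdev must hit both disks, so writes scale with
    # the number of vdevs; reads can be served by either copy, so they
    # scale with the total number of disks.
    write_mbps = n_mirror_vdevs * disk_mbps
    read_mbps = n_mirror_vdevs * 2 * disk_mbps
    return write_mbps, read_mbps

w, r = pool_throughput(2)  # the OP's 2 x 2-way mirror layout
print(w, r)                # 1000 2000
```

Even with these placeholder numbers, the 2-vdev layout's aggregate read figure comfortably exceeds the ~1250 MB/s a 10 Gbps link can carry, consistent with the OP saturating the network on reads.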
 
Last edited:

linus12

Explorer
Joined
Oct 12, 2018
Messages
65
In years past, on disk systems before ZFS, "stripe" referred to the splitting of data AND the creation of parity to allow better recovery when one (or more) disks in a "striped set" failed. "Mirrored" disks did not contain that parity and were simply "mirrors" of each other. At the time I learned this, the terms "Striped Disks" and "Mirrored Disks" were mutually exclusive! Things have certainly changed!

This is pointed out in a much nicer way in the document referred to by Davvo above, and I stand corrected. (One thing the document doesn't show is a simple 3-disk Z3 configuration (i.e. not using mirrored drives), or is that included in a different document?)

Davvo, I would assume that the TrueNAS GUI would indicate a Z[Level] if the parity drives were involved?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
This is pointed out in a much nicer way in the document referred to by Davvo above and I stand corrected. (One thing the document doesn't show is a simple 3 disk Z3 configuration (i.e. not using mirrored drives) or is that included in a different document?)

In ZFS you must have at least 4 drives in a VDEV in order to use RAIDZ3 (at least 5 is suggested): the number after the Z is the number of parity drives in the VDEV.

The maximum resiliency you can get with 3 drives is a 3-way mirror, where each drive contains the same data and the VDEV can withstand losing up to 2 drives.
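The capacity arithmetic behind that, as a quick sketch (assuming identical hypothetical 1 TB drives and ignoring ZFS metadata and slop overhead):

```python
# Rough usable-capacity figures for a single VDEV. RAIDZ-p devotes p drives'
# worth of space to parity; an n-way mirror keeps n full copies, so only one
# drive's worth is usable regardless of n.

def raidz_usable(n_drives, parity, drive_tb=1.0):
    assert n_drives > parity, "RAIDZ needs at least parity + 1 drives"
    return (n_drives - parity) * drive_tb

def mirror_usable(n_drives, drive_tb=1.0):
    return drive_tb  # every drive is a full copy

print(raidz_usable(4, 3))  # 1.0 TB: minimum RAIDZ3 VDEV, survives 3 failures
print(raidz_usable(5, 3))  # 2.0 TB: the "at least 5 suggested" layout
print(mirror_usable(3))    # 1.0 TB: 3-way mirror, survives 2 failures
```

So with only 3 drives, a 3-way mirror gives the same usable space a hypothetical 3-drive Z3 would, while RAIDZ3 only starts paying off at 5+ drives.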

Davvo, I would assume that the TrueNAS GUI would indicate a Z[Level] if the parity drives were involved?
As explained in the resource, in a mirror VDEV every drive is an identical copy of the other. You don't see any Z because it's not a RAIDZX configuration but a MIRROR one.
 

tool_462

Cadet
Joined
Dec 15, 2022
Messages
9

:smile:

Fantastic! I was searching and reading the official documentation, but this being buried in a PDF probably limited my search success. It really explains and validates what I'm seeing in performance now that I've had more time to test.

@linus12 - Thanks for the discussion; my configuration is identical to yours. Given that I'm happy with the layout and have space/performance greatly exceeding my needs, I'm going to leave it as is. Maybe in the future I will revisit the configuration here!
 