Parity Drives

x88dually

Cadet
Joined
Jun 20, 2019
Messages
9
Media Server

Picture yourself standing in front of a 24-bay 4U rack case: 6 drives per column, column #1 on the left, drive #0 at the bottom. ZFS has 3 parity drives.
Question :

Is there a way to make drive #0 the parity drive for each column, while all 4 columns end up under the same drive letter? Or make a separate drive letter for each column, with all columns in the same array?
 

x88dually

Cadet
Joined
Jun 20, 2019
Messages
9
OK.
Currently have 9 drives and am looking for more space per drive.

Can I expand as needed, or does every drive have to be in from the start?
 
Joined
Jul 3, 2015
Messages
926
What's the layout of your zpool and how big are your drives?
 

x88dually

Cadet
Joined
Jun 20, 2019
Messages
9
OK, sorry guys.

Currently using Windows drive letters.
FreeNAS will be the first time I've run anything other than Windows; kind of a noob here.
4x8TB, 2x5TB, 1x4TB, 2x2TB. BUT the new media server will use 10TB drives.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
Freenas will be first time anything other then windows
That's fine, but there is a lot to learn. FreeNAS is based on FreeBSD, a Unix variant, and is very different from Windows.
new media server will be 10tb's.
Does this mean that you intend to build an entirely new system to run FreeNAS on that will have some quantity of 10TB drives?

Because FreeNAS and the ZFS file system work differently from just about everything else, there are some hardware considerations that may be of concern. Have you already purchased the components you were planning to use, or are you preparing to buy?

Always remember that it is better to ask first to avoid mistakes that would put your data at risk.

Here are some guides to help you with your learning:

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://www.ixsystems.com/community...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Overview of ZFS Pools in FreeNAS from the iXsystems blog:
https://www.ixsystems.com/blog/zfs-pools-in-freenas/

Terminology and Abbreviations Primer
https://www.ixsystems.com/community/threads/terminology-and-abbreviations-primer.28174/

Why not to use RAID-5 or RAIDz1
https://www.zdnet.com/article/why-raid-5-stops-working-in-2009/

The 'Hidden' Cost of Using ZFS for Your Home NAS
https://louwrentius.com/the-hidden-cost-of-using-zfs-for-your-home-nas.html

FreeNAS® Quick Hardware Guide
https://www.ixsystems.com/community/resources/freenas®-quick-hardware-guide.7/

Hardware Recommendations Guide Rev. 1e) 2017-05-06
https://www.ixsystems.com/community/resources/hardware-recommendations-guide.12/

Hardware Recommendations by @cyberjock - from 26 Aug 2014 - and still valid
https://www.ixsystems.com/community/threads/hardware-recommendations-read-this-first.23069/

Proper Power Supply Sizing Guidance
https://www.ixsystems.com/community/threads/proper-power-supply-sizing-guidance.38811/

Don't be afraid to be SAS-sy
https://www.ixsystems.com/community/resources/don't-be-afraid-to-be-sas-sy.48/

Confused about that LSI card? Join the crowd ...
https://www.ixsystems.com/community/threads/confused-about-that-lsi-card-join-the-crowd.11901/
 

x88dually

Cadet
Joined
Jun 20, 2019
Messages
9
1) Supermicro 24-bay hot-swap case, SM board with Intel X540 10G NICs on board, E5-2650 v2 CPUs, 64GB RAM
2) SM board, X540 10G NIC, E5 v2 OR Xeon X5680 CPUs, depending on v2 matched-pair prices
Currently at 48.5TB used out of 50TB.
Looking to buy 10 Seagate 10TB Exos X10s. Secondary media server; not sure yet, may just put all the drives into the one Supermicro case and run Windows until I can buy more 10TB drives for it.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
No, there is no "parity drive"; that's how RAID4 works, and it is not used any more. RAIDZ is roughly like RAID5 and distributes parity across all drives.

As @myoung points out, that's how RAID4 works. I'm going to take issue with "RAIDZ is roughly like RAID5" because that's sufficiently deceptive in this context as to be basically wrong.

RAID5 rotates the parity block between drives to avoid the "parity hotspot" RAID4 enjoys. You can precompute the location of all parity sectors in the array as it is a simple rotation.

Unfortunately, what ZFS does is completely different. It stores parity for each *block* of data, and because it allocates space on the fly, the parity sector locations end up "wherever they fall." Within a single (large) block, it resembles RAID4 in that the parity is uniformly written to the same disk. Because ZFS does not require the read-xor-write cycle to update parity that conventional RAID does, this isn't really an issue.

https://extranet.www.sol.net/files/freenas/fragmentation/RAIDZ-small.png

If you look at that image, you'll get the idea. :smile:
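To make the parity idea concrete, here is a toy sketch of single-parity (XOR) reconstruction, the principle behind RAIDZ1; RAIDZ2 and RAIDZ3 add extra parity with more elaborate math, but the recovery idea is the same. The hex values are made-up sample data, not anything ZFS-specific:

```shell
# Three "drives" each hold two bytes of one block's data (made-up values).
d0=$((0x0102)); d1=$((0x1020)); d2=$((0xaa55))

# Parity is the bitwise XOR of all the data.
parity=$(( d0 ^ d1 ^ d2 ))

# Simulate losing drive 1: XOR the survivors with the parity to rebuild it.
rebuilt=$(( d0 ^ d2 ^ parity ))

printf 'parity=%04x rebuilt_d1=%04x\n' "$parity" "$rebuilt"   # rebuilt_d1=1020
```

Any single missing drive can be rebuilt this way, which is why one drive's worth of parity buys one drive's worth of failure tolerance.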
 

x88dually

Cadet
Joined
Jun 20, 2019
Messages
9
jgreco,
Ty, I think I understood the explanation, but the pic really put an image in my head, TY.
So for a 24-bay RAIDZ3, does parity get written across all 24 drives? Or should I make 3 sets of 8? What's better/easier?
In 8d and 9c, what does X stand for? p is parity, d is data, so what's X?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
Four vdevs of six drives in your pool will give you a bit more performance than three vdevs of eight drives, but either is fine. Here is a calculator that will help you figure out how much storage space that will net you:

https://wintelguy.com/zfs-calc.pl
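The rough math behind the calculator starts from "data disks per vdev times drive size"; the actual usable space comes out lower because ZFS reserves room for metadata and padding, and pools are best kept under about 80% full. A back-of-envelope sketch for 24 x 10TB drives, assuming RAIDZ2 for the 6-wide vdevs and RAIDZ3 for the 8-wide ones:

```shell
# usable (before ZFS overhead) ~= vdevs * (width - parity) * drive_TB
echo $(( 4 * (6 - 2) * 10 ))TB   # four 6-wide RAIDZ2 vdevs -> 160TB
echo $(( 3 * (8 - 3) * 10 ))TB   # three 8-wide RAIDZ3 vdevs -> 150TB
```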
 


myoung

Explorer
Joined
Mar 14, 2018
Messages
70
As @myoung points out, that's how RAID4 works. I'm going to take issue with "RAIDZ is roughly like RAID5" because that's sufficiently deceptive in this context as to be basically wrong.

RAID5 rotates the parity block between drives to avoid the "parity hotspot" RAID4 enjoys. You can precompute the location of all parity sectors in the array as it is a simple rotation.

Unfortunately, what ZFS does is completely different. It stores parity for each *block* of data, and because it allocates space on the fly, the parity sector locations end up "wherever they fall." Within a single (large) block, it resembles RAID4 in that the parity is uniformly written to the same disk. Because ZFS does not require the read-xor-write cycle to update parity that conventional RAID does, this isn't really an issue.

https://extranet.www.sol.net/files/freenas/fragmentation/RAIDZ-small.png

If you look at that image, you'll get the idea. :)

That's interesting. That image helps, but I'd like to understand better.

Do you know any good ZFS resources that go into more detail than the basic guides, but don't require reading source comments?
 

x88dually

Cadet
Joined
Jun 20, 2019
Messages
9
myoung,
All I know is, this calc has me using only 65% of 240TB: 156 usable TB. I lose 8 1/2 10TB drives.
I am so not happy!! For what I'll be losing, I could just stick with Windows drive letters, and when another drive craps out, copy it back from the backup server. And I'd get to use that lost 83TB. GGRRRRRRRR. That's $2600 I could use for other parts/equipment.

Now I remember why I didn't do any type of RAID 2 1/2 years ago.
 

myoung

Explorer
Joined
Mar 14, 2018
Messages
70
Yeah, it's a trade-off. You need to determine how valuable your data is and how much risk you are willing to take that you might lose some/all of it.
 

garm

Wizard
Joined
Aug 19, 2017
Messages
1,556
"Windows drive letters" will not protect your data in any way. The closest you would get to ZFS on Windows is ReFS on Storage Spaces, but it's not as good. And you still have the redundancy issue; there is simply no way of having your cake and eating it too.
 

x88dually

Cadet
Joined
Jun 20, 2019
Messages
9
But like I said, when a drive takes a dump, I copy from the backup server onto the new drive.
IF a controller takes a crap, guess what: you lose that array and all the data on the drives connected to it. How are you rebuilding that? I'm copying from my backup server.

Question:
I'm using 4x8TB drives. Can I buy 2 more and make a vdev of 6x8TB, then share that? Then buy 5x10TB in RAIDZ1 or RAIDZ2 and share that separately?
 

garm

Wizard
Joined
Aug 19, 2017
Messages
1,556
We are not talking about shares even a little at this point.

ZFS won't care if an HBA card "takes a crap", as all you would need to do is replace the card and import the pool. Sure, there are scenarios where the card dying would also corrupt the pool, but you will find ZFS is very reliable.

The problem with your setup of relying on a backup server is that you can easily back up corrupt data; I'm speaking from experience.

And without checksums you have no way of knowing what it is you are reading. With ZFS you know that what you read is the same as what you once wrote to disk.
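For reference, the recovery described above after swapping a dead HBA is normally just a re-import; a minimal sketch from the FreeNAS/FreeBSD shell, assuming a pool named `tank` (placeholder name):

```shell
zpool import          # scan attached disks and list any importable pools
zpool import tank     # import the pool the scan found
zpool status tank     # confirm every vdev and disk shows ONLINE
```

No data is written by the scan itself; the import just reassembles the pool from the labels ZFS keeps on each disk, which is why the pool survives a controller swap.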
 

myoung

Explorer
Joined
Mar 14, 2018
Messages
70
Question:
I'm using 4x8TB drives. Can I buy 2 more and make a vdev of 6x8TB, then share that? Then buy 5x10TB in RAIDZ1 or RAIDZ2 and share that separately?

Yes, you can do those things. Before you buy any hardware, you should set up a VM or test machine and play with ZFS a bit. Learn how vdevs, pools, and datasets work. ZFS is a very powerful tool, with a lot of features that keep your data safe.
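One low-risk way to do that experimenting, assuming a test VM with ZFS installed: build a throwaway pool out of sparse files instead of real disks. The pool name and file paths below are placeholders:

```shell
# Create six 1GB sparse files to stand in for disks.
truncate -s 1G /tmp/disk0 /tmp/disk1 /tmp/disk2 /tmp/disk3 /tmp/disk4 /tmp/disk5

# Build a 6-wide RAIDZ2 vdev from them, then poke around.
zpool create testpool raidz2 /tmp/disk0 /tmp/disk1 /tmp/disk2 \
                             /tmp/disk3 /tmp/disk4 /tmp/disk5
zpool status testpool          # one raidz2 vdev, six "disks"
zfs create testpool/media      # datasets live inside the pool

zpool destroy testpool         # tear the sandbox down when finished
```

Because the "disks" are just files, you can safely practice failure scenarios too, e.g. delete one file and watch `zpool status` report the degraded vdev.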
 

x88dually

Cadet
Joined
Jun 20, 2019
Messages
9
When I download a TV show or movie, I play it and skip through it back and forth a few times to make sure it works before it goes to the encoding folder.

During the import of the pool through a new HBA, do you lose data?

Can I expand a vdev with same-size drives?
 