20TB RAID-Z3 build feedback (in development)

Status
Not open for further replies.

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
It's explained in the thread but, yep, MTTPR stands for Mean Time To Physical Replacement and it's the time taken to physically replace a failed drive ;)
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Read the f*cking manual :D
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
What? Why? I was actually thinking of a 3x 5-in-3 bay configuration in a case like yours... :(

Most cases have little guide rails to hold optical drives in place. There's no room for these on 5-in-3 bays, but there is on 4-in-3 bays.
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
A 72h MTTPR is just a long weekend spent anywhere but next to the server.

If anybody takes some vacation time, then the MTTPR could even be 168h or 240h (7 or 10 days). Unless the angel who waters the plants knows how to exchange hardware and then replace the disk using the FreeNAS GUI...
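
To give a rough feel for how MTTPR feeds into the numbers, here is a minimal sketch of a simplified data-loss estimate. To be clear, this is not Bidule0hm's calculator; the 5% annual failure rate, the 24h resilver time and the independent-failure model are all my own assumptions:

```python
from math import comb

# Minimal sketch: rough yearly chance of losing a RAID-Z3 vdev.
# Simplified independent-failure model; AFR and resilver time are assumptions.
def pool_loss_per_year(drives, afr=0.05, mttpr_h=72, resilver_h=24):
    window_h = mttpr_h + resilver_h      # exposure window after the first failure
    p_window = afr * window_h / 8766     # chance a given drive also dies in that window
    first_failures = drives * afr        # expected "first" failures per year
    # RAID-Z3 survives 3 concurrent losses; a 4th one inside the window loses the vdev
    return first_failures * comb(drives - 1, 3) * p_window ** 3

print(f"{pool_loss_per_year(10, mttpr_h=72):.1e}")    # long weekend away
print(f"{pool_loss_per_year(10, mttpr_h=240):.1e}")   # ten days of vacation
```

With these made-up inputs even a ten-day MTTPR barely moves the needle for Z3; the real cost of a long MTTPR is running degraded, with less safety margin, for that whole time.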

P.S.
I only got the calculator running in IE. Clearly my security levels are somewhat excessive...
 
  • Like
Reactions: Xam

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
@Xam, I might not understand your storage needs, but I hope I can offer some input by looking at your build from a different angle.

Many of the posters here live in areas where they can get most electronic components with next-day delivery. Some have two systems, the second one being a backup that also doubles as a live stand-by.

Please take your component list and go through it item by item, thinking about what you would do if that component fails 3-6 months after it is deployed (or 2 years + 1 day after the date of the original purchase). I hear the EU enforces a two-year warranty on almost everything, but that is not the answer.

Why???

Some esoteric choices are excellent ideas at the time of purchase, as they effortlessly solve many problems. However, since they are not mainstream, it can become difficult to find compatible components after as little as a year or two.

Some scenarios to consider:
* a failure on the 24th of December
* a failure 48 hours before some critical deadline
* an accidental sudo rm -rf /mnt/volume_name/*
* a power supply failure that takes out the motherboard
* a lightning strike in the immediate neighbourhood that damages not only the power supply but also some other components
 
  • Like
Reactions: Xam

j_r0dd

Contributor
Joined
Jan 26, 2015
Messages
134
I just wanted to throw out my experience with those drives since I'm using 10 of them myself. I chose them over Reds for the 5-year warranty, plus the 4TB Reds had awful reviews on Newegg for high failure rates at the time of purchase. The SEs are excellent drives, but make sure you have good airflow. It took some fine-tuning of the fan speeds to get a balance between keeping the drives cool and keeping the noise down. A couple of drives peak at 41 °C while running a scrub. Not going to lose any sleep over that.
 
  • Like
Reactions: Xam

Xam

Dabbler
Joined
Aug 31, 2015
Messages
17
Thanks everyone for sharing your thoughts on this subject :)

@Bidule0hm
Thanks for clearing up the calculator question; I will post the thread link from now on :)
By the way... for the past 6 months I've been using the calculator every day, and I'm basing my storage decisions on it... THANK YOU for creating a great tool :)

@Ericloewe
I've sent an email to the guys at Icy Box... still waiting for a reply :) I'll be able to make my decision about the case or the cages after that.

@j_r0dd
I know about the heating problems of the Seagates, that's why I plan to stuff the case with fans, especially in the front where all the drives will be... I don't care about the noise as the NAS will be in another, cooler room :)

@solarisguy
At this point I'm not going to get into such details, as this NAS will be just a transfer box for a project I'm working on.
After I'm done building this one, I will take the first steps toward a 48-drive FreeNAS box that'll act as a backup box, but let me answer your questions:

* a failure on the 24th of December
- HDD - will have a hot (or cold - haven't decided yet) spare
- RAM - it's supposed to be ECC... what kind of failure should I expect?! :(
- Mobo - yeah... I don't really know how to respond to this, but software-wise the mobo should work from the very beginning... let's put it another way... how many people out there have an extra one lying around as a backup?

* a failure 48 hours before some critical deadline
- hardware failure: parts are always replaceable no matter what... even if I had to put in another set of RAM bought from a PC shop down the street, I'd do it as a temporary measure until my new RAM set arrives, as long as that doesn't affect the data on the storage.
- software failure: that's... that's going to be a big issue, as the data on the drives will be encrypted.

* an accidental sudo rm -rf /mnt/volume_name/*
- no one else has access to the NAS except me... and before I use the CLI, I always (no exceptions) triple-check any and all commands issued :) force of habit :)

* a power supply failure that takes out the motherboard
- this is starting to feel like the Kobayashi Maru... the motherboard must be fine! it's not going to affect the data (or so I think/hope)

* a thunder strike in the immediate neighbourhood that damages not only the power supply, but also some other components
- UPS + surge protection + good wiring (had it done this summer) + grounding on every wall plug + another UPS in the server cabinet :) I think I have this part covered... :)

All things put aside, it's no big deal if I have to power down the NAS for a day or two... even with the projects I work on :)
But I'm highly sceptical that I'll have any issues with the box in the first 5 years (hard drives not included :D )
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Thanks for your thanks :D
 
  • Like
Reactions: Xam

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553
* an accidental sudo rm -rf /mnt/volume_name/*
- no one else has access to the NAS except me... and before I use the CLI, I always (no exceptions) triple-check any and all commands issued :) force of habit :)
I'm pretty sure no one ever actually types that into the CLI. Most cases where something monumentally stupid happens (like an attempt to rm -rf /) are because of subtle, or not-so-subtle, errors in custom scripts. Not enough QA - whoops!
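
For what it's worth, the classic version of this is a path built from a variable that turns out to be empty. A tiny illustrative sketch; the script and the SNAPSHOT_DIR variable are hypothetical, not anything from this build:

```python
import os
import shutil

# Hypothetical cleanup script run from cron. SNAPSHOT_DIR is expected to be
# set in the environment -- if it isn't, the target silently collapses to the
# dataset root and rmtree() would wipe everything under it.
snapshot_dir = os.environ.get("SNAPSHOT_DIR", "")
target = os.path.join("/mnt/volume_name", snapshot_dir)

if not snapshot_dir:
    raise SystemExit("SNAPSHOT_DIR is not set; refusing to touch " + target)

shutil.rmtree(target)   # only reached when the path is fully specified
```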
 
  • Like
Reactions: Xam

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
@Xam, if you can live with unplanned downtime, that is good for both your mental health and your budget :D

I liked your answers. Some comments.

If you go for ECC RAM, you can seldom get such memory at a local PC-parts store (mine has only one type that fits the only ECC-capable CPU/motherboard combo they sell). Any electronic component can fail. With ECC modules, at least you have a chance to get notifications that would allow you to schedule a replacement. Also, thanks to ECC RAM, some weird hardware errors that are normally attributed to one of three components (CPU, motherboard, or RAM) can be clearly identified as either coming from the RAM or not. Would your system be able to run when half of its memory modules are taken out?

Unless a business cost-versus-risk analysis indicates that it is worthwhile, people and businesses do not stock a complete set of replacement parts. They adjust their buying decisions instead; that is, when faced with a choice, they select the most popular part. Let me give you a made-up example. Upon selecting the case and the motherboard, it turns out there is only one line of PSUs that, thanks to its unique set of modular cables, perfectly fits the design. It is significantly cheaper than any competition and made in Switzerland. Warranty replacement means you have to ship it to them, and they repair it and ship it back within 30 days of receiving the faulty unit...

Hard drives from any company fail. But you had already planned for hot spares.

If you do not plan on stocking a spare HBA card (already flashed and tested), you may want to scan for possible replacement sources each month...
 

Xam

Dabbler
Joined
Aug 31, 2015
Messages
17
@solarisguy

Any electronic component can fail. With ECC modules, at least you have a chance to get notifications that would allow you to schedule a replacement
I know... my first mistake was getting the project going because I couldn't find ECC RAM... but I've searched harder, and a friend of mine working at an IT shop can get ECC RAM pretty fast (1-2 days) if necessary.

Would your system be able to run when half of its memory modules are taken out?
Most likely it will... again, this storage server will be a transfer box for my work projects... its secondary role will basically be a media box (Plex) and storage for the 5TB-ish of photos I've taken... :)

Unless business cost versus risk analysis ... and ship it back to you within 30 days of receiving the faulty unit
Who buys relatively unique hardware when there's a world full of manufacturers (HP, Dell, SuperMicro, etc) and suppliers (Amazon, eBay, etc)? :)

But you had already planned for hot spares
Always have, always will... it's saved my neck a few times, and the build will be based on RAID-Z3... I'd have to be really "lucky" for all 5 safety disks (3 from the RAID-Z3 parity and another 2 hot/cold spares) to fail... :)
 

Xam

Dabbler
Joined
Aug 31, 2015
Messages
17
Hi guys,

Took me a while... but the list is kinda done... I just need to find the most risk-free way of connecting 15 SATA drives to a motherboard that has only 6 SATA ports ^_^

So...here goes:

CASE: 1x Nanoxia Deep Silence 6 Dark Black rev. B
MOBO: 1x Supermicro X10SLM+-F
CPU: 1x Intel Xeon Quad-Core E3-1246 v3 3.5GHz
RAM: 4x DELL ECC UDIMM DDR3 8GB 1600MHz Dual Rank Low-Voltage
HDD: 15x Seagate Enterprise NAS HDD 6TB 7200RPM 128MB SATA-III - 10 of them used for vdev #1 (roughly 33TB of total usable space in RAID-Z3; rough math sketched just after this list)
PSU: 1x Corsair HXi Series HX850i - did the math and it should hold, with power to spare
Extra1: 2x Transcend JetFlash 520 16GB Silver - used for the OS (set up as a mirror)
Extra2: 1x 5-Bay EZ-Tray 3.5" SATA Hard Drive Hot-Swap Backplane Cage in 3x 5.25" Bays - used for vdev #2 made from 5 of the 15 HDDs (roughly 9.5TB of total usable space in RAID-Z3)
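
As a rough sanity check of the usable-space figures above, here is a quick sketch; the ~12% allowance for ZFS metadata/padding overhead is a guess on my part, not output from Bidule0hm's calculator:

```python
TB = 10**12      # drives are sold in decimal terabytes
TIB = 2**40      # usable space is usually quoted in binary tebibytes

def raidz3_rough_usable_tib(drives, drive_tb, overhead=0.12):
    data_drives = drives - 3                    # RAID-Z3 spends 3 drives on parity
    raw_bytes = data_drives * drive_tb * TB     # raw data capacity
    return raw_bytes * (1 - overhead) / TIB     # minus assumed ZFS overhead

print(f"vdev #1: {raidz3_rough_usable_tib(10, 6):.1f} TiB")   # ~33 TiB
print(f"vdev #2: {raidz3_rough_usable_tib(5, 6):.1f} TiB")    # ~9.6 TiB
```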

I decided not to go with spares, as drives can be found really quickly (same day, even)...

Feedback? :D
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Why the enterprise drives?

I hope the vdevs aren't in the same pool, because it's not recommended to have vdevs with different numbers of drives in the same pool.
 

Xam

Dabbler
Joined
Aug 31, 2015
Messages
17
@Bidule0hm

No, they are not in the same pool :)
And I've chosen the enterprise drives (mostly) because of reliability... :)

@solarisguy

The backplane cage has fans included :)
Regarding the top shelf... you are kinda right... :| Hadn't noticed this until now... I'm gonna have to find a solution for this...
Regarding the HBA... I was actually thinking of having two of them (if possible)... Then again, if it's not possible to have two (don't know why it wouldn't be) I'm gonna take only one and drop a drive from the second vdev :) the second one is not so "mission critical", therefore its size won't really matter.

Actually...
That's exactly what I'm going to do...
Go with 14x HDDs and the top shelf won't be used :) There... problem half solved :D
Now all I have to do is find an HBA :)

What do you guys think? :)
 
Last edited:

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
And I've chosen the enterprise drives (mostly) because of reliability...

Yep, but they are 7200 RPM drives, so they need more cooling, they draw more power and they are noisier. The Seagate NAS drives and WD Reds already have a pretty long warranty at a much lower price ;)
 

Xam

Dabbler
Joined
Aug 31, 2015
Messages
17
Regarding the cooling... I'm gonna stuff that case with fans and have decent airflow... so... cooling isn't going to be a problem.

As for the power, I've calculated that around 300W would be needed to power the drives while in use... as they draw approximately 12W each.
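
For reference, a quick back-of-envelope on the drive power budget; the spin-up figure is a typical value for 7200 RPM 3.5" drives that I'm assuming here, not one taken from the Seagate datasheet:

```python
DRIVES = 15
ACTIVE_W = 12          # approximate per-drive draw while reading/writing
SPINUP_A_12V = 2.0     # assumed peak 12 V current per drive during spin-up

steady_w = DRIVES * ACTIVE_W               # ~180 W with every drive busy
spinup_w = DRIVES * SPINUP_A_12V * 12      # ~360 W if all drives spin up at once

print(f"steady state ~{steady_w} W, worst-case simultaneous spin-up ~{spinup_w} W")
```

So ~300W is comfortable headroom for steady-state use; the main thing to watch is the simultaneous spin-up peak, unless the HBA or backplane staggers drive spin-up.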

Also, noise is not going to be an issue because of 2 things:
1. the case is noise-proof :)
2. the case will stay in another room with the rest of the hardware :)
 
Last edited:

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Place a hard drive in the top-shelf 3.5" bay: either a spare (leaving it unconnected) or a dead one. A dead one would be enough, as its purpose is just to improve the airflow pattern.

Consider having two pools, each with a single vdev, instead of a single pool with two vdevs. I know that with 14 disks at your disposal it is not easy to come up with a good design. However, it is not easy regardless of whether you go with a single pool or two pools.
 

Xam

Dabbler
Joined
Aug 31, 2015
Messages
17
Good idea :) I think I will do just that... once I'm actually building the system... :P

But I have one big-ass problem... I can't seem to find an HBA... :|

Regarding the pools... why are two pools with one vdev each better than one pool with two vdevs? Somehow I'm missing the answer... :|
 