BUILD Need help with a 20TB+ NAS system that can be expanded as needed

Status
Not open for further replies.

Udmr

Cadet
Joined
Jan 30, 2015
Messages
6
Hello!
I'll keep the introduction short, since this will be a long one to read.
I work in a production company. I'm a software engineer and for reasons I can't explain, I'm expected to handle our data and its safety.

I'm extremely new to the NAS business. I've been reading guides, tips & tricks, etc. for 2 weeks already, and all of it has left me uncomfortably confused, since I don't have any experience with FreeNAS or any other system that requires data integrity/safety. I'm not even sure if what I need is a NAS.

I'm currently using some of my leftover hardware, and 2 weeks ago we lost an external disk.


Now, because of this, I'm extremely paranoid. Maybe more than is normal, but since the data is extremely important and I'm the one responsible, I'm trying to find the solution with the best price/performance. Please don't judge me for having such an unbalanced, crappy PC; it's what I'm allowed to have. I'm not an experienced system admin, but I've been working with 3 CentOS and 2 Ubuntu servers for a year as web/mail/DB servers and a load balancer, so I know my way around SSH, though I have little to no idea about FreeBSD. I did the "RTFG" first, of course, and have read a lot of guides, including the FreeNAS 9.2.1 Guide.

Here are my main problems:
  • I currently have 16TB of irreplaceable, irrecoverable and extremely important data. On top of that, around 2TB of new data is created per month, all of which needs to stay safe for at least 3 years. Tape and DVD/Blu-ray backup are out of the question because of the data volume.
  • We have 10 PCs, and only 3 have the rights to access the data at the same time for standard file operations (copy/paste). No one will be using the data for streaming or continuous access.
  • The server will need to run up to 20 hours a day at max performance, mostly 5 days a week. It won't run 24/7, and it doesn't need to be power efficient or silent. I'm planning to do maintenance work during the hours the server is not in use.
  • We have a 1Gbit network, so anything better than 100MB/s read/write will be a luxury for me. I'm also not planning on needing hot-swappable hard disks.
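Roughly, the retention requirement above works out like this (a quick back-of-the-envelope sketch, assuming the 2TB/month growth rate stays constant):

```python
# Total data at the end of the 3-year retention window, assuming
# 16 TB of existing data and a constant 2 TB/month growth rate.
existing_tb = 16
monthly_growth_tb = 2
months = 3 * 12

required_tb = existing_tb + monthly_growth_tb * months
print(required_tb)  # 88 TB needed by the end of year three
```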
For starters, I'll be buying most of the hardware from Amazon, since where I live I can only get expensive enterprise-grade products from the major brands (HP, IBM, Dell). I'll probably pay nearly as much in import fees and taxes as for the hardware itself, so price is important, but as long as something is necessary, I'm willing to spend on it. And yes, the major brands will still be more expensive than what I'd have to pay Amazon.

What I'm trying to achieve is at least 20TB of raw, safe (RAID 6, maybe?) storage that is easily expandable (more drives, or a DAS via external SAS?) when needed.

To be honest, I was planning to buy a Norco RPC-4224 with the motherboard and CPU in the picture, increase the RAM to 32GB, buy an LSI SAS 9211-8i, move what I currently have onto 8x 4TB WD Red HDDs in RAID 6, copy everything over, and expand with an Intel RES2SV240 expander card when I need more space.
Without the expander, I'll have 24TB in RAID 6, which will suffice for about 2 months. After reading the guides, though, I have doubts about this setup. Still, I'm willing to take the risk of "one in a million" failures. I'm open to any solution.

Finally, thanks for reading, and sorry for the wall of text.
 

stefanb

Patron
Joined
Dec 12, 2014
Messages
200
Hi,

What kind of workload is the 2TB per month?
S.
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
A NAS has to have ECC RAM. Your system doesn't support it and would be underpowered for that much data anyway.

Now, let's plan for an enterprise-grade storage server that can easily hold 88TiB of data (16TiB existing plus 36 months at 2TiB each):

1x http://www.supermicro.com/products/system/4U/5048/SSG-5048R-E1CR36L.cfm barebone: just drop in CPU, RAM and disks. It looks expensive at first glance, but considering the SAS controllers, cables, redundant PSU etc., it's about the same price as building your own. The 4x GbE ports can help you via LAGG, meaning that with multiple clients you'll get an aggregated throughput of 4Gbps.
1x Xeon E5-1620 v3, or E5-1650 v3 (if you can afford it, good; it won't hurt.)
4x 16GB DIMMs: Samsung M393A2G40DB0-CPB. You may need to upgrade to 8x 16GB DIMMs if you experience low performance.
6x SAS HDDs Seagate Enterprise Capacity 6TB (example model# ST6000NM0014) or 8TB HDDs for higher density: HGST He8 (example model# HUH728080AL420y)

This way you can expand your storage by 24TB (6TB drives) or 32TB (8TB drives) by adding 6 further disks at a time, up to the maximum of 36 HDDs, for a total usable capacity (at the recommended 80% limit) of 104TiB with 6TB disks or 139TiB with 8TB disks.
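The capacity figures above can be double-checked with a short sketch (assumptions: 6-disk RAIDZ2 vdevs, i.e. 4 data disks each, and the commonly recommended 80% pool-fill limit):

```python
# Usable pool capacity for the 36-bay chassis, assuming 6-disk RAIDZ2
# vdevs (4 data disks each) and the recommended 80% fill limit.
TIB = 2**40

def usable_tib(disk_tb, vdevs, disks_per_vdev=6, parity=2):
    data_disks = disks_per_vdev - parity
    raw_bytes = disk_tb * 1e12 * data_disks * vdevs
    return raw_bytes * 0.8 / TIB  # apply the 80% fill guideline

# Fully populated: 6 vdevs of 6 disks each
print(int(usable_tib(6, 6)))  # 104 TiB with 6 TB disks
print(int(usable_tib(8, 6)))  # 139 TiB with 8 TB disks
```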
 

Udmr

Cadet
Joined
Jan 30, 2015
Messages
6
Hello,

It's mainly video files (avi, mov, mts and mp4), and some sound files (wav, aiff and ogg). Most of them are uncompressed, and I prefer them that way because we don't have enough CPU power for anything except project rendering. There are also After Effects and Premiere projects mixed in with mp4s and other Adobe product files.

Thanks!
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
Enabling lz4 compression doesn't have a big performance impact. You won't save much with lz4 on this data, but it doesn't hurt either.

As long as your software supports FreeBSD, you might get a transcoding/rendering server up and running in a FreeNAS jail. For that use case I'd probably spec a dual-CPU system.
 

stefanb

Patron
Joined
Dec 12, 2014
Messages
200
Hi,

@marbus90, there is no need to transcode the video files. This is a classic file server role: storing video content.

Almost all video formats and most of the audio formats are already compressed, so the compression ratio will be very small; calculate without compression.
As I understand it, the NAS will be the "first" backup instance, right? Projects are stored on the workstations, and when a project is finished, the files are moved to the NAS?
Or are the projects on the NAS the whole time during the creation process?

I think a pool of 3 RAIDZ2 vdevs of 6x 8TB each would be a possible solution.
The cost per TB is nearly the same.

You can start with 1 vdev of 6x 8TB (about 32TB usable space in Z2) and expand the NAS by adding additional vdevs.

Spend some time thinking about the complete concept. Even a secure, well-planned system can lose all the data at once.
What about the case of fire, burglary, ...?

S.
 

Udmr

Cadet
Joined
Jan 30, 2015
Messages
6
What about the case of fire, burglary, ...?

I think that one is the "one in a million chance" kind of thing :D

Other than that, projects will be stored on the NAS for archival purposes. The NAS will be the first and only backup instance, which is why I'm trying to make it as safe as possible without adding another NAS.
Is my hardware plan good enough for a project like this? I've read and thought about this so much that I've started to doubt myself.

Thanks.
 

stefanb

Patron
Joined
Dec 12, 2014
Messages
200
OK, so you're planning without any backup?!
A cheap NAS option:
HP ProLiant MicroServer Gen8, Pentium G2020T, about 300€
2x 8GB ECC, 180€, and 4x 8TB HDDs in RAIDZ2 (16TB usable), about 1000€

Buy as many as you need, one for each year ;)

S.
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
there is no need to transcode the video files. This is a classic file server role: storing video content.
Of course there is no need to transcode.

But the OP says he does a lot of _rendering_. Instead of having a big CPU _idling_, _depending on the software_ it's possible to run a rendering server in a jail and utilize the server's CPU for that.

Udmr: NO, your hardware plan IS NOT good. Look at post #3 in this very thread.

MicroServers are a bad idea for this project. They look nice initially, but for 36 HDD bays you'd pay ~4300EUR. The Supermicro enterprise solution is cheaper than that, and on top of that you'd have 24 instead of 18 usable HDDs, plus a single big pool instead of 9 servers with a 16TB pool each.
 

stefanb

Patron
Joined
Dec 12, 2014
Messages
200
Hi,
@marbus90
His rendering means exporting the After Effects or Premiere projects as video files. CGI image generation (3ds Max, Maya, Cinema 4D) is also rendering.
The workstations/render nodes do this job in both cases.
The results are stored on the file server.

S.
 

stefanb

Patron
Joined
Dec 12, 2014
Messages
200
@Udmr:
How much is your budget?
<2000€/$, <5000€/$, >5000€/$, >10000€/$, >20000€/$?

S.
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
None of the software you mentioned was stated by the OP. But fine, let's forget about rendering and concentrate on a reliable solution for a single point of backup.

OP, I take it you already have a UPS planned for the server?
 

stefanb

Patron
Joined
Dec 12, 2014
Messages
200
@marbus90:
He did, but not in the first post.
Hello,

It's mainly video files (avi, mov, mts and mp4), and some sound files (wav, aiff and ogg). Most of them are uncompressed, and I prefer them that way because we don't have enough CPU power for anything except project rendering. There are also After Effects and Premiere projects mixed in with mp4s and other Adobe product files.

Thanks!
 

Udmr

Cadet
Joined
Jan 30, 2015
Messages
6
How much is yout budget?

It's as much as it needs to be. For example, I'm OK with buying 8x 4TB WD Red, but I'm against buying 35x 900GB WD Xe for the same amount of storage at a higher price just because they have better components/speed etc. I'm trying to get the cheapest NAS that has enough power to do what I need, without overqualified hardware. I could go and buy a Synology/QNAP, but I don't think they have what I need. I need more flexibility and less complexity, which is why I'm thinking of going with FreeNAS. From what I understand, of course.

I take it you already have an UPS planned for the server?

We have one 1000VA UPS for every 2 workstations, so yes, I have a brand-new UPS ready for the server.

Cheap NAS:
HP ProLiant MicroServer Gen8, Pentium G2020T, about 300€
2x 8GB ECC, 180€ and 4x8TB HDD in RAID Z2 (16TB usable) about 1000€

Buy as much as you need one for each year ;)

I looked at it, but I have no idea how to connect that many servers as a single source, since I'd need at least 2 of them. But thanks for the heads-up!
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
You don't need 10,000rpm disks like the Xe series. However, at that size you'd need SAS expanders to connect all the disks, and then we're back at post #3 with enterprise-grade SAS disks. SATA disks work, but you can encounter weird reliability issues. 7200rpm SAS prices are not totally through the roof. SAS does not improve performance, it improves reliability, which is what you're after. Reds are not really feasible for such an enterprise server; they are only certified for systems with up to 8 drives. Since it's not a home toy but a business server, forget about Reds.

And yes, at that scale multiple single boxes aren't feasible. A single server could comfortably hold 312-384TB of data (depending on vdev configuration) with 6TB drives, and up to 512TB with 8TB drives, spread over 36 bays in the server chassis and another 44 bays in a JBOD chassis. Above that I'd start looking into clustered solutions. Also, if you plan to go beyond 36 drives, I'd start with a dual-CPU barebone like the http://www.supermicro.com/products/system/4U/6048/SSG-6048R-E1CR36N.cfm and an E5-2620 v3 CPU instead of the 1620.

For a FreeNAS server you'll have to follow some design guidelines, i.e. 1GB of RAM per 1TB of storage, ECC memory, Intel/LSI chipsets and such. The barebone matches those requirements quite well. Forget about anything you currently have available. Another option is contacting iXsystems; they can provide you a purpose-built FreeNAS system and, more importantly, a throat to choke in case something fails. They'll ship worldwide.

A 1000VA UPS would really cut it close if that 36-bay server were fully equipped, but since you want to start with fewer disks, it's okay for the start.
 

Udmr

Cadet
Joined
Jan 30, 2015
Messages
6
You don't need 10,000rpm disks like the Xe series. However, at that size you'd need SAS expanders to connect all the disks, and then we're back at post #3 with enterprise-grade SAS disks. SATA disks work, but you can encounter weird reliability issues. 7200rpm SAS prices are not totally through the roof. SAS does not improve performance, it improves reliability, which is what you're after. Reds are not really feasible for such an enterprise server; they are only certified for systems with up to 8 drives. Since it's not a home toy but a business server, forget about Reds.

And yes, at that scale multiple single boxes aren't feasible. A single server could comfortably hold 312-384TB of data (depending on vdev configuration) with 6TB drives, and up to 512TB with 8TB drives, spread over 36 bays in the server chassis and another 44 bays in a JBOD chassis. Above that I'd start looking into clustered solutions. Also, if you plan to go beyond 36 drives, I'd start with a dual-CPU barebone like the http://www.supermicro.com/products/system/4U/6048/SSG-6048R-E1CR36N.cfm and an E5-2620 v3 CPU instead of the 1620.

For a FreeNAS server you'll have to follow some design guidelines, i.e. 1GB of RAM per 1TB of storage, ECC memory, Intel/LSI chipsets and such. The barebone matches those requirements quite well. Forget about anything you currently have available. Another option is contacting iXsystems; they can provide you a purpose-built FreeNAS system and, more importantly, a throat to choke in case something fails. They'll ship worldwide.

A 1000VA UPS would really cut it close if that 36-bay server were fully equipped, but since you want to start with fewer disks, it's okay for the start.

Thank you so much! I really got the info I needed. I know you said it already, but I needed the explanation. I got what I need, but I'll keep watching this topic in case someone says something I should know. Thanks everyone!
 

KevinM

Contributor
Joined
Apr 23, 2013
Messages
106
You don't need 10,000rpm disks like the Xe series. However, at that size you'd need SAS expanders to connect all the disks, and then we're back at post #3 with enterprise-grade SAS disks. SATA disks work, but you can encounter weird reliability issues. 7200rpm SAS prices are not totally through the roof. SAS does not improve performance, it improves reliability, which is what you're after. Reds are not really feasible for such an enterprise server; they are only certified for systems with up to 8 drives. Since it's not a home toy but a business server, forget about Reds.

And yes, at that scale multiple single boxes aren't feasible. A single server could comfortably hold 312-384TB of data (depending on vdev configuration) with 6TB drives, and up to 512TB with 8TB drives, spread over 36 bays in the server chassis and another 44 bays in a JBOD chassis. Above that I'd start looking into clustered solutions. Also, if you plan to go beyond 36 drives, I'd start with a dual-CPU barebone like the http://www.supermicro.com/products/system/4U/6048/SSG-6048R-E1CR36N.cfm and an E5-2620 v3 CPU instead of the 1620.

For a FreeNAS server you'll have to follow some design guidelines, i.e. 1GB of RAM per 1TB of storage, ECC memory, Intel/LSI chipsets and such. The barebone matches those requirements quite well. Forget about anything you currently have available. Another option is contacting iXsystems; they can provide you a purpose-built FreeNAS system and, more importantly, a throat to choke in case something fails. They'll ship worldwide.

A 1000VA UPS would really cut it close if that 36-bay server were fully equipped, but since you want to start with fewer disks, it's okay for the start.
This is all good advice. I have two of the previous-generation E1R36Ns (I replaced the hardware RAID cards with LSI 9207 HBAs) and have not had any hardware issues in the nearly two years they've been running. I haven't even had a drive fail. If anyone cares, I'm using Seagate Constellation ES.2 ST33000650SS SAS drives. Yes, they're expensive and only 3TB, but I would not hesitate to use them again.

If I were to do it again, I'd get the current L version of this chassis, the http://www.supermicro.com/products/system/4U/6048/SSG-6048R-E1CR36L.cfm. The E1R36L has an HBA instead of a hardware RAID card you won't need, and there are only 16 DIMM slots instead of the 24 in the N version. Using (relatively) affordable 16GB DIMMs, this would cap you at 256GB of memory. That could theoretically be a limitation, but I suspect not in the OP's case, where there are only a few concurrent users. In my case most of the allocated storage is for VMware virtual machines and direct-access iSCSI, and I've never come close to 256GB; I don't think I've ever topped 128GB.

For 36 drives I ended up using 6 x 6-drive RAIDZ2 vdevs. Going to 5 x 7-drive RAIDZ3 vdevs would provide more redundancy, but at the cost of efficiency and performance. Anyone really concerned about safety would be better off buying two of these boxes and replicating all data from the primary to the backup. For anyone used to NetApp or EMC pricing, this is pocket change.
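The trade-off between the two layouts is easy to see in numbers (a sketch assuming 3TB disks like mine; raw data-disk capacity only, before any free-space allowance):

```python
# Raw data capacity of the two 36-bay layouts discussed above, assuming
# 3 TB disks; parity disks are excluded, fill guidelines not applied.
def layout_tb(vdevs, width, parity, disk_tb=3):
    data_disks_per_vdev = width - parity
    return vdevs * data_disks_per_vdev * disk_tb

print(layout_tb(6, 6, 2))  # 72 TB: 6 x 6-disk RAIDZ2
print(layout_tb(5, 7, 3))  # 60 TB: 5 x 7-disk RAIDZ3, more redundancy
```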
 

Udmr

Cadet
Joined
Jan 30, 2015
Messages
6
Well, after a week I've learned a lot.
I didn't want to open a new thread, so here is my hard choice:

Setup 1
Chassis = SC847E16-R1K28LPB
MoBo = X10SRI-F
CPU = E5-2620V3
RAM = 4x 8GB DDR4 2133Mhz 1.2V ECC REG
Controller = LSI SAS 9207-8i

Setup 2
Chassis = SC847E16-R1K28LPB
MoBo = ASUS Z9PA-D8
CPU = E5-2620V2
RAM = 4x 8GB DDR3 1600Mhz ECC REG
Controller = LSI SAS 9207-8i

Both setups cost the same, and the drives will be 30x 3TB WD Se. I'm planning to create 3 zpools, each a single RAIDZ2 vdev of 10 HDDs. The max number of concurrent users is still 3. Which one should I choose or upgrade? I'm open to anything. Also, the server will not be doing anything with jails etc.
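For what it's worth, the planned layout works out like this (a quick check, assuming RAIDZ2 reserves 2 parity disks per vdev):

```python
# Capacity check for the plan above: 3 pools, each a single 10-disk
# RAIDZ2 vdev of 3 TB drives (8 data disks per vdev after parity).
pools = 3
disks_per_vdev = 10
parity = 2
disk_tb = 3

per_pool_tb = (disks_per_vdev - parity) * disk_tb
print(per_pool_tb)          # 24 TB usable per pool
print(pools * per_pool_tb)  # 72 TB total across the three pools
```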
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
I'd never do setup 2 because of the motherboard. Stick to what everyone else uses.
 