SMB & CPU for fast upload/download on Windows


Dariusz1989

Contributor
Joined
Aug 22, 2017
Messages
185
Hey

Can anyone tell me whether an E3-1260L could sustain 4 GB/s upload/download over SMB on Windows? I plan to use M.2 drives in RAID for cache, and I hope to saturate 4x 10 GbE connections.

Regards
Dariusz
 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
You might want to provide more information about your current configuration and what you're planning to do.

I also suggest reading up about SLOG and L2ARC.
 

Dariusz1989

Contributor
Joined
Aug 22, 2017
Messages
185
Hey

Sorry for the late reply. I was working through all the FreeNAS setup and the hardware issues I've had lately.

Right, so back to the cache.

Network + workload
Network:
10x render nodes, 2 Gb aggregated connection each
1x workstation, 2x 10 Gb SFP+ aggregated
Switch: T1700G-28TQ, 4x 10 Gb SFP+ ports - I hope I can use it to make the aggregated connections (see the sketch further down).
NAS: 2x 10 Gb SFP+ aggregated

Network work
File size: 5-20 GB
File count: 1-5000
This would be transferred from the NAS to the 10 render nodes.

Local work
File size: 5-10 GB
File count: 5-5000
This would be transferred from the NAS to the workstation, hopefully over the 2x 10 Gb aggregated link.
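
From what I've read, the underlying FreeBSD lagg/LACP setup that the GUI creates looks roughly like this - a sketch only, with made-up interface names (ix0/ix1) and address; on FreeNAS it would normally be done through the Network -> Link Aggregations page, with matching LACP on the switch ports:

    # create an LACP lagg from the two 10 GbE SFP+ ports (hypothetical ix0/ix1)
    ifconfig lagg0 create
    ifconfig lagg0 laggproto lacp laggport ix0 laggport ix1
    ifconfig lagg0 inet 192.168.1.10/24 up

One caveat I've seen mentioned: LACP balances per connection, so a single SMB transfer still only uses one 10 Gb link; the aggregation mainly helps when several render nodes pull data at once.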

What I have/will have.

RAID-Z2, 5x 6 TB HDD
1x 512 GB M.2 SM951
1x 512 GB M.2 SM961
The idea is to have the 2x 512 GB in a mirror for a read/write cache. I hope that this will be "safe" enough against a power outage or a failed M.2 during a file transfer, so the data is protected, and I should be able to get 1.5-2 GB/s read/write. I think it will be limited to the SM951's speed, as it's a tad slower than the newer model.

The problem I'm facing now is that when I add a new volume I can either add a log or a cache device. Is there no way to have both on one drive? I'm a bit lost and struggling to find tutorials/information on it. People seem to be confusing the ZIL with the cache and so on, or maybe I'm the one who's confused :-(

Any info/links to relevant information would be great. I'm going over https://drive.google.com/file/d/0BzHapVfrocfwblFvMVdvQ2ZqTGM/view but I'm failing to understand it...

Edit: over the course of a day, working on projects (I might work on 3 projects at the same time), I might transfer 100-300 GB of unique data. So I think 512 GB should be enough?

Regards
Dariusz
 

Pezo

Explorer
Joined
Jan 17, 2015
Messages
60
With 5 drives you're not going to get GB/s.
Also, if the SM951 doesn't have power loss protection you can't use it as an SLOG.
 

Dariusz1989

Contributor
Joined
Aug 22, 2017
Messages
185
With 5 drives you're not going to get GB/s.
Also, if the SM951 doesn't have power loss protection you can't use it as an SLOG.

Hey

Thanks for the reply.
Yes, I know 5 HDDs won't give me the speed I need. That's why I want to use the M.2 drives for cache. I know I can use one as L2ARC, which is a secondary cache behind the built-in RAM cache. But can I set it up as read & write, or is it read-only? How do I set up a write cache? That's the SLOG, correct?

When I add a volume I can only choose Cache (L2ARC) or Log (ZIL) - is the log (ZIL) the SLOG? Is there a way to put them both on one M.2, or do I need two M.2 drives, one for each? I found tutorials like https://www.penguinpunk.net/blog/freenas-using-one-ssd-for-zil-and-l2arc/ about it, but there's no GUI way to do it? Is that an advisable way of doing it, or should they be on separate SSDs?
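
From that tutorial, the gist (done at the shell rather than the GUI) seems to be to partition the SSD and then add one partition as log and the other as cache - a sketch only, with a hypothetical NVMe device name (nvd0) and pool name (tank):

    # partition the SSD: ~20 GB for SLOG, the rest for L2ARC (hypothetical device nvd0)
    gpart create -s gpt nvd0
    gpart add -t freebsd-zfs -l slog0 -s 20G nvd0
    gpart add -t freebsd-zfs -l l2arc0 nvd0

    # attach the partitions to the pool
    zpool add tank log gpt/slog0
    zpool add tank cache gpt/l2arc0

Whether splitting one device like this is advisable is another question, of course.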

If I set the SM961 as the primary cache and the SM951 as a mirror, then even if I get a power loss the data should be protected on the SM961, and then copied back to the mirrored SM951 when the power comes back? Alternatively, I can get another SM961 or a 960 Pro?
 

Pezo

Explorer
Joined
Jan 17, 2015
Messages
60
An SLOG is a sort of cache for writes, yes, but it's only beneficial for sync writes such as with VMs. You're not going to get more write throughput out of it.
An L2ARC will give you more read throughput, but only if you access data that's already cached over and over. Also it costs RAM.

So if you really need those speeds I think you'll have to get a lot more disks.
 

Dariusz1989

Contributor
Joined
Aug 22, 2017
Messages
185
An SLOG is a sort of cache for writes, yes, but it's only beneficial for sync writes such as with VMs. You're not going to get more write throughput out of it.
An L2ARC will give you more read throughput, but only if you access data that's already cached over and over. Also it costs RAM.

So if you really need those speeds I think you'll have to get a lot more disks.

Hey
Hmm, strange. I thought that the SLOG would take all the writes first and then slowly save them to the spinning HDDs, clearing the SLOG as it goes?

I will try L2ARC when I get my RMA'd HDD back and create a new RAID-Z2 pool with 5 HDDs, and see how that works. With 2 HDDs in a stripe I was getting 300-400 MB/s read/write speeds, so I hope that with 5 HDDs I'd hit around 700-800 MB/s R/W. L2ARC will hopefully bump that up to 2 GB/s on shared files: since one 5 GB file will be downloaded by all 10 render nodes, that should be accelerated. Are there any rules as to what gets stored in L2ARC?
 

Pezo

Explorer
Joined
Jan 17, 2015
Messages
60
ZFS doesn't use the SLOG that way. I don't understand it well enough to be comfortable explaining it to someone else yet ;-)
The L2ARC is an adaptive replacement cache, so a mix of MRU and MFU.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I thought that the SLOG would take all the writes first and then slowly save them to the spinning HDDs
No.

ZFS caches incoming writes in RAM. You can't beat that with any SSD.
However, that's not good enough for sync writes, which must be committed to non-volatile storage before they're acknowledged. So, every pool has a ZFS Intent Log, where sync writes are temporarily stored before being properly written to the filesystem proper. This is slow, so you can add an SLOG device to offload this task from the pool.
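
For reference (a sketch with hypothetical names - pool tank, dataset tank/share, device nvd0): sync behaviour is a per-dataset property, and an SLOG is just a log vdev added to the pool.

    # check whether a dataset forces sync writes (standard, always, or disabled)
    zfs get sync tank/share

    # add a dedicated SLOG device to the pool
    zpool add tank log nvd0

Ordinary SMB file copies are async writes, so by default they never touch the ZIL/SLOG at all - which is why an SLOG won't speed up this kind of workload.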

Are there any rules as to what gets stored in L2ARC?
Whatever gets evicted from ARC.
 

Dariusz1989

Contributor
Joined
Aug 22, 2017
Messages
185
No.

ZFS caches incoming writes in RAM. You can't beat that with any SSD.
However, that's not good enough for sync writes, which must be committed to non-volatile storage before they're acknowledged. So, every pool has a ZFS Intent Log, where sync writes are temporarily stored before being properly written to the filesystem proper. This is slow, so you can add an SLOG device to offload this task from the pool.


Whatever gets evicted from ARC.
Interesting, thanks!
So an M.2 for SLOG will help in this case. When writing to the internal ZFS log, is there a size limit to this log? I suppose the path of a file is client PC > ZFS internal log > ZFS pool HDD. With an SLOG is it client PC > ZFS internal log > SLOG > ZFS pool HDD? If I'm writing a 40 GB file to the NAS with 16 GB of RAM, the file will go directly to the SLOG, correct? Or is it going to be chunked up into pieces that fit in RAM, a piece at a time?

With L2ARC, if a file is 30 GB, will it go directly to L2ARC and then to the client PC?

Does a file that takes 1 GB on the HDD also take 1 GB in RAM, or are they compressed differently?

Thanks!
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
So an M.2 for SLOG will help in this case
No, it won't. There are no M.2 SSDs with power loss protection, which is essential for SLOG applications.

I suppose the path of a file is client PC > ZFS internal log > ZFS pool HDD.
No. The ZIL is never read from unless something went wrong. During normal operation, the cache in RAM is used.

If I'm writing a 40 GB file to the NAS with 16 GB of RAM, the file will go directly to the SLOG, correct?
No.

With L2ARC, if a file is 30 GB, will it go directly to L2ARC and then to the client PC?
That's vaguely what might happen. But with 16GB of RAM, L2ARC will make things slower, rather than faster.
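
One way to sanity-check that before buying anything - a sketch using the FreeBSD ZFS sysctls, which may be named slightly differently depending on the FreeNAS version:

    # current ARC size and hit/miss counters
    sysctl kstat.zfs.misc.arcstats.size
    sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses

Every block cached in L2ARC also needs a header kept in RAM, so with a small ARC the L2ARC ends up eating the memory it was supposed to relieve.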
 

Dariusz1989

Contributor
Joined
Aug 22, 2017
Messages
185
No, it won't. There are no M.2 SSDs with power loss protection, which is essential for SLOG applications.
I could try going with a UPS? But then, if there is a power outage, everything in my home would lose power, so I'd lose the file either way? The SLOG doesn't get cleaned up on boot, does it, so whatever was copied will still be moved down to the pool?
No. The ZIL is never read from unless something went wrong. During normal operation, the cache in RAM is used.
I mean with writes. If I'm saving data to the NAS, then it's client PC > ZIL (or SLOG if it's available) > ZFS pool.
Hmm, why would it not save to the SLOG before saving to the internal pool? The ZIL is an internal "temporary" area where files get saved first and then moved to the HDDs, so if there is an SLOG, would that not be used instead, for the added benefit of speed?

That's vaguely what might happen. But with 16GB of RAM, L2ARC will make things slower, rather than faster.
Mmm, I will probably just have to test and see.

This entire R/W cache system seems to be a lot less "productive" than I thought initially... I was hoping I could just drop in 1 TB of read/write cache and it would be nicely utilized by FreeNAS, but it looks like it does a lot less than that...
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
You really should read up on this; I can't replicate the whole SLOG document here.

The important part you're missing is that the SLOG only receives sync writes, and only a few seconds' worth of them.
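
If you do experiment, it's easy to see for yourself - a sketch, with a hypothetical pool name (tank):

    # per-vdev throughput once a second; watch whether a log vdev sees any traffic during an SMB copy
    zpool iostat -v tank 1

For a plain SMB copy you would typically see the data vdevs doing all the work and the log vdev sitting idle, because those writes are async.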
 

Dariusz1989

Contributor
Joined
Aug 22, 2017
Messages
185
You really should read up on this; I can't replicate the whole SLOG document here.

The important part you're missing is that the SLOG only receives sync writes, and only a few seconds' worth of them.
Well, I did read it; that doesn't mean I understand it correctly.

From what I can tell, it writes data out to the HDDs every 5 seconds. So a 10 Gb connection produces something in the range of 26 GB that then needs to get transferred to the HDDs. More or less? Not sure. If I write 100 GB to the NAS over 10 seconds via a 20 Gb connection, then I need another few minutes to copy that 100 GB from the SLOG to the HDDs, so a bigger SLOG is better, as I can pack more in and write it out over time?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
So a 10 Gb connection produces something in the range of 26 GB that then needs to get transferred to the HDDs.
More like 6-7.
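(Rough arithmetic, assuming the default transaction group interval of about five seconds: 10 Gbit/s is roughly 1.25 GB/s, and 1.25 GB/s x 5 s is about 6 GB.)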

More or less? Not sure. If I write 100 GB to the NAS over 10 seconds via a 20 Gb connection, then I need another few minutes to copy that 100 GB from the SLOG to the HDDs, so a bigger SLOG is better, as I can pack more in and write it out over time?
No, you can't arbitrarily pile up TXGs. They have to be flushed before more are available (well, I think you can have one plus the one being flushed). SLOG size is mostly irrelevant. The only reason you can't throw a tiny SSD at the problem is that its performance would be atrocious.
 

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,506
When writing to the internal ZFS log, is there a size limit to this log?
From http://doc.freenas.org/11/zfsprimer.html:

ZFS currently uses 16 GB of space for SLOG. Larger SSDs can be installed, but the extra space will not be used. SLOG devices cannot be shared between pools. Each pool requires a separate SLOG device. Bandwidth and throughput limitations require that a SLOG device must only be used for this single purpose. Do not attempt to add other caching functions on the same SSD, or performance will suffer.
 

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,506
I could try going with a UPS? But then, if there is a power outage, everything in my home would lose power, so I'd lose the file either way? The SLOG doesn't get cleaned up on boot, does it, so whatever was copied will still be moved down to the pool?

First, you should have a UPS anyway. I feel that power loss protection on SSDs is overemphasized, and the need for a UPS, which covers most of the same situations, is underemphasized.

I am fairly certain that ZFS is smart enough to not write stale data out of a SLOG after a reboot.
 

Dariusz1989

Contributor
Joined
Aug 22, 2017
Messages
185
Back to the SLOG/L2ARC research.

So the more I read, the more mixed info I get. On one hand, I saw some info where people tweaked the flush interval from 5 seconds to 10-20 to allow more data to be dropped in/out and managed.

Currently, to follow the L2ARC rule of 1 GB RAM = 1 TB of HDD: I run 5x 6 TB HDDs in RAID-Z2, meaning I have around 16 TB of space to use. Seeing as I have 16 GB of RAM, I should be almost fine with that rule. I guess I have something in the range of 700-800 MB per 1 TB, or maybe I'm way off if I need 25 GB for 25 TB of total raw space... no idea? Is it 1 GB = 1 TB of usable space or of total space?

I looked at the TrueNAS X10, which has 400 GB of read cache but only 32 GB of RAM and a capacity of around 360 TB. Not sure, but it doesn't look like TrueNAS follows the rule of 1 GB RAM = 1 TB of HDD, as that X10 only has 32 GB of RAM - unless I missed something. It can also have added read/write caches, which I wonder about even more.

Which makes me wonder if there is an error somewhere? Or maybe the performance hit is not as bad as people keep telling me? Or do they use some crazy flash drives? No idea, but I'm curious. I'm not finding many details on the X10 system anywhere... Or maybe TrueNAS has something FreeNAS does not in this area?


No, you can't arbitrarily pile up TXGs. They have to be flushed before more are available (well, I think you can have one plus the one being flushed). SLOG size is mostly irrelevant. The only reason you can't throw a tiny SSD at the problem is that its performance would be atrocious.

For now, say I take the SLOG and configure it to sync every 20 seconds. Via a 20 Gb connection from one PC I should be able to drop around 40 GB into the SLOG, and then after 20 seconds they swap: the 40 GB gets flushed down to the HDDs while the other SLOG fills up with another 40 GB? I know I'm talking about "perfect" transfers. Once the SLOG is maxed out, I take it the transfer speeds drop from M.2 speed down to standard HDD RAID-Z2 speeds?

20 GB is way more than I need for quick write bursts, so an SLOG of, let's say, 50 GB should work perfectly - 2x 20 GB for two SLOGs, plus 10 GB spare?

About L2ARC - well, this is a total mystery to me so far. I'm not running VMs or anything like that. At most a project of mine will use 50 GB and have a few hundred files which will be constantly accessed. I do wonder what rules determine whether a file gets stored in ARC or L2ARC. Maybe I can tweak it to keep them in RAM sooner/longer and thus benefit from the caches more quickly - I guess that is something in the tunables?

Thanks for the info and help. I know I keep pushing the "L2ARC/SLOG" topic a lot, but we all have different needs and, as always, I'm stubborn, lol. Sorry.


First, you should have a UPS anyway. I feel that power loss protection on SSDs is overemphasized, and the need for a UPS, which covers most of the same situations, is underemphasized.

I'm thinking of getting this UPS: APC Back-UPS ES 700. I ran a quick search and it looks like FreeNAS has some kind of UPS control software that I could connect to this UPS, but I'm not sure yet - something that needs a lot more research. I reckon this should give a 10-15 minute battery run to finish a file copy before shutting down the NAS; the NAS draws around 70-100 W at the moment, and this UPS should last 30-60 minutes? I'm fairly sure that if the power ever goes out, all the other PCs will die and interrupt the file transfers, so it's really just about saving what the NAS had copied up to that point.
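
From what I can tell, that UPS software is the built-in UPS service (based on Network UPS Tools), configured under Services -> UPS in the GUI. Once it's enabled, something like this should show the battery state from the shell - a sketch, where "ups" is whatever identifier is set in the service settings:

    # query the configured UPS through NUT (identifier "ups" assumed here)
    upsc ups

The service can also be told to shut the NAS down cleanly after some time on battery, which would cover the "finish the copy, then power off" case.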
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Currently, to follow the L2ARC rule of 1 GB RAM = 1 TB of HDD
What? No, that is going to end badly. You might get away with an L2ARC if you have 32GB of RAM, but not with 16, outside of very weird scenarios.
people tweaked the flush interval from 5 seconds to 10-20
What... do you gain from that? 5 seconds is still plenty to get a nice, long, sequential write for the TXG.
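
For what it's worth, the interval being discussed is, as far as I know, the vfs.zfs.txg.timeout sysctl (settable as a tunable in the FreeNAS GUI), which defaults to 5 seconds:

    # show the current transaction group timeout, in seconds
    sysctl vfs.zfs.txg.timeout

Raising it doesn't raise the throughput ceiling - the pool still has to absorb the same number of bytes per second on average; it only changes how they're batched.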
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
No, it won't. There are no M.2 SSDs with power loss protection, which is essential for SLOG applications.

Not actually true. There are M.2 22110 SSDs with power loss protection, but they're hard to get, and they don't fit in M.2 2280 slots.

http://www.tomsitpro.com/articles/sk-hynix-pe3110-enterprise-m.2-ssd-3d-nand-v2,2-1042-2.html
[Attached image: IMG_0316_w_600.jpg]


This is a good example of what PLP looks like ;)
 