SSD pool for VMware hosts

Status
Not open for further replies.

compudaze

Cadet
Joined
Mar 22, 2013
Messages
5
I'm looking to expand my FreeNAS install. I currently have 10x 1.5TB drives in RAIDZ2 for storage. I also have my VMs running off this pool, and it's sometimes awfully slow. If I try any I/O-intensive tasks, I lose the ability to stream 1080p content from it (buffer city). I want to keep the existing 10x 1.5TB pool for storage, but I want to add another, faster pool for hosting my VMs where the I/O can be heavier. I currently have 6 SATA ports open, but I'm willing to buy another HBA for expansion.

Hardware is a 6-core 3GHz AMD CPU, 32GB RAM, and 2x IBM M1015 HBAs.

I currently have 500GB allocated for my VMs and am only using 150GB. I want the extra space because I plan on adding several more VMs, as well as having temporary fast storage before offloading to the storage pool. Finally, to the meat of the post: which of these configurations is best, and why? Do you have any other recommendations?

2x 500GB SSD mirror
4x 250GB SSD RAID10
3x 250GB SSD RAIDZ1
8x 120GB SSD RAID10
5x 120GB SSD RAIDZ1

Would it be better to add a smaller SSD to the existing pool as a cache (L2ARC) or log (SLOG) device instead?
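For reference, the allocation/usage numbers above come from something like the following (the pool and zvol names are just placeholders for my setup):

Code:
# Show the zvols backing the iSCSI extents: configured size vs. space actually consumed
zfs list -t volume -o name,volsize,used,referenced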

Thanks for your help!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Are you doing NFS or iSCSI? NFS with VMware can be stressful on writes because of all the sync write calls. ZFS also has a tendency to saturate the pool under certain circumstances, and the tuning steps outlined in bug 1531 may offer you somewhat slower but substantially more consistent performance on your existing pool. Your "lose the ability to (read content) from it" problem sounds very similar to some of the issues that led me into bug 1531.
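Not the bug 1531 tuning itself, but a quick sketch of how you'd check whether sync writes are in play and whether the whole pool is getting saturated (pool and dataset names are placeholders):

Code:
# See what sync policy the VM dataset/zvol is using (standard, always, or disabled)
zfs get sync tank/vmstore

# Watch per-vdev throughput while a VM is doing heavy I/O to see if the pool saturates
zpool iostat -v tank 1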

If your VM operations are mostly reads and you are experiencing severe slowness or hangs of your VMs, more ARC would be likely to make things substantially better. With 32GB, you have sufficient ARC to make use of L2ARC. With 150GB in "use", it seems likely that your working set is probably 60GB or less; a 120GB SSD for L2ARC would probably be a comprehensive fix for read performance.
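Adding an L2ARC device is a one-liner; a rough sketch (pool name and device name are placeholders):

Code:
# Attach a single SSD to the existing pool as an L2ARC (cache) device
zpool add tank cache ada6

# It should show up under a "cache" section; it can also be removed later with "zpool remove"
zpool status tank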

Both of those steps are obviously intended as less expensive alternatives to just getting an SSD pool.

However, if you are intent on going all-SSD, and with SSD storage costing as little as it does, it would be really tempting to get four 240GB SSDs and put them in RAIDZ2.

Basically, ZFS doesn't offer "RAID10", but you can have two mirror vdevs in your pool, and that'll do something similar to what people often refer to as "RAID10". However, neither a "RAID10" nor a RAIDZ1 configuration guarantees protection against more than a single drive failure, and since RAIDZ2 in the proposed config takes the same number of drives, and gives the same usable capacity, as "RAID10", why not get the double protection?
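To make that concrete, here's roughly what the two four-drive layouts look like at creation time (pool and device names are placeholders):

Code:
# Two mirror vdevs striped together ("RAID10"-like): ~2 drives of usable space,
# tolerates one failure per mirror
zpool create ssdpool mirror da0 da1 mirror da2 da3

# RAIDZ2 across the same four drives: ~2 drives of usable space,
# tolerates any two drive failures
zpool create ssdpool raidz2 da0 da1 da2 da3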
 

compudaze

Cadet
Joined
Mar 22, 2013
Messages
5
I'm using iSCSI with device extents. Will that still be able to take advantage of L2ARC? Thanks!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
As far as I'm aware, there's no significant difference between ARC and L2ARC for that use case.
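If you want to confirm the cache device is actually getting used once it warms up, the FreeBSD ARC kstats are one way to check (a sketch, assuming the stock sysctl names):

Code:
# L2ARC hit/miss counters
sysctl kstat.zfs.misc.arcstats.l2_hits kstat.zfs.misc.arcstats.l2_misses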
 