Mirror hard disk with SSD - improved performance?

Status
Not open for further replies.

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Posted in OT since it deals with a Proxmox host using ZFS on Linux...

I host a number of VMs on one blade of a Dell C6100 with 2 X5650s, 48 GB RAM, and a mirrored pair of 2 TB WD Black disks. The system is pretty badly I/O-bound, especially when two or three VMs are trying to start up at the same time (as is the case at system boot). So, I've been thinking for a while about replacing the disks with SSDs.

Now I see that 2 TB SATA SSDs are available, which would make the process relatively straightforward--install them in 2.5"-to-3.5" adapters, replace the existing disks one at a time, and off I go. The problem is that 2 TB SSDs are expensive. So, I'm wondering how much improvement I'd see if I replaced just one of the hard drives for now and did the second one later. I wouldn't expect it to help writes much (they still have to go to both devices), but it could potentially do a lot of good for reads. Thoughts?
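For concreteness, the one-at-a-time swap I have in mind would be something like the following (pool name and device paths are placeholders; by-id paths would be safer in practice, and the SSD needs to be at least as large as the disk it replaces):

Code:
# Attach the SSD as a third mirror leg so redundancy is never reduced
zpool attach tank /dev/sda /dev/sdc
# Wait for the resilver onto the SSD to complete
zpool status tank
# Then drop one of the old HDDs out of the mirror
zpool detach tank /dev/sda

A plain "zpool replace tank /dev/sda /dev/sdc" would also work, but attach-then-detach keeps both existing copies intact while the resilver runs.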
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
It's an interesting question. I guess it depends on how exactly ZFS distributes reads on crazy-asymmetric mirrors.
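One way to find out empirically would be to watch the per-device read counters while the pool is busy; as a sketch, with a hypothetical pool name:

Code:
# Per-device read/write ops and bandwidth, refreshed every 5 seconds
zpool iostat -v tank 5

If ZFS is biasing reads toward the faster leg, it should show up as a lopsided read count between the two mirror members.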

With compressed ARC, I'm wondering if an L2ARC might be a viable option to improve your situation by offloading as many reads as possible to a separate device.
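Adding (and later removing) a cache device is cheap to experiment with; a minimal sketch, assuming the pool is named tank and the SSD shows up as /dev/sdc:

Code:
# Add the SSD as an L2ARC device
zpool add tank cache /dev/sdc
# Check ARC/L2ARC hit rates after the cache has had time to warm up
arc_summary
# Cache devices can be removed at any time without data loss
zpool remove tank /dev/sdc

(On older ZFS on Linux installs, the stats script may be named arc_summary.py instead.)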
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I don't know about performance, but I do essentially that on both my desktop and my media server:

Desktop - 64GB SATA DOM + 500GB spinning HDD
Media server - 1TB mSATA + 2TB spinning HDD

For the desktop, I take a 64GB partition out of the 500GB drive and use that as the mirror for the OS.
In the media server's case, I take a 25GB partition on each device and use that for my root pool.
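If anyone wants to replicate that layout, the root pool is just a mirror of two same-sized partitions, one per device; something like this (the by-id names here are made up for illustration):

Code:
zpool create rpool mirror \
    /dev/disk/by-id/ata-SATADOM_64GB-part1 \
    /dev/disk/by-id/ata-WDC_HDD_500GB-part2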

Is there some tool I can use to check out the performance?
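Something like fio, maybe? E.g., a quick random-read test along these lines (paths made up), or is zpool iostat -v the better place to look?

Code:
fio --name=randread --directory=/rpool/test --rw=randread --bs=4k --size=1G --runtime=60 --time_based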

Note that both machines are Gentoo Linux, using ZFS for both the root pools and the media pool (which is un-mirrored).
 