Have you seen an improvement in read throughput yet? :)
Unless you've got 10GbE, you probably won't see an improvement in read throughput. A single SATA disk can effectively saturate a 1GbE link on its own.
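To put rough numbers on that, here's a quick Python sketch; the figures are typical ballpark values for illustration, not measurements from your box:

```python
# Back-of-the-envelope comparison: a single spinning SATA disk already
# outruns a 1GbE link on sequential reads, so the network is the ceiling.
# Ballpark figures only, not measured results.
GBE_LINK_MBYTES_S = 1000 / 8        # 1GbE raw ~125 MB/s; ~110-115 MB/s after protocol overhead
SATA_DISK_SEQ_MBYTES_S = 150        # typical sequential rate for a modern 7200rpm disk

print(f"1GbE ceiling : ~{GBE_LINK_MBYTES_S:.0f} MB/s")
print(f"Single disk  : ~{SATA_DISK_SEQ_MBYTES_S} MB/s sequential")
print("Bottleneck   :", "network" if SATA_DISK_SEQ_MBYTES_S > GBE_LINK_MBYTES_S else "disk")
```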
L2ARC is basically about reducing latency, which in this case means disk seeks.
First, L2ARC tends to avoid caching long stretches of sequentially accessed data. Those can be pulled from the pool fairly efficiently, so that makes sense.
Next, a single seek operation really shouldn't take long anyway, unless you're already in serious pain (several seconds or more to open a short file); that's the situation a user would actually notice. More often, an L2ARC starts working and the effect is fairly gradual: as the pool gets busier, the L2ARC shoulders more of the seek load. If you're measuring latency, you should see that. But throughput? No, that's probably the wrong question.
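If you want to actually watch that, here's a rough Python sketch that reads the stock kstat.zfs.misc.arcstats counters via sysctl on FreeBSD/FreeNAS (I'm assuming the default counter names; adjust for your platform):

```python
# Compare ARC and L2ARC hit rates to see whether the L2ARC is absorbing
# seeks. Assumes FreeBSD/FreeNAS, where the ZFS kstats are exposed as sysctls.
import subprocess

def kstat(name: str) -> int:
    """Read one counter from kstat.zfs.misc.arcstats via sysctl -n."""
    out = subprocess.run(["sysctl", "-n", f"kstat.zfs.misc.arcstats.{name}"],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

arc_hits, arc_misses = kstat("hits"), kstat("misses")
l2_hits, l2_misses = kstat("l2_hits"), kstat("l2_misses")

print(f"ARC hit rate   : {arc_hits / max(arc_hits + arc_misses, 1):.1%}")
# Every L2ARC hit is an ARC miss that did not become a seek on the pool disks.
print(f"L2ARC hit rate : {l2_hits / max(l2_hits + l2_misses, 1):.1%} of ARC misses")
```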
I just read on a different "best practice" blog that the ARC/L2ARC ratio should be 1:4, which is just about what I have. Is the 5x-10x ratio mentioned above just for index data, and not for overall sizing?
When L2ARC compression arrives in FreeBSD 10, will this formula change?
L2ARC sizing is affected by many factors, and different people have different rules of thumb based on pool defaults, intended use, etc. The best approach is actually to put in a small L2ARC, look at the effect on the workload and the ARC/L2ARC stats, then maybe do a second round and adjust. But knowing that users don't want to do that, most people agree that 1:4 or 1:5 ought to be "safe" as long as you have sufficient RAM (32GB+). cyberjock will pipe in momentarily with his opinion that you should have a minimum of 64GB, and that's okay too: different uses, different experiences. I've got a 60GB L2ARC on a 16GB MicroServer N36L that was doing dedup.
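For what it's worth, here's a toy calculator for that 1:4-1:5 rule of thumb. The 75%-of-RAM guess for ARC is my own assumption for illustration, not a FreeNAS default, so measure your real ARC size before buying an SSD:

```python
# Toy L2ARC sizing helper for the 1:4 to 1:5 ARC:L2ARC rule of thumb.
def l2arc_size_range_gb(ram_gb: float, arc_fraction: float = 0.75,
                        low_ratio: int = 4, high_ratio: int = 5):
    """Return a (low, high) L2ARC size range in GB for a given amount of RAM."""
    arc_gb = ram_gb * arc_fraction      # rough guess at usable ARC, not measured
    return arc_gb * low_ratio, arc_gb * high_ratio

low, high = l2arc_size_range_gb(32)
print(f"32GB RAM -> roughly {low:.0f}-{high:.0f} GB of L2ARC by the 1:4-1:5 rule")
```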