Using iozone to measure FreeNAS performance


leonroy

Explorer
Joined
Jun 15, 2012
Messages
77
After spending a few days playing with dd, sysbench and iozone, it's clear that iozone gives the most comprehensive overview of file system performance as block and file sizes vary.

Two items I'm not 100% clear on when using iozone are when to use the -c and -e options.

From what I gather:

-c
Includes close() in the timing calculations. Commit time for NFS V3 is included in the measurements by including file closure times (“-c”).

-e
Include flush (fsync,fflush) in the timing calculations. This will reduce the NFS V3 client side effects due to caches. This is particularly useful when comparing different platforms, if one wishes to eliminate cache effects and concentrate on other platform differences.

So basically:

If I'm trying to test the performance of my whole disk system by running iozone locally, i.e. gauging cache, memory and the various sub-levels before hitting spinning disks, I don't want -c or -e. Using those options will simply produce a relatively flat graph.

However, if I want to test network performance I will want to use -c and -e to simulate, say, VMDKs over NFS, which forces a sync on every write.

Is that correct?
 

dlavigne

Guest
Were you able to verify whether or not your assumption is correct?
 

leonroy

Explorer
Joined
Jun 15, 2012
Messages
77
With iozone there were two scenarios I wanted to run, each requiring different arguments:
  • Local
  • NFS

Local

$> iozone -Raz -g 64G -f /mnt/ZFS_VOL/ZFS_DATASET/testfile -b iozone-MY_FILE_SERVER-local-size-64g.xls

NFS

$> iozone -Razc -g 64G -U /mnt/MY_FILE_SERVER -f /mnt/MY_FILE_SERVER/testfile -b iozone-MY_FILE_SERVER-nfs-size-64g.xls

OR

$> iozone -RazcI -g 64G -f /mnt/MY_FILE_SERVER/testfile -b iozone-MY_FILE_SERVER-nfs-size-64g.xls

To explain: I set -g (the maximum file size) to 2x RAM. It takes a LOT longer to test (6-12 hours) but the results are much more useful, since they give a nice 3D surface chart showing the sustained speeds you can expect for a given file size as it hits CPU cache, memory cache, SSD cache and finally spinning disks.

If you don't set the test file size to 2x RAM then you'll only be measuring your cached performance (a graph with an upward trajectory) and not the sustained performance (where the graph starts going down). That's fine if your usage of FreeNAS is very bursty, but it's not a complete result.
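
As a quick sanity check before picking -g, you can read the installed RAM from the shell (hw.physmem reports bytes; the 32GB box below is just an example):

$> sysctl -n hw.physmem
34359738368

34359738368 bytes is 32GB, so -g 64G is the right choice there.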

For NFS testing you ideally want the first variant, whose -U option unmounts and remounts the NFS share between tests, which removes the effect of caching. This requires an fstab entry so the test can mount/unmount successfully. Unfortunately I encountered issues with the remount failing after a few tests. So if you run into that (or can't be bothered to create an fstab entry), use -I, which uses direct I/O for all file operations, telling the filesystem to bypass the buffer cache and go directly to disk.
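
For reference, the client-side fstab entry for the -U variant might look something like this (the hostname and export path are placeholders matching the commands above):

# /etc/fstab on the client (placeholder names)
MY_FILE_SERVER:/mnt/ZFS_VOL/ZFS_DATASET /mnt/MY_FILE_SERVER nfs rw 0 0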

If I've made any errors in my understanding of iozone above please someone let me know.

Wrote a blog post on using IOzone here:
http://www.leonroy.com/blog/2015/10/storage-benchmarking/
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Just a word of warning: performance depends greatly on the location on the disk. With modern 4TB or 6TB disks, the difference can be greater than twofold.

Read the first 64GB from the raw disk device, calculate the speed, and then compare it to the speed when reading the last 64GB of the disk.
 

leonroy

Explorer
Joined
Jun 15, 2012
Messages
77
An excellent point. Do you have any suggestions for reading from the end or beginning of the disk apart from having it empty/full?
 


solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
leonroy said:
An excellent point. Do you have any suggestions for reading from the end or beginning of the disk apart from having it empty/full?


I would use 4096-byte blocks (most disks nowadays do not have 512-byte sectors) or 131072-byte blocks (ZFS uses 128k blocks).
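
If you're not sure what sector size your disk reports, diskinfo will show both; on a typical 512e drive you'd see something like the following, with stripesize being the physical sector size:
Code:
# diskinfo -v /dev/ada0 | egrep 'sectorsize|stripesize'
	512         	# sectorsize
	4096        	# stripesize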

Say you want to read the first 64GB from /dev/ada0
Code:
# dd if=/dev/ada0 bs=4096 count=16777216 of=/dev/null
16777216+0 records in
16777216+0 records out
68719476736 bytes transferred in 628.846596 secs (109278602 bytes/sec)
#
# dd if=/dev/ada0 bs=131072 count=524288 of=/dev/null
524288+0 records in
524288+0 records out
68719476736 bytes transferred in 582.323460 secs (118009116 bytes/sec)
#


I will enlist badblocks and CTRL-C to help me figure out how to read the last 64GB of a disk :D. In my examples /dev/ada0 is a slow 1TB disk and /dev/ada1 is a medium-performance 6TB disk.
Code:
# badblocks -b 4096 -sv /dev/ada0
Checking blocks 0 to 244190645
Checking for bad blocks (read-only test): ^C0.00% done, 0:00 elapsed. (0/0/0 errors)

Interrupted at block 7584
# bc
244190645-16777216+1
227413430
# dd if=/dev/ada0 bs=4096 iseek=227413430 of=/dev/null
16777216+0 records in
16777216+0 records out
68719476736 bytes transferred in 1179.37676 secs (58267634 bytes/sec)
#
Code:
# badblocks -b 4096 -sv /dev/ada1
Checking blocks 0 to 1465130645
Checking for bad blocks (read-only test): ^C0.00% done, 0:00 elapsed. (0/0/0 errors)

Interrupted at block 5888
# bc
1465130645-16777216+1
1448353430
# dd if=/dev/ada1 bs=4096 iseek=1448353430 of=/dev/null
16777216+0 records in
16777216+0 records out
68719476736 bytes transferred in 804.985269 secs (85367372 bytes/sec)
#
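
If you'd rather not interrupt badblocks by hand, you can compute the seek offset from diskinfo instead (a sketch assuming FreeBSD, where the third field of diskinfo's default output is the media size in bytes; skip= is the portable spelling of iseek=):
Code:
# mediasize=$(diskinfo /dev/ada0 | awk '{print $3}')  # disk size in bytes
# skip=$((mediasize / 4096 - 16777216))               # first 4k block of the last 64GB
# dd if=/dev/ada0 bs=4096 skip=$skip of=/dev/null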
 