Wednesday, April 17, 2013

ZFS

A very short article on some brief ZFS testing. Initially I was running a Debian VM presenting a single LUN as an iSCSI target to my test host. Running on an ageing laptop, performance was naturally not very good. I then had a vision: what if I could stripe the traffic across multiple devices?

I have two fairly new USB drives lying around here that would do the trick. I created two additional virtual disks, one on each of the USB drives, attached them to the Debian VM, set up ZFS and created a RAID0 pool striping across all three drives. A first dd gave me some promising results: I was writing zeroes at roughly 130MB/s. I'm not too familiar with ZFS and didn't want to waste any time reading a lot, so I just created a 400GB file using the above-mentioned dd method and exported it to my host as an iSCSI LUN. Performance in the VMs, however, was not very good; IOMeter was seeing 80 to at most 200 IOPS (80% sequential at 4KB). For comparison, I created a 1GB ramdisk and attached it to my test VM as a raw device. There I would see >3000 IOPS consistently. On top of that, with the ZFS setup described above I would have serious issues when trying to power on a few VMs. The vCenter client would time out a lot as the VCVA wasn't able to handle its basic functions. At one point I saw a load of 40 inside the VCVA while I was installing a VM from an ISO, nothing overly complicated.
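For reference, the Debian-side setup was roughly along these lines (device names, the target name and the use of tgt as the iSCSI target stack are assumptions for illustration; I didn't keep the exact commands):

```
# Striped (RAID0-style) pool: listing plain vdevs with no redundancy keyword stripes across them.
# sdb/sdc are the virtual disks on the USB drives, sdd the one on the internal disk (names assumed).
zpool create tank /dev/sdb /dev/sdc /dev/sdd

# Quick sequential write test and, at the same time, the 400GB backing file for the LUN.
dd if=/dev/zero of=/tank/lun0.img bs=1M count=409600

# Export the file as an iSCSI LUN using tgt (tgtadm); other Linux target stacks work similarly.
tgtadm --lld iscsi --op new --mode target --tid 1 \
       --targetname iqn.2013-04.local.debian:zfs-lun0
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
       --backing-store /tank/lun0.img
tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL
```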

Even with a hugely underpowered lab like mine I figured there was still quite some tuning I could do. However, with time being an issue, I opted for a preconfigured NAS appliance. A quick look at Google made it pretty clear that I would use Nexenta Community Edition. With ZFS coming from Solaris, I figured I would be better off with a Solaris-based appliance rather than FreeNAS (for which my heart beats, though, as it's based on FreeBSD). So far what I'm seeing looks promising. I configured the ZFS pool and iSCSI target slightly differently from my earlier deployment: three 100GB virtual disks spread across my two USB drives and one internal HD, combined into a pool, and so far I have only exported two 100GB LUNs from the iSCSI target.
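On the Nexenta side the web GUI does the heavy lifting, but under the hood the pool and the COMSTAR iSCSI export boil down to something like this (disk, pool and zvol names are made up; treat it as a sketch, not the exact configuration):

```
# Striped pool over the three 100GB virtual disks (Solaris-style device names assumed).
zpool create tank c1t1d0 c1t2d0 c1t3d0

# Two 100GB zvols to back the two LUNs.
zfs create -V 100G tank/lun0
zfs create -V 100G tank/lun1

# Expose the zvols through COMSTAR.
svcadm enable stmf
svcadm enable -r svc:/network/iscsi/target:default
stmfadm create-lu /dev/zvol/rdsk/tank/lun0
stmfadm create-lu /dev/zvol/rdsk/tank/lun1
stmfadm add-view <LU-GUID>        # repeat with the GUID printed by each create-lu
itadm create-target               # default iSCSI target, discoverable by the ESXi initiator
```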

On the VMware side I created an SDRS cluster, for no other reason than "because I can".


Given the very low specs of my lab I'm quite happy with the results. The IOMeter workload is 80% random IO at 4KB, 2/3 reads, 1/3 writes. At 100% sequential, reads are pretty consistent at around 800 IOPS, peaking at a little over 900 IOPS. Heavy cache usage shows up at 100% sequential writes, where IOMeter just bounces around anywhere from 13 to close to 2000 IOPS.
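For anyone who wants to reproduce the random mix without IOMeter, roughly the same workload can be thrown at the LUN with fio (fio was not part of this test, and the device path is a placeholder):

```
# 4KB blocks, fully random, 2/3 reads / 1/3 writes, direct IO against the raw LUN.
# This approximates the 80% random IOMeter profile; the 20% sequential share is ignored here.
fio --name=lun-test --filename=/dev/sdX --ioengine=libaio --direct=1 \
    --rw=randrw --rwmixread=67 --bs=4k --iodepth=8 \
    --runtime=60 --time_based
```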

Unfortunately this setup is still not quite capable of handling more than one VM, especially when it comes to swapping.

Lab specs:

T61 laptop with a C2D T8300 @ 2.4GHz, 3GB RAM; the Nexenta CE VM has been assigned 1.5GB (nowhere near enough for proper ZFS testing). The internal drive is a WD1600BEVS-08RST2, the externals are WD Elements 1023 and 10A2.
It runs Win7 Pro 64-bit and VMware Workstation 9; there are no VMware Tools installed in the NAS appliance (I assume installing them might kick things up a little more).

T400 running ESXi straight from a thumb drive, with a C2D P8700 @ 2.53GHz and 8GB RAM.

Both laptops are connected directly via a crossover Ethernet cable; the link's bandwidth is 1Gbit/s.
