Wednesday 13 June 2012

Ubuntu NAS - ZFS on the HP Microserver

HP Microserver N40L
I had loads of fun setting up FreeNAS on an HP Microserver and can highly recommend the combination for a small home/office file server. But I had a few problems, mainly with the performance and stability of its iSCSI target. So I decided to pull out the FreeNAS/FreeBSD root disk and put in a new one, on which to try Ubuntu 12.04 Server in combination with the native Linux ZFS filesystem.

My target was to saturate gigabit Ethernet (roughly 1Gb/s) for both reads and writes out of the little Microserver - ideally over iSCSI, but NFS would be OK too. Amazingly, although the former appears to be a little optimistic, the latter is achievable.

I had set up FreeNAS with 3x2TB disks in a RAIDZ1 array. I replaced the root disk with a new one, leaving the three ZFS disks in situ, and installed Ubuntu 12.04 Server on the fresh root disk. I created a 'nas' user account for the box and got the installer to put SSH on for me, but nothing else.

ZFS installation is straightforward - I installed bonnie++ too so I could do some benchmarks on the box.

# apt-get update 
# apt-get upgrade 
# apt-get install python-software-properties 
# apt-add-repository ppa:zfs-native/stable 
# apt-get update 
# apt-get install ubuntu-zfs nfs-kernel-server sysstat iscsitarget iscsitarget-dkms open-iscsi bonnie++
Once that was installed, mounting the existing ZFS array was just a case of:

 # zpool import -f nasvol
and /nasvol appeared by magic on the root of the filesystem.
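It's worth a quick sanity check that the pool and its datasets all came back in one piece:

# zpool status nasvol
# zfs list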

I then set about benchmarking the performance of ZFS, ext4 (the root filesystem), NFS and iSCSI using bonnie++. The fastest combination turned out to be a 4-drive RAID 10 ZFS setup shared over NFS. I couldn't get iSCSI to perform without large latency, I/O coming in bursts, or high CPU. NFS on the other hand was pretty sweet - locally the pool managed 180MB/s write and 250MB/s read, and over NFS speeds topped out at 96MB/s write and 105MB/s read.
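For the record, each bonnie++ run was along these lines - the directory, user and size here are only examples, and the size wants to be at least twice the box's RAM so the numbers aren't flattered by caching:

# bonnie++ -d /nasvol/bench -u nas -s 16g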

To get this performance, a few tips:

  1. ZFS performance is woeful (40MB/s) unless you create the zpool with '-o ashift=12'. This aligns it to 4K sectors, which matches modern SATA drives (there's a sketch of the commands just after this list).
  2. More spindles, more speed. RAID 10 gives it four disks to read from and two mirrored pairs to spread the writes across.
  3. RAIDZ1 is great, but not with the Microserver's horsepower. Get an i5/i7 and you'll be able to enjoy deduplication too!
  4. NFS exports set up with rw, async, no_subtree_check and wdelay. (async is why the UPS is on order.)
  5. NFS client set up to use rsize/wsize 65536, tcp, and noatime.
  6. It's possible to disable checksumming in ZFS, but it didn't make much difference in this setup.
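For reference, that whole recipe looks roughly like this - the pool name, device names and network address are illustrative, so adjust to suit:

# zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde

/etc/exports on the server, followed by an exportfs -ra:

/tank 192.168.0.0/24(rw,async,no_subtree_check,wdelay)

and the mount on the client:

# mount -t nfs -o rsize=65536,wsize=65536,tcp,noatime nas:/tank /mnt/tank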
As I mentioned above, iSCSI performance didn't really compare well with NFS. My ZFS zvol iSCSI target managed 70MB/s writes and 50MB/s reads. None of the combinations of LVM2/ext4/iSCSI I tried managed more than 80MB/s write, although some did manage 100MB/s read.
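For anyone wanting to repeat the zvol test, the shape of it was roughly this - the IQN, names and size are examples, and the zvol turns up under /dev/zvol/<pool>/ once created:

# zfs create -V 100G nasvol/testvol

then a stanza pointing at it in ietd.conf (under /etc/iet/ on 12.04):

Target iqn.2012-06.org.jfdi:testvol
        Lun 0 Path=/dev/zvol/nasvol/testvol,Type=blockio

plus ISCSITARGET_ENABLE=true in /etc/default/iscsitarget before restarting the iscsitarget service.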






Ach! NFSv4 is mangling my UIDs

I just noticed my NFS mounts had horribly long UIDs/GIDs which look suspiciously like 2^32 - 1 (or close to it). It turns out that with NFSv4, idmapd is used to map UIDs/GIDs to and from the server. For my simple purposes, my first instinct was: how do I turn this off! But it appears that one source of its confusion is just the domain name it is configured with.

Changing idmapd.conf to set the domain to the same value at both ends did the trick - now the IDs are as expected on both client and server.
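For the record, the relevant bit of /etc/idmapd.conf, identical on client and server (the domain itself is just an example):

[General]
Domain = home.lan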

**update** Only on the 12.04 server. On my 11.10 server the IDs were still mangled, and worse, I couldn't restart idmapd using service restart. Manually starting rpc.idmapd didn't work either. I resorted to:

# apt-get remove nfs-common libnfsidmap2
# apt-get purge nfs-common libnfsidmap2
# apt-get install nfs-common libnfsidmap2

which, after putting the correct domain back into /etc/idmapd.conf, did the trick.


Friday 8 June 2012

Stop Ubuntu 12.04 NetworkManager meddling!

Annoying Notifications!
After adding a couple of extra NICs to my work Ubuntu 12.04 desktop PC - part of ongoing work with our office NAS/iSCSI server - the network icon was blinking away trying to do something useful with the new cards. Mildly annoying in the office; horrid when accessing the machine remotely, as the notifications eat lots of bandwidth. At the same time it was generating a fair amount of guff in /var/log/syslog.

Easy, I thought - I'll go and fix them in /etc/network/interfaces... ...hang on:
auto lo
iface lo inet loopback
Where's eth0 gone?! It seems that by default everything is now managed by NetworkManager - just what I want most of the time, especially on my notebook, but not what I want right now. Should I disable it?

root@welbeck:/run# service network-manager stop
network-manager stop/waiting
root@welbeck:/run# ifconfig
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:118 errors:0 dropped:0 overruns:0 frame:0
          TX packets:118 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:8230 (8.2 KB)  TX bytes:8230 (8.2 KB)
eth0 has gone! Not quite what I had in mind. A quick dig into the manpage for NetworkManager.conf (in /etc/NetworkManager) though turns up this:

      managed=false | true
              Controls whether interfaces listed in the 'interfaces' file are
              managed by NetworkManager. If set to true, then interfaces
              listed in /etc/network/interfaces are managed by NetworkManager.
              If set to false, then any interface listed in
              /etc/network/interfaces will be ignored by NetworkManager.
              Remember that NetworkManager controls the default route, so
              because the interface is ignored, NetworkManager may assign the
              default route to some other interface. When the option is
              missing, false value is taken as default.
Sounds like it does Just The Right Thing(TM) then. So, a quick hack of /etc/network/interfaces:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet manual

auto eth2
iface eth2 inet manual
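It's also worth a glance at /etc/NetworkManager/NetworkManager.conf to make sure the ifupdown plugin really is leaving those interfaces alone - on a stock 12.04 install the relevant section should read:

[ifupdown]
managed=false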
And a service network-manager stop/start... 
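Spelled out - and remembering that as NetworkManager now ignores eth0, it needs bringing up by hand the first time:

# service network-manager stop
# service network-manager start
# ifup eth0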


And it's wound its neck in.






Tuesday 5 June 2012

Host a VirtualBox VM on a FreeNAS iSCSI target

Set up a FreeNAS iSCSI Target

Pre-requisites

  1. A FreeNAS server with a ZFS filesystem. I have FreeNAS 8.0.4-p2 with RAIDZ1 across 3x2TB disks
  2. Another computer on the same network with VirtualBox installed - in my case an Ubuntu 11.04 workstation (sudo apt-get install virtualbox virtualbox-guest-additions virtualbox-guest-additions-iso)

Create ZFS Volume

Easy with FreeNAS - it just needs a name and a size. In my case, 'coppervmroot' and 30GB.
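Under the hood this is the equivalent of something like the following from the FreeNAS shell (the pool name here is illustrative):

# zfs create -V 30G tank/coppervmroot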

Set up iSCSI

  1. Create a 'portal' - the default 0.0.0.0:3260 is fine.
  2. Create an 'authorized initiator' using the defaults (all/all)
  3. Create a target. Name: coppervmroot, type 'disk', portal group '1', initiator group '1', auth method 'none'.
  4. Create a device extent, 'coppervmrootextent', referring to the coppervmroot zvol
  5. Create an 'associated target', in this case linking the coppervmrootextent extent to the coppervmroot target
  6. Turn on the iSCSI service!

Testing from Ubuntu

Get the iscsi tools:
sudo apt-get install open-iscsi open-iscsi-utils
See what targets are exported (as root):

# iscsiadm -m discovery -t sendtargets -p 10.xx.xx.xx:3260
10.xx.xx.xx:3260,1 iqn.2002-02.jfdi.org.tgt:coppervmroot

Try attaching the iSCSI volume as a local block device:
# iscsiadm -m node --targetname iqn.2002-02.jfdi.org.tgt:coppervmroot --portal 10.xx.xx.xx:3260 --login
Logging in to [iface: default, target: iqn.2002-02.jfdi.org.tgt:coppervmroot, portal: 10.xx.xx.xx,3260]
Login to [iface: default, target: iqn.2002-02.jfdi.org.tgt:coppervmroot, portal: 10.xx.xx.xx,3260]: successful
And you should see a device appear in /var/log/syslog:
# tail -f /var/log/syslog
Jun  5 ... sd 4:0:0:0: [sdc] Attached SCSI disk
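If you'd rather not watch the log, a quick look at the new device (using whatever name the log reported) confirms it's there:

# fdisk -l /dev/sdc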
OK, so log out of it again:
# iscsiadm -m node --targetname iqn.2002-02.jfdi.org.tgt:coppervmroot --portal 10.xx.xx.xx:3260 --logout
Logging out of session [sid: 1, target: iqn.2002-02.jfdi.org.tgt:coppervmroot, portal: 10.xx.xx.xx,3260]
Logout of [sid: 1, target: iqn.2002-02.jfdi.org.tgt:coppervmroot, portal: 10.xx.xx.xx,3260]: successful

Set up a VirtualBox VM with the iSCSI target as its HDD

You can't (yet!) attach an iSCSI volume from the VirtualBox GUI, but you can do it with the vboxmanage command:
vboxmanage storageattach copper --storagectl sata0 --port 1 --device 0 --type hdd --medium iscsi --server 10.xx.xx.xx --target iqn.2002-02.jfdi.org.tgt:coppervmroot 
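Note that the VM ('copper' here) and the storage controller ('sata0') have to exist already under exactly those names - if you're not sure what your controller is called, vboxmanage showvminfo will list it:

# vboxmanage showvminfo copper | grep -i storage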
That's it. Boot off a CD and install onto the iSCSI HDD.

N.B. During installation my VM disconnected a couple of times. Resuming the VM (which pauses automatically when this happens) seemed to do the trick.