Archive for May, 2008

Virtualisation Comparison

May 13th, 2008
Until now I’d assumed that we’d probably end up using VMWare Server 1.0 or 2.0 (if it’s out of beta in time) for the virtualisation project.
The paid-for versions of VMWare have lots of additional features over and above VMWare Server 1.0, some of which are highly desirable to me and which are provided free of charge in some of the GPL virtualisation technologies:
  • Multiple disk snapshots
  • VM State Migration (VMotion – moving a running VM between hosts with no downtime)
  • >4GB RAM allocation per guest OS
  • iSCSI initiator support within the VM technology – such that a guest OS can access an iSCSI target without needing its own software iSCSI initiator.
I thought it therefore prudent to do a bit of investigation into the options before making a final purchasing decision.
The four main candidates are:
  • VMWare
  • VirtualBox
  • KVM
  • Xen
VMWare
Most people have heard of VMWare – it is something of the industry standard.
VirtualBox
Produced by Sun Microsystems, VirtualBox is released under the GPL; however, it has a few add-on modules which are paid-for (such as USB and iSCSI initiator support). VirtualBox can leverage Intel VT technology, but Sun reckons its own software implementation is superior.
KVM
I hadn’t come across KVM before. Written around 2006, it was recently adopted by Canonical as the official Ubuntu virtualisation technology. It relies on Intel VT (or AMD-V) support in the processor to provide much of the functionality, falling back to QEMU for anything the hardware extensions don’t cover.
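As an aside, a quick way to check whether a given processor actually has these extensions on a Linux box is to grep /proc/cpuinfo for the vmx (Intel VT) or svm (AMD-V) flags – no output from the following means no hardware virtualisation support:
egrep '(vmx|svm)' /proc/cpuinfo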
Xen
Xen is pretty well known for being fast and stable. It is very thin at only 50K lines of code, and runs as much of the guest OS on the bare-metal hardware as possible. It is very much aimed at the datacentre market and has many advanced features that are add-ons to other systems.
Quick Reference

|                         | VMware Server                                         | VirtualBox                                | KVM                                                   | Xen                                                   |
| Licence                 | Proprietary (free)                                    | GPL (plus closed-source modules)          | GPL                                                   | GPL                                                   |
| Sponsor                 | –                                                     | Sun Microsystems                          | Qumranet / Canonical                                  | Redhat, Oracle, Sun Microsystems                      |
| Architecture            | –                                                     | –                                         | Kernel Module                                         | Hypervisor                                            |
| Hardware Virtualisation | None                                                  | None                                      | VT / AMD-V                                            | VT / AMD-V                                            |
| Host OS Support         | Linux / Windows                                       | Linux / Windows                           | Linux / Windows                                       | Linux / Windows                                       |
| Guest OS Support        | DOS, OS/2, Windows all versions, Linux, Solaris x86   | DOS, OS/2*, Windows all versions, Linux   | DOS, OS/2, Windows all versions, Linux, Solaris x86   | DOS, OS/2, Windows all versions, Linux, Solaris x86   |
| Live VM Migration       | Y+                                                    | Y+                                        | Y                                                     | Y                                                     |
| iSCSI Initiator         | Y+                                                    | Y+                                        | N                                                     | N                                                     |
| Speed                   | Fast                                                  | Fast                                      | Fast                                                  | Near Native                                           |
| Configuration           | GUI                                                   | GUI / File                                | GUI / File                                            | GUI / XML                                             |

* Only with an Intel VT or AMD-V enabled processor
+ Additional cost

So Much Choice!

May 9th, 2008

So after the testing it looks like iSCSI may well be the way to go. I spoke to Stuart at M-Tec about it and he didn’t seem to think that iSCSI was going to be a problem performance-wise with VMWare. He’s sending over a quote for the basic paid-for VMWare package with a list of the benefits over the free VMWare Server 2.0, which is in beta now but should be in production by the time we go live with this.

The next problem, then, is to spec the hardware. In an ideal world I’d like to stay totally HP, as their kit is reasonably priced and to date has been extremely reliable – however, they’ve discontinued the DL320s, which I had thought would make an ideal iSCSI target with the addition of some RAM and either Ubuntu or OpenFiler. There is no direct replacement yet for the DL320s, which means I’d have to go for a server using 2.5″ SFF drives – which are far more expensive, especially at larger capacities.

HP do a StorageWorks box that might fit the bill – the MSA2000i series enclosures. They’re the same size and density as the DL320s but work out more expensive. One would hope, however, that being purpose-built for the job they would be faster than the DL320s.

Dell offer the AX150i, which is a rebadged EMC enclosure. This is particularly tempting because EMC are a big name when it comes to SANs – and their kit ought to be up to par. I’ve not got a quote for one of these yet because the thought of Dell technical support, based on past experience, makes me shudder. There’s also the obstacle of actually buying one through Dell Education Sales, which makes it an even more unlikely proposition!

Infortrend do an enclosure that would fit the bill nicely. I wasn’t that impressed with the build quality of the Infortrend fibre SAN I put in at BHCC though – and I understand Ian has been having hassle getting support when one of the units failed recently. With that in mind I’ll only really consider Infortrend as a last resort.

The final option is to buy a server from another vendor – IBM or Dell, who do still offer LFF drives in a rack configuration – and proceed as planned, or to source a NOS DL320s via eBay or the like.

And all that before we even consider what VM software to run. I’ve been assuming VMWare, but there are others to consider…

iSCSI Continued

May 9th, 2008

Further to yesterday’s work with iSCSI, I plugged my iSCSI target directly into the workstation using a crossover cable, giving an uncontended 1Gb link between the hosts.

I then ran DISKSPEED again and here are the results:
C:\>disksped.exe 512 i:\
DISKSPEED (C) Alexander Grigoriev, alegr@aha.ru
Test File: “i:\$$test$$.tst”
Test File Size: 512 MB
Testing Uncached New File Write Speed….
Data Transfer: 23.60 MB/s, CPU Load: 11.2%
Testing Uncached Write Speed….
Data Transfer: 32.38 MB/s, CPU Load: 12.7%
Testing Uncached Read Speed….
Data Transfer: 69.73 MB/s, CPU Load: 22.0%
Testing Cached Write Speed….
Data Transfer: 36.78 MB/s, CPU Load: 15.3%
Testing Cached Read Speed….
Data Transfer:1261.09 MB/s, CPU Load:  1.5%

C:\>disksped.exe 512 i:\
DISKSPEED (C) Alexander Grigoriev, alegr@aha.ru
Test File: “i:\$$test$$.tst”
Test File Size: 512 MB
Testing Uncached New File Write Speed….
Data Transfer: 22.97 MB/s, CPU Load: -1.0%
Testing Uncached Write Speed….
Data Transfer: 31.27 MB/s, CPU Load: -1.1%
Testing Uncached Read Speed….
Data Transfer: 69.57 MB/s, CPU Load: 19.8%
Testing Cached Write Speed….
Data Transfer: 36.49 MB/s, CPU Load:  2.1%
Testing Cached Read Speed….
Data Transfer:1213.28 MB/s, CPU Load:  1.5%

It seems, therefore, that iSCSI over a 1Gb LAN is capable of keeping up with direct-attached SATA – and in some instances, such as cached reads, able to significantly outperform it.

There was also a significant increase in CPU load on the initiator when running at gigabit speeds – but it’s still in line with UDMA utilisation, so I’m not too concerned.

The target saw its load average reach 1 during both the 100Mb and 1Gb tests, but with virtually no CPU load, which indicates the load average reflects time the CPU spends I/O-bound waiting for the disks rather than doing actual work. Certainly in the case of 1Gb Ethernet, it looks like we have hit the limit of the SATA disks or controller rather than the LAN.
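
A quick way to confirm that on the target is to watch vmstat while a benchmark is running – the wa column shows the percentage of CPU time spent waiting on I/O (the interval and count below are arbitrary):

vmstat 1 5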

It would be interesting to run the tests again on an iSCSI target using SCSI U320 drives to see where the limit comes there.

It seems HP have discontinued the DL320s quite recently so I’m stumped now as to what would be the best disk platform to use. Perhaps something from IBM or Dell would fit the bill better? I shudder at the thought of Dell support though!

iSCSI

May 9th, 2008

I was surprised how easy it was to get all this set up. I broadly followed the instructions here.

Firstly I installed Ubuntu Hardy 8.04 LTS Server on one of the PCs – it already has iSCSI support built in to the kernel. During the setup I made a / partition, some swap, and an LVM partition, which I then split into one volume group (vg1) containing two logical volumes (homes and vmware).
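
For reference, doing the same from the command line would be something along these lines – the partition device and the sizes here are only illustrative, since I actually did it all through the installer:

pvcreate /dev/sda3
vgcreate vg1 /dev/sda3
lvcreate -L 100G -n homes vg1
lvcreate -L 150G -n vmware vg1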

Next I needed to install the iSCSI target service – apt-get install iscsitarget iscsitarget-source – and then edit the config file to share a block device.

I exported /dev/vg1/homes and /dev/vg1/vmware as iqn.2008-05.uk.sch.brighton-hove.longhill:homes.iscsi and iqn.2008-05.uk.sch.brighton-hove.longhill:vmware.iscsi respectively.
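
The relevant entries in /etc/ietd.conf end up looking something like this (the Lun numbering and the fileio type are the package defaults as far as I recall, so treat it as a sketch rather than gospel):

Target iqn.2008-05.uk.sch.brighton-hove.longhill:homes.iscsi
        Lun 0 Path=/dev/vg1/homes,Type=fileio
Target iqn.2008-05.uk.sch.brighton-hove.longhill:vmware.iscsi
        Lun 0 Path=/dev/vg1/vmware,Type=fileio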

Next I downloaded and installed the Windows XP iSCSI initiator from Microsoft’s website; within 30 seconds I’d got the vmware block device mounted and formatting as NTFS. At this point the PC I’m using is my normal desktop PC and is separated from the iSCSI target by our normal LAN – the bottleneck being the 100Mb links to the desktops at both ends. That said, for normal file I/O operations it seems to be acceptably fast – the only time it’s really shown up is on big transfers.
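
Most of that was point-and-click in the initiator GUI, but from memory the bundled iscsicli tool can do the same job from a command prompt, roughly like this (the target address below is made up):

C:\>iscsicli QAddTargetPortal 192.168.0.10
C:\>iscsicli ListTargets
C:\>iscsicli QLoginTarget iqn.2008-05.uk.sch.brighton-hove.longhill:vmware.iscsi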

I downloaded a copy of DISKSPEED from their website and ran it as follows:

C:\Archives>disksped.exe 512 i:\
DISKSPEED (C) Alexander Grigoriev, alegr@aha.ru
Test File: “i:\$$test$$.tst”
Test File Size: 512 MB
Testing Uncached New File Write Speed….
Data Transfer:  4.16 MB/s, CPU Load: 95.9%
Testing Uncached Write Speed….
Data Transfer:  7.76 MB/s, CPU Load:  3.7%
Testing Uncached Read Speed….
Data Transfer:  9.90 MB/s, CPU Load:  3.7%
Testing Cached Write Speed….
Data Transfer:  8.34 MB/s, CPU Load:  3.1%
Testing Cached Read Speed….
Data Transfer:  9.96 MB/s, CPU Load:  4.9%

The options there create a 512MB file on I:\ (where I have the VMware iSCSI target mounted). I was surprised how high the CPU load was on initial creation so I ran it again:

C:\Archives>disksped.exe 512 i:\
DISKSPEED (C) Alexander Grigoriev, alegr@aha.ru
Test File: “i:\$$test$$.tst”
Test File Size: 512 MB
Testing Uncached New File Write Speed….
Data Transfer:  6.20 MB/s, CPU Load:  3.2%
Testing Uncached Write Speed….
Data Transfer:  7.79 MB/s, CPU Load:  4.1%
Testing Uncached Read Speed….
Data Transfer:  9.92 MB/s, CPU Load:  3.7%
Testing Cached Write Speed….
Data Transfer:  8.38 MB/s, CPU Load:  3.9%
Testing Cached Read Speed….
Data Transfer:  9.93 MB/s, CPU Load:  4.3%

This seems a lot more consistent – perhaps there was some kind of background task running when I did the initial test. The DISKSPEED website reckons UDMA transfers use up to 10% CPU time and that anything less than that is pretty good. I was worried we might see more like 40% CPU load, similar in performance to PIO mode.

For comparison, I ran the same test on the C: drive of my desktop PC (which is identical in specification to the iSCSI target PC).

C:\Archives>disksped.exe 512 c:\
DISKSPEED (C) Alexander Grigoriev, alegr@aha.ru
Test File: “c:\$$test$$.tst”
Test File Size: 512 MB
Testing Uncached New File Write Speed….
Data Transfer: 22.16 MB/s, CPU Load:  2.2%
Testing Uncached Write Speed….
Data Transfer: 29.71 MB/s, CPU Load:  0.9%
Testing Uncached Read Speed….
Data Transfer: 30.92 MB/s, CPU Load:  0.5%
Testing Cached Write Speed….
Data Transfer: 21.17 MB/s, CPU Load:  2.2%
Testing Cached Read Speed….
Data Transfer: 30.15 MB/s, CPU Load:  5.0%

So clearly writing to local disk is less CPU intensive – but interestingly we only see about a 50–60% drop in performance with iSCSI. Bearing in mind we’re using a 100Mb LAN and that traffic is being routed between subnets, I don’t think that’s an unreasonable performance hit.

Next I’ll try linking the two PCs with a crossover cable at 1000Mb with jumbo frames enabled and see what the difference is.
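
On the Linux end, jumbo frames should just be a case of raising the MTU on the interface (eth0 is an assumption here, and the NICs at both ends need to support it too):

ifconfig eth0 mtu 9000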

I also found OpenFiler, which is a small rPath Linux distro designed specifically to serve files – and it has the ability to serve as an iSCSI target. This may work out to be the best way of doing this on a long-term basis.

Virtualization

May 9th, 2008

One of the things we’ve been thinking about of late is that we really ought to retire some of our crustiest servers – they are little more than six-year-old PCs in rack cases – and farm their workloads out to VMWare versions of themselves. Some run completely bespoke software and others have very specific dependencies in their environment that would be a nightmare to attempt to recreate.

I’m looking therefore at what hardware to host this new system on. I saw a very impressive demo of HP SAN kit at Eastbourne College with ESX server and VMotion – but we don’t have the budget for that kind of kit, so it’s going to have to be something semi-home-brew.

First I looked at getting a DAS (Direct Attached Storage) box like the MSA60 or MSA70 from HP – they work out at about £1700.00 plus drives. However, for the performance I think it’s going to need, we’re looking at LFF SAS drives in an MSA60, which run at £170.00 each for 146GB and £300.00 each for 300GB – meaning we can only really afford about 500GB of useable storage at RAID5.

To go with that, we’d be looking at a Proliant DL360 G5 with twin quad-core 3GHz Xeon chips to give us our first eight-way server! We’d have to scrimp on the RAM though, at about 4GB, but that’s something we can address easily at a later date. It would need a P800 Smart Array controller fitted in order to connect to the MSA60. Total cost with 500GB of SAS storage comes out at just over £5000.00 + VAT.

Not bad, but it doesn’t scale well. Sure, I can add more disc easily enough, but the limit will come when that eight-way box runs out of puff – with DAS there’s no way to share that storage with a second server and leverage the investment in the fast drives etc. It also makes any kind of failover configuration much harder than in a SAN environment.

I read an interesting article here suggesting using an HP DL320s storage server (which is approximately the same cost as an MSA60) as an iSCSI target and therefore the basis of an iSCSI SAN. This is a neat solution as it scales in every way: the DL320s with a P800 Smart Array controller fitted can daisy-chain additional storage chassis (e.g. an MSA60), and since it gives our proposed front-end DL360 access via iSCSI, we should be able to access those enclosures and drives from multiple servers over a dedicated copper gigabit network and a clustered filesystem like GFS or OCFS2.

My main concern there, though, is throughput to disk. Last time I played with Fibre Channel SANs at BHCC they were 2Gb in each direction – 4Gb total throughput per link – whereas my DL360 will only have one gigabit copper interface free, giving a maximum of 1Gb, less protocol overheads, in any given direction.
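
To put a rough number on that: 1Gb/s is 1000Mb ÷ 8 = 125MB/s raw, so once TCP/IP and iSCSI overheads are taken off, something around 100MB/s per direction is probably the realistic ceiling.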

I’m therefore going to do a bit of testing on iSCSI with two of our standard desktop PCs – Intel Core2Duo machines running at 1.86GHz with 4MB cache and onboard Intel gigabit NICs.

Universal Imaging

May 8th, 2008

We’ve had a universal Windows XP image at work for about 12 months now and it has served us well – but with the release of XP SP3 it’s time to update the image and throw in another 12 months of development from DriverPacks for good measure.

As ever it hasn’t been as simple as I’d have liked.

Last year I gave up trying to integrate the DriverPacks MassStorage pack into the Sysprep.inf file and relied solely on the Windows XP drivers to get us up and running – the drivers from the DriverPacks roll themselves in automatically after the first boot. That was fine on all our desktops etc., but I found a few laptops that would not take the image – most notably James’s old Sony VAIO. I suppose I should have seen it coming – being a Sony – but I’d love the image to roll painlessly on to that too, proving itself truly universal in the process!

I’ve therefore reverted a couple of SVN changes from last year and now have a machine building a new image using the MassStorage pack from DriverPacks. Initially it seemed to go OK, but I then had to press “c” to accept unsigned drivers over 8,000 times – yes, really, 8,000 – before sysprep stopped prompting and went into deep thought. It was still in deep thought when I left tonight, so no progress report until tomorrow!
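
If memory serves, there is a DriverSigningPolicy key for the [Unattended] section of Sysprep.inf that is meant to suppress exactly this prompt – something to check before the next build, though I haven’t tried it here yet:

[Unattended]
DriverSigningPolicy = Ignore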

I think in the long run it’s going to be easier to take a selection of HWIDs from the MassStorage pack to roll in as required and introduce a manual step into the reseal.bat process – something I’ve managed to avoid to date – grrr!