As you may know, server virtualisation has great advantages. In an era of ever-bigger data, it’s one way to cope, adding instant scalability and flexibility. Yet, unfortunately, the ‘Virtualisation Cycle’ can also become something of a ‘Vicious Circle’. It goes something like this…
Virtualisation means you save money on servers. But it also creates a massive – and often unexpected – surge in I/O, which causes bottlenecks in your storage layers. So you need more capacity, and you face a choice: put up with the bottlenecks, spend the money you’ve just saved on servers on more storage, or add more VMs – virtual machines. More VMs increase VM density on your servers, which increases I/O still further and creates yet more demand for bandwidth – costing you yet more of the money you’re supposed to have saved!
It sounds a bit like an IT version of that old chestnut ‘There’s a hole in my bucket, dear Liza’. And it means you can’t realise the full benefits of virtualisation – unless you break the circle and remove the bottleneck. The way to do that economically is not to add more expensive storage – whose performance has not kept pace with other areas – but to optimise bandwidth at source. The virtual way.
Q. How to dig yourselves out of the server virtualisation hole?
A. More virtualisation!
Old Labour bruiser Denis Healey once famously advised a struggling fellow politician: ‘When in a hole, stop digging.’ I’m going to say the opposite – sort of. Virtualisation may have got you into this mess, but it can also get you out. Essentially, you need virtualised I/O to meet, cost-effectively, the levels of demand you’re now facing.
You can deploy this either at host level, between storage and network, or in your infrastructure. Either way, the principle is to share I/O capability across all your VMs, rather than the one-server, one-application, one-I/O arrangement that used to hold. Each VM competes for I/O resources, with new Ethernet or Fibre Channel technology making sure the bandwidth on the host is up to the job.
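To make the idea of VMs competing for a shared host link concrete, here is a minimal sketch of proportional (weighted fair) bandwidth sharing. All names, weights and figures are hypothetical illustrations, not any vendor’s API:

```python
# Hypothetical sketch: a host's single I/O link is shared across all VMs
# in proportion to assigned weights, instead of one dedicated link per server.
def share_bandwidth(total_gbps, vm_weights):
    """Split total_gbps across VMs in proportion to their weights."""
    total_weight = sum(vm_weights.values())
    return {vm: total_gbps * weight / total_weight
            for vm, weight in vm_weights.items()}

# Three illustrative VMs compete for a single 10 Gbps host link.
alloc = share_bandwidth(10.0, {"db-vm": 5, "web-vm": 3, "batch-vm": 2})
# db-vm → 5.0 Gbps, web-vm → 3.0 Gbps, batch-vm → 2.0 Gbps
```

In practice the hypervisor or adapter firmware enforces a scheme like this continuously, but the principle is the same: one fast shared pipe, carved up by policy rather than by physical cabling.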
The trick now is making sure the right VMs get the right amount of bandwidth at the right time. As virtualisation moves on to conquer the more critical applications in the data centre, this becomes ever more important.
The options mentioned above are virtualising I/O at network adapter level and at infrastructure level. With the former, you need fewer high-speed adapters for a larger number of machines. Adapters can be divided into multiple virtual adapters, or have bandwidth allocated to pre-determined groups of VMs that need guaranteed performance levels. As well as allowing greater VM density in the first place, this frees up CPU resources on the host to support that extra density. A virtual switch on the card can also keep VM-to-VM traffic from leaving the server, improving VM and network performance still more.
As noted, you can also go for virtualised I/O at infrastructure level, either by virtualising the switch infrastructure or by adding a virtual I/O gateway. The latter can share a single interface card across multiple servers, resulting in better connectivity and resource optimisation – and less need to upgrade in future. Which you choose really depends on your short- and long-term goals.
Whichever option you take, virtualised I/O should bring you much greater performance and flexibility and allow you to keep pace with the demands of your server infrastructure without adding yet more hardware. You can virtualise your mission-critical applications with confidence, enjoy better ROI on server virtualisation – and turn that vicious circle into a virtuous cycle.
For more advice on server virtualisation issues, call King of Servers on 0845 611 8696 or email sales@kingofservers.com.