Server virtualisation is happening, and Quocirca's research shows that growth in uptake is accelerating, albeit from a low starting point. However, there is a problem – virtualisation technologies are non-standardised, and a hypervisor from one vendor is not compatible with another's.
While VMware was the main – indeed, the only real – player on the block, there was no problem. A customer decided to buy VMware, took its tools and got on with it. The fact that the abstraction layer depended on proprietary technology was neither here nor there.
However, there are now new kids on the block, and some older ones with renewed vigour. Probably the biggest new kid will be Microsoft, whose Hyper-V will soon start shipping as a means of virtualising Windows Server 2008. Citrix is pushing its own offerings, building on its long-standing understanding of desktop virtualisation along with its recent purchase of XenSource as a strong base platform going forward.
HP is also bringing its Unix-based virtualisation capabilities to the fore in the guise of VSE, providing the capability for its Integrity and other Intel-architecture systems to be virtualised.
Underneath this, we cannot write off the work that Intel has done with its own silicon virtualisation capabilities (known as VT), nor what AMD is doing with AMD-V. Although these are currently more a means of providing advanced functionality for other vendors' hypervisors, we can expect the functionality to continue to improve and grow as time goes on.
There is also IBM, with advanced virtualisation capabilities built into its Power range of CPUs, long-term knowledge of what virtualisation is all about from its mainframe and mid-range servers, and its own software hypervisor capabilities to back this up.
Then there's Sun, with its own system – Logical Domains – on the server side. It has also bought Innotek, whose VirtualBox technology provides desktop and server virtualisation across multiple host operating systems.
We can also look at what smaller companies are up to – Parallels has come to prominence with what it can do on MacOS/OS X systems, and this is driving adoption of its Windows equivalent; Real-Time Systems has the RTS Hypervisor; and Green Hills Software has the Integrity Padded Cell.
What it all points to is the likelihood that an organisation will end up with a heterogeneous virtualised environment, with two or more main virtualisation technologies creating issues for management, provisioning and auditing of the environment.
One of the main needs here will be for image management. A function or application has to be provisioned into the virtualised environment, and the best way of doing this is from virtual images. Unfortunately, a virtual image saved on a VMware platform (often known as a virtual appliance) cannot easily be deployed under a different virtualised environment, as images are dependent on the proprietary format of the specific virtualisation engine.
One company looking to make this a problem of the past was PlateSpin, now swallowed by Novell. PlateSpin provides virtual image management, and was bringing to market the capability to carry out on-the-fly virtual-to-virtual (V2V) conversions from one format to another. This not only makes it easier to provision the function, service or application required at any given moment, but also eases image management itself. For example, an application image will need patching or upgrading at intervals. Having just one image that can be provisioned to multiple virtualised environments is far more manageable than having to patch multiple images, one for each environment.
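The single-master-image model described above can be sketched in a few lines. This is purely illustrative – the format names and converter table are hypothetical, not any vendor's real API – but it shows why one patched image plus on-the-fly V2V translation beats maintaining a separate image per hypervisor:

```python
# Hypothetical sketch: one master image, converted on demand per hypervisor.
# Format names and the converter table are illustrative only.

MASTER_FORMAT = "raw"

# (source format, target format) -> converter; one entry per supported target
CONVERTERS = {
    ("raw", "vmdk"): "raw_to_vmdk",  # e.g. for a VMware host
    ("raw", "vhd"): "raw_to_vhd",    # e.g. for a Hyper-V host
    ("raw", "xvda"): "raw_to_xvda",  # e.g. for a Xen-based host
}

def provision(master_image: str, target_format: str) -> str:
    """Translate the single patched master image into the target
    hypervisor's native format at provisioning time (V2V)."""
    if target_format == MASTER_FORMAT:
        return master_image
    if (MASTER_FORMAT, target_format) not in CONVERTERS:
        raise ValueError(f"no V2V path to {target_format}")
    # In reality this step would invoke a real conversion tool.
    return f"{master_image}.{target_format}"

# Patch once, deploy everywhere: only the master image is ever updated.
image = "app-v1.2.img"
print(provision(image, "vmdk"))  # app-v1.2.img.vmdk
print(provision(image, "vhd"))   # app-v1.2.img.vhd
```

The point of the sketch is the asymmetry: a patch touches one artefact, while the per-environment formats are derived automatically at provisioning time.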
At the moment, the jury is still out on how Novell plans to play the PlateSpin card it now holds. Most other players have a vested interest in keeping virtualisation proprietary, and Quocirca does not expect those who stand to gain much of their revenue from selling their own hypervisor, or who believe they can take the big guys on directly, to put great effort into ensuring full interoperability with other vendors' systems.
Neither do we see any strong moves towards standardising the way hypervisors or other virtualisation technologies work – each vendor is ploughing its own furrow, not looking to either side, intent on building the next great mousetrap (to mix a few metaphors).
So, does virtualisation face the problem of strangling itself before it really takes off? Highly doubtful – virtualisation has too many things going for it to fail in this way. Organisations are well along the road to understanding how virtualisation can help them in optimising utilisation levels of their hardware assets, in lowering power and cooling requirements and in providing a more flexible platform for the business to work against. But, if the use of multiple abstraction technologies means that we still end up with different islands of virtualised resources, have we really moved on far enough?
Maybe what is required is for a company to come to the fore with a 'supravisor' – a way of providing a high-speed, ultra-transparent means of abstracting the abstraction layer, giving a fully standardised platform under which different hypervisors can operate.
This supravisor need not be massively intelligent itself – it may turn out that all that is required is a means of carrying out fast V2V image translations and ensuring that a management console understands what the underlying environment is before provisioning.
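That minimal supravisor – detect the underlying hypervisor, translate the image, then provision – might look something like the following. Everything here is a hypothetical sketch (host names, hypervisor labels and formats are invented for illustration), not a description of any shipping product:

```python
# Hypothetical 'supravisor' sketch: before provisioning, the layer asks the
# target host which hypervisor it runs, then picks the matching image format.
# Hypervisor labels, formats and host names are illustrative only.

HYPERVISOR_FORMATS = {
    "esx": "vmdk",
    "hyper-v": "vhd",
    "xen": "xvda",
}

class Supravisor:
    def __init__(self, hosts):
        # hosts: mapping of host name -> hypervisor that host reports
        self.hosts = hosts

    def detect(self, host: str) -> str:
        """Ask the host what hypervisor it runs (stubbed as a lookup)."""
        return self.hosts[host]

    def provision(self, host: str, image: str) -> str:
        """Translate the image to the host's native format, then deploy."""
        fmt = HYPERVISOR_FORMATS[self.detect(host)]
        return f"deployed {image} as {fmt} on {host}"

s = Supravisor({"web01": "esx", "db01": "hyper-v"})
print(s.provision("web01", "app.img"))  # deployed app.img as vmdk on web01
print(s.provision("db01", "app.img"))   # deployed app.img as vhd on db01
```

The management console above the supravisor never needs to know which hypervisor sits on which host – which is precisely the standardised platform the article is arguing for.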
It may be something more – something that means that a single image can be used directly on top of a standardised layer. It may be that, as time goes on, a supravisor subsumes the existing virtualisation technologies already in use. The ones that could do this are the systems management vendors – the likes of IBM Tivoli, CA, HP, BMC and Microsoft – but will they?
All that is certain is that organisations need to ensure they can choose their virtualisation direction as they see fit, rather than feeling tied to a specific route where a decision made in 2008 may prove wrong in 2010. A one-solution decision may be OK for now, but the market for virtualisation tools is poised to explode in the coming months. Looking around for vendors who at least have a vision of where multi-virtualisation management is going may well pay dividends further down the line.