This week has seen a number of announcements relating to cloud computing from leading vendors in the field such as Red Hat, Canonical and the OpenStack project. And one subject that kept cropping up each time was Containers.
Containers have been around for some time, but seem to be garnering a lot more attention lately. The basic concept is to partition off some of a server's resources in order to create an isolated instance, somewhat akin to a virtual machine.
Unlike virtual machines, multiple Containers all rely on the same host operating system for services. Each Container can be thought of as a separate instance of whichever operating system is installed on the host server.
This approach has the advantage of being less demanding of system resources, and therefore allows a higher density of instances than is possible with virtual machines. In the cloud and in many corporate data centres, density has become a major issue, as firms struggle with rising energy costs and the need to cram more and more capacity into the same space.
For this reason, Parallels has been offering its Virtuozzo Containers platform for many years, allowing hosting firms to carve up Windows or Linux servers into many more customer instances than would be possible with the hypervisor approach.
However, the downside of Containers is that they are not as flexible as virtual machines. With LXC, for example, which builds on isolation features in the Linux kernel, you are restricted to running Linux applications inside a Container.
For this reason, Containers are unlikely to replace virtual machines, but they are already being offered alongside them by some cloud providers, notably ElasticHosts, which launched its Elastic Containers service earlier this month.
Some in the Linux community, such as Red Hat and the Docker project, are working with Container technology to enable greater portability of applications between clouds. Because a Container may hold little more than the application itself, it is potentially much more feasible to migrate one from an on-premises private cloud out to a public cloud and back again than is the case with virtual machine images.
Meanwhile, some workloads, such as Hadoop, are best delivered by provisioning the necessary resources onto bare metal. For this reason, the latest Icehouse release of the OpenStack cloud platform showcases a module called Ironic to support this. Ironic is still in "incubation" in this release, but will be refined as development proceeds.
All these threads seem to indicate that the successful cloud platforms of the future will be more diverse than at present, able to support a variety of technologies best suited for differing workloads and use cases.
And that brings to mind a conversation I once had with one of the founders of Parallels, who said that virtualisation was basically a dead end. It only exists, he said, because we haven't yet figured out how to create applications that can scale elegantly to make use of a large number of processors. Perhaps when we figure that out, cloud computing really will take off.
Daniel Robinson is technology editor at V3, and has been working as a technology journalist for over two decades. Dan has served on a number of publications including PC Direct and enterprise news publication IT Week. Areas of coverage include desktops, laptops, smartphones, enterprise mobility, storage, networks, servers, microprocessors, virtualisation and cloud computing.