Containers have been one of the big themes in the world of IT this year, with virtually every cloud and infrastructure vendor rushing to add support for container technology to their platform or service.
Some in the industry are even predicting that containers will soon displace virtualisation as the preferred way of running workloads.
While virtual machines have been in use for years and are regarded as a relatively mature technology, containers seem to have come from nowhere in the past few years, even though the underlying technology has actually been around for much longer.
Containers have been getting attention for several reasons: they are more lightweight than virtual machines, so more of them can run on a single host server, and they are quicker to provision than a new virtual machine instance. On the downside, containers lack some key capabilities of virtual machines, such as secure isolation between instances on the same server.
Many of the gaps between virtualisation and containers are being addressed, at least in the Linux world, such that there will be few good reasons to use virtualisation within a couple of years, according to Richard Davies, chief executive of cloud service provider ElasticHosts, which operates both cloud servers and a container-based service called Springs.io.
"Linux containers are a kernel technology, and things like Docker are just a tool for using them, but the underlying technology is still maturing," Davies (pictured) said in an interview with V3.
"We see the underlying kernel technology continuing to evolve, and continue to get closer and closer to providing the same full functionality you would get in a VM, and I think in another two years we will have reached that point," he added.
The key features of the Linux kernel that have been used to support containers are namespaces, which allow for isolation of processes running on the same kernel, and cgroups (control groups), used to impose limits for resources such as CPU, memory and disk that a process or group of processes can use.
"At the kernel level, a container is basically a combination of namespace rules that restrict the processes in the container so they don't have any visibility outside their namespace, and a set of cgroup rules that quota how much CPU, how much RAM, how much disk I/O a container can use from the system resources," Davies explained.
To a piece of software, running inside a container is pretty much the same as running inside a virtual machine, but currently there are restrictions because of the way containers effectively share the underlying host operating system.
"If you want to mount network file systems, you haven't historically been able to do that inside a container. If you want to load kernel modules in order to support something like hardware encryption, you haven't been able to do that inside a Linux container, and so containers have been suitable for 80 to 90 percent of all Linux workloads, but not all the edge cases," Davies said.
"What's happening at the low level with the Linux kernel teams is they are going round and filling in these capability gaps, so the behaviour becomes closer and closer to that of physical hardware or a virtual machine, where you have total control and can do everything," he added.
This means that, for Linux at least, containers are becoming a more complete replacement for virtual machines with each kernel update, and soon there will be few good reasons to choose a virtual machine over a container.
Among the reasons for this is that a container need not be much larger than the application it contains, Davies explained. A virtual machine, by contrast, typically needs a gigabyte of memory allocated just to boot, and that gigabyte stays allocated, and paid for, regardless of how much of it is actually used, even if the virtual machine sits there doing nothing.
With this in mind, Davies believes that containers will replace virtual machines, at least in the Linux sphere.
However, while Linux represents the lion's share of the service provider market, it isn't the only game in town, and this highlights a shortcoming of the container approach: containers are tied to a specific operating system, whereas each virtual machine carries its own, allowing Linux and Windows workloads to run side by side on the same cloud, or even the same server.
While containers are taking off in the Linux arena, virtual machines still rule in the Windows world. This is partly down to VMware's early success as a server consolidation platform in the data centre, but also because container support has largely been lacking on Windows servers, with the exception of the Virtuozzo platform that Parallels supports for service providers.
That could change in the future now that Microsoft is set to introduce Windows Server Containers in Windows Server 2016 and also on its Azure platform next year. But it is early days for this technology, and thus difficult to say whether Microsoft will be able to support Windows Server Containers with all the capabilities that would also make them a direct replacement for running workloads inside virtual machines.
As for ElasticHosts, the firm says that its Springs.io service offers more of a true container service than some of the major providers, as it runs containers natively on the host server rather than inside a virtual machine.
"If you look at Amazon's Docker service or Google's Docker service, they're not actually using Linux Containers as the isolation mechanism; they're running virtual machines and they are then running Docker inside those," claimed Davies.
"Because there are no VMs involved on our platform, we can do on-demand scaling of running instances, so instead of customers having to stop and start instances to get extra capacity, they can scale an instance up or down automatically as it runs, based on the capacity utilisation inside it," he said.
And with Linux accounting for at least 70 percent of the service provider market, cloud-hosted workloads could soon be dominated by container-based platforms similar to the one ElasticHosts operates.