The OpenStack project is set to release the latest incarnation of its cloud computing framework this week, focusing on management and scalability but also adding new features such as official support for deploying Hadoop on OpenStack for the first time.
Codenamed Juno, the tenth release of the open-source cloud software is officially due on 16 October.
It comes as the platform is gathering momentum, with industry giant HP putting its full support behind OpenStack for its cloud strategy, while even VMware has made available an OpenStack build optimised to operate with its hypervisor and other software components.
In an interview with V3, OpenStack Foundation executive director Jonathan Bryce and chief operating officer Mark Collier disclosed the progress that OpenStack has been making, some of the key features coming up in the Juno release, and where the platform is heading.
According to Bryce, there has been a real ramp-up in the number of production deployments of OpenStack over the past year, in contrast to previous years when it was more often run as a pilot deployment while customers tested it and metaphorically kicked the tyres.
Meanwhile, the Juno release will see some key new features, but a lot of the updates are focused on delivering management tools and greater operational stability, he said.
"From a feature standpoint, it's pretty well-baked since Icehouse so there are a lot of things in this release that make it easier to manage and upgrade and to scale-out the OpenStack environment you are running," he explained.
Perhaps the most interesting new feature in Juno is the data processing service, known as Sahara (formerly Savanna), which is set to enable users to deploy Hadoop on OpenStack infrastructure to run big data processing tasks.
"This will be the first time that the data processing project is actually included in the OpenStack release," said Bryce.
"It's a service that allows you to create and manage Hadoop environments and scale them out across an OpenStack cloud.
"So you can basically spin up a small Hadoop environment, add nodes to it, increase the capacity of it, schedule jobs and track those jobs, and do all of that through OpenStack APIs."
This fits in neatly with the growing interest in big data and analytics in business, and Hadoop in particular, as Bryce pointed out.
"If you look at net new IT spend right now, one of the areas that has been growing a lot is around data analytics and big data, so that's been a pretty successful use case for OpenStack, and now it's built-in as a feature of the system," he said.
Meanwhile, another big area of opportunity for OpenStack is in the networking and communications industry, according to Collier, where network function virtualisation is being used to provide greater operational flexibility and reduce the cost of the infrastructure needed to support key services.
"The telco space is one of the newest use cases emerging, where it's not so much that they are building a cloud in the traditional sense of a public cloud, instead what they are trying to do is virtualise these core functions of their whole network as they try to modernise their infrastructure," he explained.
Traditionally, telecoms firms have relied on very expensive equipment designed to deliver a single, fixed function, and changing the capabilities or services offered by a network operator typically required a forklift upgrade of the hardware.
"That model makes it very difficult to make upgrades. So they are realising that if they can take commodity server hardware off the shelf and move to more of a software model, they can then push out new features, in terms of how the network operates and around quality of service. It means they can make services dramatically cheaper, move faster, and push out new standards sooner," Collier said.
However, for this to happen, some work still needs to be done on components such as the Nova compute module and the underlying hypervisor (typically Linux KVM) to deliver the real-time response required for many telecoms services.
"You need to guarantee that a packet can get somewhere by a certain time, not even milliseconds late, in order to deliver quality of service for audio and video," he explained, adding that "a lot people are working with KVM to push those changes".
Other capabilities under incubation at OpenStack include a domain name system (DNS) management service, known as Designate, which will provide administrators with APIs to manage different kinds of DNS back-end infrastructure. Also in the pipeline is a file system service to complement the object and block storage provided by the Swift and Cinder modules.
"OpenStack has block and object storage support, but if you are looking for access to NFS or CIFS file shares, you currently need to create a block device, mount it inside a virtual machine, and export it as a share," Bryce explained.
"What we're working on will provide a way to talk to different back ends that support those protocols and manage file shares that can be running on Netapp, Linux servers, Windows, or whatever.
"What we're seeing is the [OpenStack] community building those higher level services that customers need, that make use of the basic building blocks, and we'll continue to see that as we evolve."