Many organisations are looking at how they can take advantage of the surge in cloud computing, whether by implementing their own on-site infrastructure-as-a-service private cloud, using resources from a public cloud provider, or a combination of the two.
The whole notion behind cloud computing is to turn IT delivery into more of a utility-like service, using automation and self-service portals to offer quicker and easier access to the resources needed for new projects.
At a basic level, services are still delivered by applications like a web server or database running on one or more servers. The trick is enabling these to be provisioned quickly with minimal fuss, but in a way that allows the service to scale up if demand grows.
This typically involves running them inside virtual machine instances that can be created quickly from a template, rather than dedicated physical machines.
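The template model can be captured in a few lines. The sketch below is purely illustrative (the `Template` and `Service` classes are invented for this example, not any vendor's API): instances are cheap clones of a template, so adding capacity is a loop rather than a hardware order.

```python
from dataclasses import dataclass, field
from itertools import count

@dataclass(frozen=True)
class Template:
    """An image plus a sizing profile, cloned rather than built by hand."""
    name: str
    vcpus: int
    ram_gb: int

@dataclass
class Service:
    template: Template
    instances: list = field(default_factory=list)
    _ids = count(1)  # simple instance-name counter for the example

    def provision(self, n=1):
        """Clone the template n times: seconds of work, not a procurement cycle."""
        for _ in range(n):
            self.instances.append(f"{self.template.name}-{next(self._ids)}")

    def scale_to(self, target):
        """Grow the pool when demand rises."""
        if target > len(self.instances):
            self.provision(target - len(self.instances))

web = Template("web", vcpus=2, ram_gb=4)
svc = Service(web)
svc.provision(2)   # initial deployment from the template
svc.scale_to(5)    # demand grows: three more clones appear
print(len(svc.instances))  # 5
```

In a real private cloud the `provision` step would be an API call to the orchestration layer, but the shape of the operation is the same.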
However, the traditional way of building infrastructure, with a cluster of servers, networking and a storage area network all configured for a specific application, is not best suited for handling virtual machines.
A better-suited approach is to use banks of commodity servers with fast, locally attached storage, a model pioneered by cloud service operators such as Facebook and Google.
This idea has been taken up by firms such as Nutanix, which sells compute appliances based on x86 server hardware with integrated tiered storage and software to create a single storage pool across a cluster of nodes.
VMware has also taken up this idea with the Virtual SAN software for its vSphere platform (see diagram above), which creates a storage pool using clusters of servers fitted with hard drives and solid state disks.
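The essence of this pooled, tiered model can be shown in miniature. The following is an illustrative sketch only (not Nutanix's or VMware's actual software, and the node figures are invented): each node contributes its local SSD and HDD capacity, and the cluster presents the sum as one pool with the fast and bulk tiers kept separate.

```python
# Hypothetical cluster: three nodes with local flash and spinning disk.
nodes = [
    {"host": "node1", "ssd_gb": 800,  "hdd_gb": 8000},
    {"host": "node2", "ssd_gb": 800,  "hdd_gb": 8000},
    {"host": "node3", "ssd_gb": 1600, "hdd_gb": 12000},
]

def pool_capacity(nodes):
    """Aggregate per-node local disks into a single cluster-wide pool,
    keeping the fast (SSD) and bulk (HDD) tiers distinct."""
    return {
        "ssd_gb": sum(n["ssd_gb"] for n in nodes),
        "hdd_gb": sum(n["hdd_gb"] for n in nodes),
    }

print(pool_capacity(nodes))  # {'ssd_gb': 3200, 'hdd_gb': 28000}
```

Adding a node to the cluster simply grows both tiers of the pool, which is what makes the scale-out model attractive.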
This pooled approach is known as software-defined storage, and is also typified by Red Hat's Storage Server, which the firm has integrated with its OpenStack cloud distribution.
Meanwhile, servers are also changing. Vendors have started to add greater internal storage capacity, while higher density form factors such as micro-servers have appeared.
Dell's recently launched PowerEdge FX line emphasises flexibility, letting customers mix and match compute, storage and network modules to get converged infrastructure in a single enclosure.
While this converged approach is ideal for new build infrastructure, many organisations have to work with what they already have.
For this reason, EMC introduced its ViPR platform, which acts as an abstraction layer for storage arrays from multiple vendors, serving them through a single point of control and access.
But for full automation of the IT infrastructure, networking needs to be virtualised as well as the storage and compute functions. This is a thorny area, as there are different approaches to software-defined networking (SDN).
One approach, typified by VMware's NSX technology, is to let the hypervisor create virtual network connections between virtual machines, reducing the physical network to little more than pipes for forwarding data between host servers.
An alternative approach, favoured by HP, is the OpenFlow model. This is a protocol that can be used to dynamically configure physical switches and routers, typically under the control of a management or orchestration tool.
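The controller-and-switch split at the heart of this model can be sketched as a toy match/action flow table. To be clear, the field names and structure below are illustrative only, not the real OpenFlow wire format: the controller installs prioritised rules, and the switch forwards each packet according to the highest-priority match, punting unmatched traffic back to the controller.

```python
# Toy flow table in the OpenFlow spirit; rules are kept sorted by priority.
flow_table = []

def install_rule(match, action, priority=0):
    """Controller-side: push a match/action rule to the switch."""
    flow_table.append({"match": match, "action": action, "priority": priority})
    flow_table.sort(key=lambda r: -r["priority"])

def forward(packet):
    """Switch-side: apply the highest-priority rule whose match fields all
    equal the packet's; otherwise hand the packet to the controller."""
    for rule in flow_table:
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            return rule["action"]
    return "send-to-controller"

install_rule({"dst_ip": "10.0.0.5"}, "out-port-3", priority=10)
install_rule({"vlan": 20}, "drop", priority=5)

print(forward({"dst_ip": "10.0.0.5", "vlan": 20}))  # out-port-3 (wins on priority)
print(forward({"dst_ip": "10.0.0.9", "vlan": 20}))  # drop
print(forward({"dst_ip": "10.0.0.9", "vlan": 30}))  # send-to-controller
```

The point is the separation of concerns: policy lives in the controller, while the switch is reduced to fast rule lookup.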
Even without SDN, running virtual machines calls for changes in network architecture, with more east-west traffic between servers than the typical north-south flow in a traditional data centre.
Another trend is the so-called ‘cloud in a box’ approach to building cloud infrastructure. This sees a vendor or systems integrator supplying a rack of ready-built servers, storage, networking and software to slot right into a customer's data centre.
The Vblock products from the VCE consortium founded by VMware, Cisco and EMC are typical. VMware also has a separate initiative with its EVO:RAIL and EVO:RACK platforms, which package its software stack on hardware delivered by partners.
Microsoft has a similar product in the shape of its Cloud Platform System, which runs Windows Server 2012 R2 and the Windows Azure Pack on Dell hardware.
All of this is building to the key piece of any cloud deployment: the management layer that provides automation, monitoring and orchestration capabilities for the infrastructure and services.
VMware is regarded as the leader in this area, its vCloud Suite providing the broadest range of management and policy-based automation features. However, other cloud software vendors are working to catch up.
The open source OpenStack framework has matured quickly over the past four years, for example. This is proving popular because of its modularity, allowing it to plug into other software such as VMware's hypervisor.
OpenStack has some key industry backers, such as HP, which has based its Helion cloud platform on the system, and Intel, which is working on exposing sensor data from its Xeon server hardware to the management layer to enable better monitoring and management of infrastructure.
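The modularity that makes this plug-in flexibility possible follows a familiar pattern: the management layer codes against a small interface, and interchangeable drivers register themselves behind it. The sketch below shows the general pluggable-backend pattern only; the class and driver names are invented for illustration and are not OpenStack's real APIs.

```python
# Registry of available backends, populated by a decorator.
drivers = {}

def register(name):
    def wrap(cls):
        drivers[name] = cls
        return cls
    return wrap

class Hypervisor:
    """The narrow interface the rest of the stack depends on."""
    def start_vm(self, image):
        raise NotImplementedError

@register("kvm")
class KvmDriver(Hypervisor):
    def start_vm(self, image):
        return f"kvm booted {image}"

@register("vmware")
class VmwareDriver(Hypervisor):
    def start_vm(self, image):
        return f"vmware booted {image}"

# The operator picks a backend in configuration; nothing else changes.
backend = drivers["vmware"]()
print(backend.start_vm("ubuntu-14.04"))  # vmware booted ubuntu-14.04
```

Swapping the string in the configuration is all it takes to move the same cloud stack onto a different hypervisor, which is exactly the appeal described above.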
Microsoft has a potential advantage for companies with a large estate of Windows servers, as it has evolved its System Center management suite with cloud orchestration capabilities.
It also enables customers to use Active Directory to manage resources from the firm's Azure public cloud service as if they were on-premises.
Worthy of mention is Red Hat, which has supplemented its OpenStack distribution with tools called Satellite and CloudForms, which between them provide a self-service portal, chargeback and metering for a private cloud and for resources on public clouds such as Amazon Web Services.
Overall, building your own cloud is not a trivial task, whether you start from scratch or adapt existing infrastructure.
Whichever path is taken, organisations may find that their choice of platform hinges on existing investments, such as whether they are already using VMware for virtualisation, for example.