Servers have been a key part of the data centre for so long that it is easy to see them as an immutable part of the IT landscape. They might get a faster processor, more memory or more storage with each hardware generation, but the underlying architecture has changed remarkably little since the days of the first PC servers in the 1980s.
That could soon change, as developments in the data centre related to cloud computing and software-defined infrastructure are driving a move towards more modular and less monolithic servers. Rather than being the 'atomic' building block of the data centre, as they are today, tomorrow's servers may become just collections of hardware resources within a larger system that could span the entire facility.
Cloud computing aims to deliver an IT platform where applications and services can simply request the resources they need to run and have them provisioned on demand. However, server nodes represent discrete quantities of processor cores, memory and storage, and these often do not match up directly with the requirements of the application.
This mismatch leaves some resources underused, such as spare processor cores or memory, and because a powered-on server draws much of its energy whether or not its capacity is being used, that idle capacity translates into wasted power, a growing problem for large data centres with thousands of servers.
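To see how fixed node sizes strand capacity, consider a toy first-fit placement of virtual machines onto servers. This is a minimal sketch, not either project's actual allocator, and all the node and VM sizes are made up for illustration:

```python
# Toy first-fit placement of VMs onto fixed-size server nodes.
# All sizes are hypothetical, chosen only to illustrate stranded capacity.

nodes = [{"cores": 16, "ram_gb": 64} for _ in range(2)]   # rack total: 32 cores, 128 GB
vms = [{"cores": 10, "ram_gb": 40} for _ in range(3)]     # demand: 30 cores, 120 GB

for i, vm in enumerate(vms):
    for node in nodes:
        if node["cores"] >= vm["cores"] and node["ram_gb"] >= vm["ram_gb"]:
            node["cores"] -= vm["cores"]
            node["ram_gb"] -= vm["ram_gb"]
            break
    else:
        # The rack's aggregate spare capacity (12 cores, 48 GB) would cover
        # this VM, but no single node has enough of both resources left.
        print(f"VM {i} cannot be placed on any one node")

spare_cores = sum(n["cores"] for n in nodes)
spare_ram = sum(n["ram_gb"] for n in nodes)
print(f"stranded across the rack: {spare_cores} cores, {spare_ram} GB RAM")
```

Although the rack as a whole has more than enough capacity for all three workloads, the third VM cannot be placed, and 12 cores and 48GB of memory sit idle, split uselessly across the two nodes.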
Now, two projects are working on ways to "disaggregate" server nodes, so that the compute, storage and memory resources can be pooled across an entire rack and then served exactly as individual workloads require them. The first is dReDBox, a consortium involving the University of Bristol, which is funded through the EU's Horizon 2020 programme, while the other is Intel's Rack Scale Architecture initiative, which involves many leading server vendors.
The dReDBox project
The dReDBox programme is a three-year research project that aims to investigate how servers can be carved up more effectively, which for starters means re-examining the way processor chips and memory are connected.
"The main goal is to design and prototype what we call a disaggregated architecture employing pooled resources which you can plug and play and hook together as many as you like to create any computer architecture," Dr Georgios Zervas, senior lecturer in the Department of Electrical and Electronic Engineering at Bristol University, explained.
In existing computer architectures, memory is typically attached directly to a processor chip; these days each chip has multiple cores and many gigabytes of memory connected to it. This arrangement has worked well for years, but it is fairly inflexible, according to Zervas.
"When we come to the world of cloud and virtualised resources, when we deploy virtual machines or workloads, some workloads require a lot of processing power but not so much memory, and vice versa. But because of this direct integration between CPU and memory, some of the servers are going to end up with a lot of underused resources, whether that's memory or processors," he said.