HGST has showcased a new memory architecture that it claims could deliver the performance and scalability required for data centre computing, especially applications using large in-memory datasets.
Demonstrated at the Flash Memory Summit 2015 in Santa Clara, HGST's Persistent Memory Fabric combines a high-speed form of Phase Change Memory (PCM) with Remote Direct Memory Access (RDMA) technology from network firm Mellanox. The result is an in-memory compute cluster that delivers a large total memory space while consuming less power than standard DRAM would require.
However, this is currently only a demonstration platform from HGST Research, and the firm has given no indication of when commercial products will bring the technology into the data centre.
HGST claims that the PCM technology is fast enough to deliver DRAM-like performance, but at a lower total cost of ownership thanks to power savings. This is because PCM is non-volatile, storing bits as a change in the physical state of the memory material, which does away with the need to continually refresh the memory cells as DRAM requires. Refreshes can account for 20-30 percent of the total energy used by a server, according to HGST.
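As a rough illustration of what that saving could mean, the back-of-envelope sketch below applies HGST's stated 20-30 percent range to a server's power draw. The 400W figure is a hypothetical assumption chosen purely for illustration, not a number from HGST.

```python
# Back-of-envelope sketch of HGST's refresh-power claim. The 400 W
# average server draw is a hypothetical assumption for illustration;
# the 20-30 percent refresh share is HGST's stated range.

server_power_w = 400.0  # hypothetical average server power draw, in watts

for refresh_share in (0.20, 0.30):  # HGST's stated range
    saved_w = server_power_w * refresh_share
    saved_kwh_per_year = saved_w * 24 * 365 / 1000  # watts -> kWh per year
    print(f"refresh share {refresh_share:.0%}: ~{saved_w:.0f} W saved, "
          f"~{saved_kwh_per_year:.0f} kWh per server per year")
```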
HGST gave a PCM demonstration last year, claiming that its implementation was capable of three million input/output operations per second when configured as a solid-state drive.
RDMA completes the picture because it allows networked computers to access each other's memory directly, effectively combining the memory of a cluster of compute nodes into a single large pool.
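To make the pooling idea concrete, here is a minimal toy sketch of a flat address space striped across the memory regions of several nodes. It is an illustration only: real RDMA performs the remote reads and writes in the network adapter without involving the remote CPU, and all of the names below are hypothetical.

```python
# Toy model of the pooling idea behind a memory fabric: each node
# contributes a region of memory, and a cluster-wide flat address
# space maps global offsets onto (node, local offset) pairs.

class Node:
    def __init__(self, name: str, size: int):
        self.name = name
        self.memory = bytearray(size)  # this node's contributed region

class MemoryPool:
    """Maps a flat global address space across several nodes."""

    def __init__(self, nodes):
        self.nodes = nodes
        self.sizes = [len(n.memory) for n in nodes]

    def _locate(self, addr: int):
        # Translate a global address into (node, local offset).
        for node, size in zip(self.nodes, self.sizes):
            if addr < size:
                return node, addr
            addr -= size
        raise IndexError("address outside pooled memory")

    def read(self, addr: int, length: int) -> bytes:
        # Accesses spanning a node boundary are not handled in this toy.
        node, offset = self._locate(addr)
        return bytes(node.memory[offset:offset + length])

    def write(self, addr: int, data: bytes) -> None:
        node, offset = self._locate(addr)
        node.memory[offset:offset + len(data)] = data

pool = MemoryPool([Node("node0", 1024), Node("node1", 1024)])
pool.write(1500, b"hello")   # global address 1500 lands in node1's region
print(pool.read(1500, 5))    # b'hello'
```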
"Taking full advantage of the extremely low latency of PCM across a network has been a grand challenge, seemingly requiring entirely new processor and network architectures and rewriting of the application software," explained Zvonimir Bandic, manager of storage architecture at HGST Research.
"Our big breakthrough came when we applied the PCI Express peer-to-peer technology, inspired by supercomputers using general purpose GPUs, to create this low latency storage fabric using commodity server hardware. This demonstration is another key step enabling seamless adoption of emerging non-volatile memories into the data centre."
Mellanox supports this technology in its ConnectX data centre network adapters, and the two firms claim an access latency of under two microseconds for 512-byte reads and throughput exceeding 3.5GB/s for 2KB block sizes using RDMA over InfiniBand.
"Mellanox is excited to be working with HGST to drive persistent memory fabrics," said Mellanox vice president of marketing Kevin Deierling.
"With this demonstration, we were able to leverage RDMA over InfiniBand to achieve record-breaking round-trip latencies under two microseconds. In the future, our goal is to support PCM access using InfiniBand and RDMA over Converged Ethernet to increase the scalability and lower the cost of in-memory applications."
HGST is not the only firm experimenting with novel memory technologies. Intel and Micron unveiled a new memory architecture last month called 3D XPoint (3D cross-point) that blurs the boundaries between RAM and storage, offering high-density, non-volatile storage of data with access speeds closer to those of main memory.