Key-value systems can be used in an innovative way to give high-performance computing (HPC) systems greater flexibility, scalability and performance, a new study by researchers at Virginia Tech has claimed.
In the modern world, high performance computing (HPC) systems, or supercomputers, play a significant role in computational science. Researchers use these systems to solve complex queries involving huge amounts of data; for example, in quantum mechanics, climate research and molecular modelling. Supercomputers can process such queries in very little time; modern supercomputers can perform nearly a quadrillion floating point operations per second (FLOPS), also known as a petaflop.
However, the storage platforms needed to build such HPC data systems have been limited by a framework that forces users to choose between high availability and customisation of features.
Now, researchers at Virginia Tech claim to have developed a new framework called BespoKV that might one day enable scientists to build supercomputers able to perform one billion billion calculations per second (exascale computing).
BespoKV is based on the concept of key-value (KV) systems, which store important data on fast memory-based storage rather than on slower disks.
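To illustrate the general idea (this is not BespoKV's actual interface; the class and method names below are invented for illustration), a minimal in-memory KV store keeps records in a hash map in RAM, so reads and writes avoid disk I/O entirely:

```python
class InMemoryKVStore:
    """A minimal in-memory key-value store: data lives in a dict (RAM), not on disk."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        """Store a value under a key, overwriting any previous value."""
        self._data[key] = value

    def get(self, key, default=None):
        """Fetch the value for a key, or a default if the key is absent."""
        return self._data.get(key, default)

    def delete(self, key):
        """Remove a key if present; silently ignore missing keys."""
        self._data.pop(key, None)


store = InMemoryKVStore()
store.put("user:42", {"name": "Ada"})
print(store.get("user:42"))  # {'name': 'Ada'}
```

Because every operation is a hash-table lookup in memory, access times are orders of magnitude lower than for disk-backed storage, which is what makes KV systems attractive for HPC workloads.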
According to the researchers, BespoKV's unique selling point is its ability to create several KV stores with desirable features. It takes a ‘datalet’ (a single-server KV store) and immediately turns it into a ready-to-use distributed KV store.
The researchers also claim that BespoKV eliminates the need to redesign a system from scratch to complete a specific task, saying: ‘A developer can drop a datalet into BespoKV and offload the "messy plumbing" of distributed systems to the framework.’
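The division of labour the researchers describe can be sketched as follows. This is a hypothetical illustration of the pattern, not BespoKV's code, and all names are invented: the developer writes only a single-server ‘datalet’ with put/get operations, while a framework layer supplies the distributed machinery (here, simple replication across datalets) on the developer's behalf:

```python
class Datalet:
    """The only piece the developer writes: a single-server KV store."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)


class ReplicatedStore:
    """Hypothetical framework layer: turns one datalet design into a
    distributed KV store by replicating every write to all replicas."""

    def __init__(self, datalet_factory, replicas=3):
        # The framework instantiates the datalet once per replica node.
        self._nodes = [datalet_factory() for _ in range(replicas)]

    def put(self, key, value):
        # The 'messy plumbing': fan each write out to every replica.
        for node in self._nodes:
            node.put(key, value)

    def get(self, key):
        # Read from any replica; all hold the same data after a put.
        return self._nodes[0].get(key)


store = ReplicatedStore(Datalet, replicas=3)
store.put("sensor:1", 98.6)
print(store.get("sensor:1"))  # 98.6
```

The datalet knows nothing about replication, consistency or fault tolerance; swapping the framework layer (for example, sharding instead of replication) would change those properties without rewriting the datalet, which is the kind of flexibility the study claims.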
This study is relevant to industries that process large amounts of data, such as large credit card firms, film streaming websites and social media.
"Developers from large companies can really sink their teeth into designing innovative HPC storage systems with BespoKV," said Ali Butt, professor of computer science.
"Data-access performance is a major limitation in HPC storage systems and generally employs a mix of solutions to provide flexibility along with performance, which is cumbersome. We have created a way to significantly accelerate the system behaviour to comply with desired performance, consistency, and reliability levels."
Findings of the study are being presented today at the Association for Computing Machinery (ACM)/IEEE Supercomputing Conference in Dallas, Texas.