In their quest to understand the true nature of the universe, and to grapple with the mysterious dark matter and dark energy that together seemingly account for 95 percent of it, scientists have built ever more powerful and ingenious ways of studying the skies.
Over the coming years, this will allow researchers to construct simulations of the universe at previously inconceivable levels of detail. But to do so, those researchers will need mind-bendingly powerful computers and algorithms capable of extreme scaling. Now a US team of astrophysicists have published their plans to create the most detailed universe simulation ever.
The so-called Hybrid/Hardware Accelerated Cosmology Code (Hacc) provides a novel framework for cosmological simulation, which the team, comprising researchers from the Argonne, Los Alamos and Lawrence Berkeley US national laboratories, have shown is capable of generating simulations with 3.6 trillion particles. That is “significantly bigger than any cosmological simulation yet performed,” the team claimed.
To demonstrate the validity of their Hacc framework, the team grabbed some time on one of IBM's third-generation BlueGene supercomputers, the BG/Q. A single BG/Q rack contains 1,024 nodes, each with 16GB of DDR3 memory and a BG/Q compute chip comprising 17 augmented 64-bit PowerPC A2 cores.
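The hardware figures above can be sanity-checked with some quick arithmetic. A minimal sketch, assuming (as on the BG/Q) that 16 of each chip's 17 cores run application code while the remaining core is reserved for system services:

```python
# Back-of-the-envelope check of the BG/Q configuration described above.
# Assumption: 16 of the 17 A2 cores per chip are available to applications,
# which is how the 1,572,864-core figure quoted later is reached.

NODES_PER_RACK = 1024
COMPUTE_CORES_PER_CHIP = 16   # of 17 total; one core handles OS services
TOTAL_CORES = 1_572_864       # core count quoted for the full Hacc run

cores_per_rack = NODES_PER_RACK * COMPUTE_CORES_PER_CHIP
racks_used = TOTAL_CORES // cores_per_rack

print(cores_per_rack)  # 16384 compute cores per rack
print(racks_used)      # 96 racks
```

Ninety-six racks corresponds to the full Sequoia installation at Lawrence Livermore.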
They were able to show that Hacc achieved massively scalable performance – 13.94 Pflops (quadrillions of calculations per second) at 69.2 percent of peak and 90 percent parallel efficiency on 1,572,864 cores.
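The relationship between the quoted sustained rate and the machine's theoretical peak is simple division; a sketch of that arithmetic, using only the figures given in the article:

```python
# Sustained performance as a fraction of theoretical peak:
# peak = sustained / fraction_of_peak.

sustained_pflops = 13.94   # sustained rate achieved by Hacc
fraction_of_peak = 0.692   # quoted 69.2 percent of peak

peak_pflops = sustained_pflops / fraction_of_peak
print(round(peak_pflops, 1))  # ~20.1 Pflops theoretical peak
```

That implied peak of roughly 20.1 Pflops matches the full Sequoia system.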
Those are big numbers, but for comparison, the IBM BlueGene Sequoia at the Lawrence Livermore National Laboratory was until recently ranked as the most powerful supercomputer in the world. In June 2012, it posted a Linpack benchmark of 16.32 Pflops.
Still, with that sort of power, the team are confident that Hacc will be capable of creating detailed simulations that could use measurements of weak gravitational lensing to map the distribution of dark matter throughout the universe, or of the distribution of galaxies and clusters, from the largest to the smallest scales.
The team are now overseeing the acceptance tests necessary to get Hacc operating on Livermore's Sequoia as well as on the Mira supercomputer at the Argonne National Laboratory.
The team's work was presented at the SC12 conference in Salt Lake City, Utah this week.