The European research centre Cern is to build a huge data store as part of its quest to understand the most basic particles in the universe.
The organisation runs the 27km Large Hadron Collider, the biggest scientific instrument in the world, built over 10 years at a cost of €5bn.
This ring beneath the border of Switzerland and France will collide beams of particles travelling at close to the speed of light, with four detector systems around the ring recording the results.
The collisions will generate an enormous amount of data, up to 500Mbps, and the Collider will run for 120 days per year, producing 15 petabytes of information annually. Not all of this will be stored, but at least eight petabytes will be kept.
"This poses some interesting storage problems," said Charles Curran, manager of Cern's IT department.
"No-one in the Linux community is writing code that allows you to handle that amount of data. We could have written it ourselves but that would seem to be a very bad idea as it takes a lot of time and would never be completed."
Cern has overcome the problem using an application called Castor HSM, a hierarchical storage manager that runs on IBM and Sun Microsystems disk and tape storage, at a cost of more than four million Swiss francs per year in media and storage.
But eight petabytes is just the start, and the team plans to expand further until they find the Higgs Boson particle they are looking for.
The Higgs Boson, sometimes known as the God particle, is the holy grail of particle physics. It has never been irrefutably seen, but if discovered it would answer one of the key questions of the universe: why objects have mass.