James Hamilton, Amazon Web Services' (AWS) vice president and distinguished engineer, has rejected Oracle co-CEO Mark Hurd's claim that Oracle doesn't need to invest as much building cloud data centres as AWS, Google and Microsoft because it uses better hardware and software.
Hurd was responding to questions suggesting that Oracle was being outspent on cloud infrastructure, investing just $1.7bn on data centres to support its cloud ambitions, compared to the $31bn that Amazon, Microsoft and Google have collectively spent.
"We try not to get into this capital expenditure discussion. It's an interesting thesis that whoever has the most capex [capital expenditure] wins," said Hurd. "[But] if I have two-times faster computers, I don't need as many data centres. If I can speed up the database, maybe I need one-fourth as many data centres."
But in a response posted on his personal blog, Hamilton claimed that Hurd's argument didn't stack up.
"I don't believe that Oracle has, or will ever get, servers two-times faster than the big-three cloud providers. I also would argue that ‘speeding up the database' isn't something Oracle is uniquely positioned to offer," he wrote.
"All major cloud providers have deep database investments but, ignoring that, extraordinary database performance won't change most of the factors that force successful cloud providers to offer a large multinational data centre footprint to serve the world," wrote Hamilton.
Hurd's comment, though, raises the question of how many data centres a global cloud computing services company ought to build, he added.
"The most efficient number of data centres per region is one. There are some scaling gains in having a single, very large facility.
"But one facility will have some very serious and difficult-to-avoid full-facility fault modes like flood and, to a lesser extent, fire. It's absolutely necessary to have two independent facilities per region and it's actually much more efficient and easy to manage with three."
With three facilities, even if one goes down for any reason, the provider can still offer full redundancy to customers, for example, and can spread the load more easily.
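The redundancy arithmetic behind Hamilton's preference for three facilities can be sketched quickly. This is an illustrative calculation, not taken from his post: if a region must survive the loss of any one of its N facilities, each facility can only be loaded to (N - 1)/N of its capacity.

```python
# Illustrative sketch (not from Hamilton's post): with N independent
# facilities per region, absorbing the failure of any one facility
# means each can be loaded to at most (N - 1) / N of its capacity.

def max_utilisation(n_facilities: int) -> float:
    """Fraction of each facility's capacity usable while still
    absorbing the failure of one whole facility."""
    if n_facilities < 2:
        raise ValueError("need at least two facilities for redundancy")
    return (n_facilities - 1) / n_facilities

for n in (2, 3, 4):
    print(n, f"{max_utilisation(n):.0%}")
```

With two facilities each must idle half its capacity as failover headroom; with three, each can run at roughly two-thirds load, which is one reason three facilities are "much more efficient" than two.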
At the same time, he added, a data centre cannot be scaled indefinitely due to issues of power consumption and bandwidth.
"The limiting factor is how big of a facility can an operator lose before the lost resources and the massive network access pattern changes on failure can't be hidden from customers.
"AWS can easily build 100-megawatt facilities, but the cost savings from scaling a single facility without bound are logarithmic, whereas the negative impact of 'blast radius' is linear. When facing seriously sub-linear gains for linear risk, it makes sense to cap the maximum facility size," argued Hamilton.
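The shape of that trade-off can be illustrated with hypothetical numbers (the logarithmic cost model here is an assumption for illustration; Hamilton gives no formula): growing a facility from 32MW to 100MW roughly triples the capacity lost in a full-facility fault, while a logarithmic efficiency gain improves by only about a third.

```python
import math

# Hypothetical model of Hamilton's argument: cost-efficiency gains
# from scaling a single facility are roughly logarithmic in its size,
# while the "blast radius" (capacity lost in a full-facility fault)
# grows linearly with size.

def scaling_gain(size_mw: float) -> float:
    # assumed logarithmic gain, arbitrary units; illustration only
    return math.log(size_mw)

def blast_radius(size_mw: float) -> float:
    # megawatts of capacity lost if the whole facility fails
    return size_mw

for size_mw in (32, 100):
    print(size_mw, round(scaling_gain(size_mw), 2), blast_radius(size_mw))
```

Under this toy model, the 32MW-to-100MW move buys a ~33 per cent efficiency gain in exchange for a ~3x larger fault, which is the "seriously sub-linear gains for linear risk" trade Hamilton describes.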
He added: "Over time this cap may change as technology evolves but AWS currently elects to build right around 32MW. If we instead built to 100MW and just pocketed the slight gains, it's unlikely anyone would notice. But there is a slim chance of full-facility fault, so we elect to limit the blast radius in our current builds to around 32MW."
Latency also remains a big issue, especially if a provider concentrates their data centres too heavily in either one region or one geographic area, Hamilton suggested.
"The speed of light remains hard to exceed and the round-trip time just across North America is nearly 100 milliseconds," wrote Hamilton. That is why data centres still get built in expensive locations, such as London, New York, Los Angeles, Hong Kong and Tokyo.
"Low latency is a very important success factor in many industries so, for latency reasons alone, the world will not be well served by a single data centre or a single region.
"Actually, it turns out that the speed of light in fibre is about 30 per cent less than the speed of light in other media, so it actually is possible to run faster [than fibre-based communications]. But, without a more fundamental solution to the speed of light problem, many regions are the only practical way to effectively serve the entire planet for many workloads."
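A rough sanity check of the latency figures is straightforward. The path length here is an assumption of mine (roughly 5,500km coast to coast), not a number from Hamilton's post; light in fibre propagates at about two-thirds of its vacuum speed.

```python
# Back-of-the-envelope propagation delay across North America.
# PATH_KM is an assumed one-way fibre distance, not Hamilton's figure.

C_VACUUM_KM_PER_MS = 299_792.458 / 1000  # ~300 km per millisecond
FIBRE_FRACTION = 0.67                    # fibre is ~1/3 slower than vacuum
PATH_KM = 5_500                          # assumed one-way distance

one_way_ms = PATH_KM / (C_VACUUM_KM_PER_MS * FIBRE_FRACTION)
round_trip_ms = 2 * one_way_ms
print(f"round trip ~ {round_trip_ms:.0f} ms")
```

Propagation alone gives a round trip in the mid-50s of milliseconds; real routes are longer than the straight-line distance and add switching and equipment delay, which is how measured round-trip times approach the "nearly 100 milliseconds" Hamilton cites.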
Hamilton also suggested that networking ecosystem inefficiencies play a role, with many regions "underserved by providers that have trouble with the capital investment to roll out the needed capacity".