Application problems in the cloud are often solved by purchasing extra capacity, but this can lead to huge overspends; development managers should be made responsible for controlling those costs.
This is one of the arguments made by David Stanley, head of platform delivery at Trainline, speaking at Computing's recent Cloud and Infrastructure Summit.
Stanley began by explaining how cloud costs can quickly spiral out of control.
"Cloud is an inherently unstable environment for your applications. I come from an infrastructure/operational background, and we've known for years that infrastructure hides application errors. You just can't rely on a steady stream of IOPS," he said.
Stanley added that over the course of his career he has seen a number of occasions when additional corporate funds had to be spent on AWS capacity to work around application problems.
"And if you start to see costs increased by $2,000 in a month, people start to ask why. So the role of the dev manager is changing to be more accountable for these costs," he explained.
"We sit with them every week to ask why we need this many machines, and of that size. Previously, the dev manager wouldn't care about that. So the bigger cultural change to go through after you've made all of your DevOps changes is to focus on costs."
Stanley said that in his experience some dev managers have taken well to the new responsibility, while others have found it a burden.
"One manager said he wasn't going to spend time cost saving, someone else should do that for him, and that's just not what we're looking for any more," he added.
Speaking at the same event, Simon Hazlitt, co-founder of financial services firm Majedie Asset Management, argued that the biggest risk of unchecked cloud consumption is bankruptcy.