Yes, it is possible to measure the value being delivered. It's not perfect, because nothing in the real world is, but it's a lot better than having nothing. We've tried it out on clients, and they say it works. You just have to make sure that you do the appropriate thing.
In your approach, the key is to use the appropriate technique for the type of expenditure. You believe that all IT expenditure can be classified into just four types. Can you take us through them?
The first one is commodity services. That's pretty straightforward: email, the telephone, lighting and heating are all examples. The test is [whether] value is being delivered, and you can tell just by taking the service away. If you take it away and things grind slower and slower - or stop - then clearly it was delivering value.
But you don't know what the value is. So what you do next is look at service level agreements (SLAs). You find the service that customers want, and then all you have to do is deliver that in the cheapest way available.
So you ask the users to define the service they want?
Yes, and there's the rub. What we found, amazingly, when we talked to IT directors is that customers often don't know what they want. So the IT guys make the stuff up - they invent it. They say 'We think the users will accept this,' and then that's their own internal SLA.
Is there any tendency to over-specify; to say that you must have incredible response times or other things that you don't actually need?
You definitely don't under-specify, because if you do people get pissed off with you. So they always err on the generous side - that's another problem.
With something such as email, what would matter is how many hours downtime per year the users could tolerate. People might say they don't want any downtime ever.
Then you'd say 'If you don't want any downtime ever it will cost you so much, but if you are prepared to tolerate one hour's downtime a month, then we can get this for a quarter of the price. So what do you want?' That's the loop you have to go through. But many IT departments just don't go through that loop at all.
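The loop described above is essentially availability arithmetic. A minimal sketch, with the cost figures invented purely for illustration (the interview gives only the "quarter of the price" ratio):

```python
# Sketch of the downtime/cost trade-off described above.
# All cost figures are hypothetical, purely for illustration.

HOURS_PER_MONTH = 30 * 24  # approximate hours in a month

def availability(downtime_hours_per_month):
    """Fraction of the month the service is up."""
    return 1 - downtime_hours_per_month / HOURS_PER_MONTH

# Option A: "no downtime ever" at full (assumed) price.
cost_zero_downtime = 100_000
# Option B: tolerate one hour's downtime a month at a quarter of the price.
cost_one_hour = cost_zero_downtime / 4

print(f"1h/month downtime -> {availability(1):.4%} availability "
      f"for {cost_one_hour:,.0f} instead of {cost_zero_downtime:,.0f}")
```

Putting the two options side by side like this is the point of the loop: users see what each tolerance level costs before they choose.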
Your second category is special processes, where the users are getting something that allows them to work in their company's own special way. Is it easier to find out if IT is adding value here?
This is the hardest one of the lot. The trouble is you have nothing to compare it with. You get this with SAP applications. People put in SAP with the whole thing 'parameterised' so it fits what they want, and then they say 'Are we paying too much for all this?'
The only way you'd really know is if you could compare it with someone else who is doing an identical thing. But you can't, because there is almost nobody doing an identical thing.
So what can you do?
First, you again require SLAs. If it's special, it's supporting a special process that is important to the way that you do business. So if you were Dell, the special process might be producing the online selling capability. There has to be a minimum level of service that this system has to produce. If you produce any more than that, then you could probably have done it cheaper - you could have knocked something out.
The other thing is that, because it's special, there must be a critical success factor [CSF] associated with it. [CSFs are defined as the small number of vital factors that must be satisfied for the organisation as a whole to prosper.] An example for someone such as Dell might be being able to sell more than 50 per cent of its goods online.
So not only does the SLA have to be met, but the system has to enable this other thing - the CSF - to happen. So you have these two things to measure. The problem is that many companies don't have SLAs, and may not know how to work out their CSFs.
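The two-part test described here - the SLA must be met *and* the CSF achieved - could be sketched as follows. The function name and all figures are illustrative assumptions; the 50 per cent online-sales threshold is the interview's own Dell-style example:

```python
# Minimal sketch of the interview's two-part test for a special process:
# the system must meet its SLA and also enable the critical success factor.
# Function name and all figures are illustrative, not from any real system.

def special_process_delivers_value(uptime_pct, sla_uptime_pct,
                                   online_sales_share, csf_share):
    sla_met = uptime_pct >= sla_uptime_pct
    csf_met = online_sales_share >= csf_share
    return sla_met and csf_met

# Dell-style example: CSF = more than 50% of goods sold online.
print(special_process_delivers_value(
    uptime_pct=99.9, sla_uptime_pct=99.5,
    online_sales_share=0.55, csf_share=0.50))
```

Note what this cannot tell you, which is the interview's caveat: whether you are delivering that value at the lowest possible cost.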
Is it a good thing to have more than one CSF?
Yes, you'd have more than one, definitely. There'd be something to do with marketing, something to do with finance or whatever. If you are an airline, the CSF is yield on your seats, and you'd have systems that have been customised in a particular way to optimise it.
The difficult thing with these special processes is knowing if you are doing it as cheaply as possible. You know your systems must be delivering value because if you take them away the thing will crash. You can see if you are satisfying your CSFs, and just meeting the SLAs. But the only way to find out if you are getting it the cheapest is to find somebody else who is in exactly the same boat as you. This is very hard.
But are people that worried about cost?
Yes. Once it works, the next thing is cost. Every year, in almost every company, the finance director wants to keep costs as low as possible.
When the IT budget comes up someone says 'Why do we do this, can we do this better, can we chop it out?' All those arguments come up, and the IT guy has to justify his existence again. Very few companies are in the luxury position of being able to ignore cost.
Your third category is strategic systems. What are they?
These are the things that nobody else is doing that will enable your company to power ahead with some new initiative. An example would be Tesco's home shopping or the Woolwich Open Plan banking service. You're not necessarily betting the whole company on it, but it's a major gamble - it's strategic.
Now, you might think these would be dreadfully difficult to monitor, because again there is nobody to compare yourself with. But they're not. A proper business case is always produced for these projects, and then it's followed like crazy right at the top. So you know exactly whether the whole initiative is working. If it were home shopping, for example, you'd know how much people are buying, [whether] the whole thing is profitable and so on.
So you know the overall benefit in some detail. But it's impossible to work out the value being delivered by IT in all that. The IT component is no more critical than many of the other things. So in the home shopping example, if you took away the delivery vans the thing would grind to a halt, and if you took away the IT it wouldn't work either. It's like the chain on a bicycle: you know it's necessary, but it's impossible to separate out the value being delivered by that particular component.
What if you make a change to it? For example, in the home shopping example, you might add wireless application protocol (Wap) - I think it's PC-based at the moment.
Yes, if you added Wap it would be an incremental change. So you could say we've added Wap, now let's see what's happened to the orders. You've changed one thing without changing the rest. But that's unusual.
What you can normally ask with these strategic initiatives is whether the thing as a whole is delivering value. You usually know exactly what the whole thing is delivering, because the whole company is watching it closely. The chairman is probably seeing a report every week.
Your fourth category is research and development (R&D).
All companies do R&D. The simplest way is trying something out. You might just buy something to see how well it works, like a flat panel display. Or you do a pilot - you just try it out and see what happens. Things go through phases. For example, with Wap phones, first you want to know whether they work at all, and then later you get companies such as the Woolwich piloting Wap services.
The question is, what's the return? Is it adding any value? The trouble with R&D is that it plays out over years. If you look at what you spend this year, the benefit won't appear until further down the line. The benefits you get now come from what someone did in the past - if it worked.
But often what people do is look at what they spend on R&D now, and then they look for the benefit now! That's a mistake that many people make. We get hit like this as a company. Someone says: 'What are we paying OTR Group for, and what did we get?' And they don't know. But if they say 'What did we pay them five years ago, and what did we do as a result of that, and what happened?' you get a different picture.
So what you have to do with R&D is keep a history of the expenditure and follow what the benefits were over the years. You have to spread it out properly. What you notice if you do that is that, as you get near the present, there is no information at all. But you can see what benefit you received from earlier R&D spending.
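The history-keeping described here amounts to a simple ledger that pairs each year's R&D spend with the benefits later attributed back to it. A sketch, with all figures invented for illustration:

```python
# Sketch: pair each year's R&D spend with benefits later attributed
# back to it, as the interview recommends. All figures are invented.

rd_spend = {1995: 50, 1996: 60, 1997: 40, 1998: 70, 1999: 80, 2000: 90}
# Benefits recorded against the year whose R&D produced them.
# Recent years have no entry yet - their payoff is still in the future.
benefits_attributed = {1995: 200, 1996: 0, 1997: 120}

def rd_return(year):
    """Benefit-to-spend ratio for a given year's R&D, if known yet."""
    if year not in benefits_attributed:
        return None  # too recent to judge
    return benefits_attributed[year] / rd_spend[year]

for year in sorted(rd_spend):
    r = rd_return(year)
    print(year, "return:", f"{r:.1f}x" if r is not None else "not yet known")
```

The gap near the present is exactly the pattern the interview predicts: this year's spend can only be judged on the track record of the people recommending it.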
In a company, the guys responsible for R&D spending would have track records. If someone recommends something this year that has been successful in the past, you tend to believe them. It's a bit like an investment analyst - the same principles apply. But you have no idea what's going to come out of this year's spending. You'll only know that in the future.
So do these four categories - commodity services, special processes, strategic systems and R&D - cover everything in IT?
Yes. Everything you do must be in one of these four areas. There's sometimes a problem over which category to use. How do you know when a strategic initiative has turned into a routine special process such as the Dell thing, for example? Well, you don't really. You have to apply some judgement.
You take this stuff and make a summary to show what is going on. But you need to be doing this in a company which is ready to accept it. If the guy you are giving it to is disbelieving, you are wasting your time. You only measure something if there's a purpose to it and you can produce some sort of a result.
But provided you are not in a desperate environment where nobody wants to know, you can do something. This isn't rocket science, but it definitely does tell you if you are delivering value or not.
OTR Group - www.otr.co.uk - is an independent research and consultancy company with offices in London, Brussels and The Hague. It produces reports and runs regular discussion forums on IT topics for members.