Forget Year 2000. Forget European Monetary Union. The biggest problem in organisations today is data, or more precisely whether all the data being collected in company databases day in, day out, is actually useful.
The data relevance problem was highlighted in a recent survey by Benchmark Research. The survey, commissioned by Cognos, aimed to assess the perception of data warehouses among users and IT specialists in Times 1000 companies.
Data quality and integrity came high on the list of drawbacks.
No doubt IT professionals are genuinely afraid of what they might find when moving corporate data, dispersed throughout the organisation, into one massive data warehouse.
There is growing concern that over the years, generation after generation of users have interpreted application data slightly differently. Legacy applications running most large organisations rely heavily on humans to input data from terminals or PCs onto the systems. And when it comes to data entry, no matter how powerful the mainframe or mini, there is no getting away from the element of human error or, worse, the subjective judgement of a keyboard operator.
Data entry is not among the world's most glamorous professions - it's the white-collar version of the sweatshop. But these jobs still exist, despite the technological advances of recent years, and their importance to a company is easy to overlook.
Managers assume 100% accuracy from their data entry staff 100% of the time. Most of us can get away with the occasional bad day; data input staff can't. Any mistakes end up in company databases, where they sit for all eternity unless spotted and corrected.
But such mistakes are often impossible to spot, especially in large databases, in which case you're talking about finding needles in a multi-gigabyte haystack.
Over the years errors in corporate databases have accumulated to such an extent that companies risk a complete meltdown in mission-critical applications unless they tackle the problem soon. But the scale of the problem is immense. Every record in every database will need to be checked.
Although some of the work can be automated, much of it still has to be done by hand. If the Year 2000 bug is set to cost about $70 billion, cleaning up data is likely to cost several orders of magnitude more.
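To give a sense of what the automatable part of that clean-up might look like, here is a minimal sketch in Python. It is purely illustrative: the file name, field names and validation rules (a UK-style postcode pattern, a date format, a handful of mandatory fields) are assumptions for the sake of the example, not a description of any particular system. All it does is scan an exported table and flag obviously suspect values for a human to review.

import csv
import re
from datetime import datetime

# Illustrative rules only -- field names and formats are assumed, not real.
POSTCODE_RE = re.compile(r"^[A-Z]{1,2}\d[A-Z\d]? ?\d[A-Z]{2}$")

def check_record(row, line_no):
    """Return a list of (line, field, problem) tuples for one record."""
    problems = []

    # Mandatory fields must not be blank.
    for field in ("customer_id", "name", "postcode"):
        if not (row.get(field) or "").strip():
            problems.append((line_no, field, "missing value"))

    # Postcode should match the expected pattern.
    postcode = (row.get("postcode") or "").strip().upper()
    if postcode and not POSTCODE_RE.match(postcode):
        problems.append((line_no, "postcode", "unrecognised format: %r" % postcode))

    # Dates should parse and should not lie in the future.
    date_text = (row.get("date_opened") or "").strip()
    if date_text:
        try:
            opened = datetime.strptime(date_text, "%Y-%m-%d")
            if opened > datetime.now():
                problems.append((line_no, "date_opened", "date is in the future"))
        except ValueError:
            problems.append((line_no, "date_opened", "unparseable date: %r" % date_text))

    return problems

def audit(path):
    """Scan an exported CSV and print every suspect field for manual review."""
    with open(path, newline="") as handle:
        reader = csv.DictReader(handle)
        for line_no, row in enumerate(reader, start=2):  # line 1 is the header
            for line, field, problem in check_record(row, line_no):
                print("line %d: %s: %s" % (line, field, problem))

if __name__ == "__main__":
    audit("customers.csv")  # hypothetical export of one corporate table

Rules like these only catch the mechanical errors - blanks, malformed postcodes, impossible dates. The subjective misreadings that generations of keyboard operators have typed in still need a person to spot, which is where the real cost lies.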
So now companies not only have to fork out a couple of million to fix the Year 2000 problem, and a couple of million more to make IT systems handle EMU, but also have to find an even larger pot to clean up their databases. Try telling that to a CEO when all he can see today is a database working perfectly.
About 1% of organisations are expected to collapse as a result of the Year 2000 issue. How many will go under simply because the data on which they run their business is incorrect? How many have gone under already?