Organisations - and their IT functions - have become so complex that IT and security teams are being hampered in their efforts to protect their systems from hackers.
That's the warning of Dave Palmer, director of technology at Darktrace, speaking at Computing's Cloud and Security Summit this week.
"Lots of businesses are suffering from similar problems," said Palmer. "The complexity of their digital business is exploding. Everyone and everything is becoming more unique in how they act. It's a nightmare for human-led enterprise security approaches.
"Defensive teams are starting to get outpaced," he continued. "Attackers have lots of places to hit, and people still want the latest technology and the opportunities that come with it, and there's also a long tail of old stuff, and plenty of inertia, to manage too."
He explained that IT leaders expect defence teams to imagine all possible future attacks, and implement defences without impacting operations.
"But it's more important to start thinking that defensive teams are battling internal complexity rather than external attackers," commented Palmer.
The solution, he suggested, is AI.
"AI is about new ways of handling complexity. Let the AI learn what's normal in your business, learn your complexity, and then your security teams can be notified when it sees something unusual happening. It's a self-learning immune system.
"It learns about every device and person in your business, listening in to network communications, and it keeps up with the speed of real life. There's lots of AI cleverness out there, but it's no good if it only tells you what happened a week ago; it needs to be more real-time."
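Palmer did not describe Darktrace's internals, but the "learn what's normal, then flag the unusual" idea he outlines can be illustrated with a minimal per-device anomaly detector. The sketch below is an assumption for illustration only: it tracks a running mean and variance of some metric (say, bytes sent per hour) for each device using Welford's algorithm, and flags observations that sit several standard deviations outside the learned baseline. The device names and thresholds are invented.

```python
from collections import defaultdict
import math

class DeviceBaseline:
    """Learns a running mean/variance of a metric per device (Welford's
    algorithm) and flags observations far outside the learned 'normal'."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold  # flag beyond N standard deviations
        # per device: [count, mean, sum of squared deviations]
        self.stats = defaultdict(lambda: [0, 0.0, 0.0])

    def observe(self, device, value):
        """Fold one observation into the device's running statistics."""
        n, mean, m2 = self.stats[device]
        n += 1
        delta = value - mean
        mean += delta / n
        m2 += delta * (value - mean)
        self.stats[device] = [n, mean, m2]

    def is_anomalous(self, device, value):
        """True if value deviates sharply from this device's baseline."""
        n, mean, m2 = self.stats[device]
        if n < 10:                  # too little history to judge
            return False
        std = math.sqrt(m2 / (n - 1))
        if std == 0:
            return value != mean
        return abs(value - mean) / std > self.threshold
```

A real "immune system" product would model far richer behaviour than one scalar per device, but the shape is the same: learn each entity's own baseline rather than a global rule, then alert on deviation.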
Describing what his firm can offer, he explained that it uses visual story-telling to show what happens before, during and after an attack in terms security teams can understand, all put into the correct context.
"It learns from on-premises networks, data centres, colocations, clouds, and it can integrate with SaaS solutions - Office 365, Dropbox, Box, Salesforce - wherever your data is, as long as the product has a security API," he said.
Describing how it works, he said it uses a variety of machine learning techniques.
"It uses layers of probability theory to understand which machine learning models are working well, and to filter out those which appear to be behaving less effectively.
"But the security teams don't need to care about the mathematics behind it, or have a PhD in probability theory," he added.
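Palmer gives no detail on the mathematics, but one standard technique consistent with "down-weighting models that behave less effectively" is a multiplicative-weights update over an ensemble of detectors: each model's influence shrinks whenever its prediction turns out wrong. The sketch below is a generic illustration of that idea, not Darktrace's method; the detector names and rounds are invented.

```python
def update_weights(weights, predictions, outcome, eta=0.5):
    """Multiplicative-weights update: any model whose prediction did not
    match the observed outcome has its weight scaled by (1 - eta);
    weights are then renormalised to sum to 1."""
    new = {
        model: w * (1.0 if predictions[model] == outcome else 1.0 - eta)
        for model, w in weights.items()
    }
    total = sum(new.values())
    return {model: w / total for model, w in new.items()}

# Hypothetical ensemble of three detectors, equally trusted at first.
weights = {"rare-port": 1 / 3, "data-volume": 1 / 3, "login-time": 1 / 3}

# Each round: what each model predicted vs. what actually happened
# (True = "this activity was malicious").
rounds = [
    ({"rare-port": True, "data-volume": False, "login-time": True}, True),
    ({"rare-port": True, "data-volume": False, "login-time": False}, True),
    ({"rare-port": False, "data-volume": True, "login-time": False}, False),
]
for predictions, outcome in rounds:
    weights = update_weights(weights, predictions, outcome)

# "rare-port" was right every round, so its weight now dominates;
# "data-volume" was always wrong and has been heavily discounted.
```

The appeal of this family of methods is exactly what Palmer hints at: the filtering is automatic, so security teams see the ensemble's blended verdict without needing to reason about the per-model probabilities themselves.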