The world is beginning to transition from the cloud era to the era of artificial intelligence (AI), as systems and networks grow and learn. But just as the web and cloud eras brought their own threats, this new landscape has one of its own: AI itself.
Excitement and confusion abound over AI, but despite - or perhaps because of - this, the technology can pose a real danger to computer users.
"As machine learning matures into AI, nascent use of AI for cyber threat defense will likely be countered by threat actors using AI for offense," Rick Hemsley, managing director at Accenture Security, told us earlier this year.
So on the face of it, IBM's development of DeepLocker - ‘a new breed of highly targeted and evasive attack tools powered by AI' - seems to set a dangerous precedent.
There is method to the madness. IBM reasons that cybercriminals are already working to weaponise AI, and the best way to counter such a threat is to watch how it works.
While normal malware can be ‘captured' and reverse-engineered to figure out what makes it tick (and thus build a vaccine), it's much more difficult to analyse how a neural network reaches its decisions.
The company built DeepLocker to understand how existing AI models can be combined with malware techniques to create a new type of attack. Its proof-of-concept tool hides itself in other applications until it identifies its victim: when that unlucky individual is tagged (through indicators like facial recognition, geolocation and voice recognition), the malware strikes.
The AI model will only ‘unlock' the malware to begin the attack if it identifies certain criteria; these can be based on any number of attributes, including visual, audio, geolocation and system-level features. It's almost impossible to identify all possible triggers, making reverse-engineering the deep neural network (DNN) a difficult prospect.
Making it work
Most firms don't willingly install WannaCry on any system, but that's exactly what IBM did to test DeepLocker. The firm hid the ransomware in a video conferencing application so that it couldn't be detected, and trained the AI model to unlock it based on facial recognition.
When the DNN saw the right person in front of their PC, through a webcam (remember, video conferencing), it provided the key to open the payload and lock down the victim's system.
The clever part of IBM's work is that it has turned a traditional weakness of black box AI - the fact that you can't see inside to understand how it reaches its decisions - into a strength.
"A simple ‘if this, then that' trigger condition is transformed into a deep convolutional network of the AI model that is very hard to decipher," wrote IBM's Marc Stoecklin. "In addition to that, it is able to convert the concealed trigger condition itself into a ‘password' or ‘key' that is required to unlock the attack payload."
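The key-derivation idea Stoecklin describes can be illustrated with a minimal Python sketch. This is not IBM's code, and the embedding values, function names and toy XOR cipher are all illustrative stand-ins: the point is only that if the decryption key is derived from what the model outputs when it sees its target, the key never appears anywhere in the binary for an analyst to find.

```python
import hashlib

def derive_key(model_output, precision=1):
    """Quantize a model's output vector and hash it into a candidate key.
    The key exists only when the model produces the target output."""
    quantized = tuple(round(x, precision) for x in model_output)
    return hashlib.sha256(repr(quantized).encode()).digest()

def xor_cipher(data, key):
    """Toy symmetric cipher (XOR) standing in for real encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Illustration only: a stand-in for a face-recognition embedding.
target_embedding = [0.12, -0.87, 0.45, 0.33]
key = derive_key(target_embedding)
ciphertext = xor_cipher(b"payload", key)

# At attack time, seeing the target regenerates the same key:
assert xor_cipher(ciphertext, derive_key(target_embedding)) == b"payload"

# Any other face yields a different key, so the payload stays opaque:
assert xor_cipher(ciphertext, derive_key([0.5, 0.1, -0.2, 0.9])) != b"payload"
```

Because the trigger condition is buried in the network's weights rather than written as an explicit comparison, an analyst who captures the binary cannot simply read off what unlocks it.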
IBM will be discussing its work at Black Hat USA 2018 today.