AI is a threat to your security!
AI is the answer to your security problems!
For me, only one of these statements is true. I won’t waste your time by delving too deeply into what we already know about AI, namely, that it will almost certainly empower threat actors to be more sophisticated, effective and believable.
However, I would like to talk about the idea of AI as a cybersecurity ‘silver bullet’. There’s a lot of noise about how we can leverage AI in security, and I agree its use will be beneficial – in fact, Armor already uses AI for things like the automation of repetitive security tasks. What concerns me is the implication that AI, like some of the security products and services hailed before it, could become a standalone solution that somehow negates the requirement for an effective Security Operations Center (SOC).
This is exactly what the threat actors want: for us to absolve ourselves of responsibility, accountability and judgement, and to put our trust in a tech solution.
The reality is that the same reasons that make AI a compelling threat are the very reasons that should make us pause and proceed with a good deal of caution and scepticism when being offered an ‘AI Security Solution’.
Why?
1. Dirty data
AI models are fed by data, and they can process huge amounts of it at speed. Brilliant – depending on the source and quality of that data. If the data is contaminated or unreliable, the results can be biased and can even create a new attack surface for threat actors to exploit. Once that data is in, the model is effectively a black box: complex AI models often make decisions in ways that humans can’t easily understand. You can’t control how it comes to its conclusions, and it’s also open to manipulation. What if an attacker tricks the machine learning model into misclassifying threats, enabling them to breach you undetected? When you add data to an AI model, even one designed to bolster your security, you really need to understand what you’re putting in AND what you’re potentially allowing in.
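To make that misclassification risk concrete, here is a deliberately tiny, hypothetical sketch – not any real security product’s model. It uses a one-feature nearest-centroid classifier over an assumed “anomaly score”, and shows how an attacker who can mislabel a few training events drags the decision boundary until their own activity looks benign.

```python
# Toy illustration of training-data poisoning (hypothetical, one feature).
# Events are classified by whichever label's centroid is nearer their score.

def centroid(values):
    return sum(values) / len(values)

def classify(score, benign_scores, malicious_scores):
    # Nearest-centroid rule: pick the label whose average score is closer.
    b, m = centroid(benign_scores), centroid(malicious_scores)
    return "malicious" if abs(score - m) < abs(score - b) else "benign"

# Clean training data: benign events score low, malicious events score high.
benign = [0.1, 0.2, 0.15, 0.25]
malicious = [0.8, 0.9, 0.85, 0.95]

suspicious_event = 0.6
print(classify(suspicious_event, benign, malicious))  # → malicious

# Poisoning: an attacker who can contaminate the training feed mislabels a
# few high-scoring events as "benign", pulling that centroid towards 0.46.
poisoned_benign = benign + [0.8, 0.9, 0.85]
print(classify(suspicious_event, poisoned_benign, malicious))  # → benign
```

The same event is flagged before poisoning and waved through after it – and nothing in the model’s output tells you why the boundary moved.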
And hey, sometimes the AI doesn’t need any external help to create a bad outcome. Amazon’s attempt to lead the world in AI-driven recruitment almost managed to set the company’s diversity objectives back decades. The model was trained to find applicants by following patterns in the resumes/CVs received over the preceding decade; because men dominated those applications, the AI ‘learned’ that male candidates were the company’s preference and started to penalise references to women. Amazon thought it was delegating responsibility to the AI, but it wasn’t: it had given the AI the authority to act on its behalf. The result was still Amazon’s problem to fix.
2. Humans
The new generation of AI threats – whether a Business Email Compromise attempt or a deepfake call using a ‘human’ voice – still requires a human touchpoint: somebody to click on the email, act on the call, download the virus. On the threat side, humans can often be the weak link, so for some people the obvious answer lies in removing them to make things safer. Let the AI decide instead.
But AI, although clever, cannot contextualise. Unlike humans, it lacks situational awareness, judgement and the ability to prioritise. It doesn’t understand the nuances of the wider environment it’s being used in, or the industry and market context. Although sometimes we seem in a hurry to replace ourselves, humans are actually very useful: assessing threat level in context, prioritising based on risk and adjusting accordingly.
Behind every attack is a human. Ready to change their method, evolve their approach. If we try to counter that with an AI tool, we leave the attackers fighting an easy enemy incapable of critical thinking.
And yes, an AI security model may well be able to process massive amounts of data and flag threats with amazing speed. But then what? If your team is anything like the IT teams we work with, they are already drowning in alarms and alerts. They need guidance and support, not another data source without direction.
3. Distraction
Threat actors will try to rush you, distract you and manipulate you; don’t let the shiny thing that is AI distract you from your security goals in the same way. Your risk hasn’t changed. The threat is just faster, stronger and harder to detect. What AI will do is expose your lack of risk strategy. It will increase the pressure on your internal IT security team – and your external SOC – but it will not replace them.
For us, at Armor, our XDR and SOC offering will continue to evolve – as it has done in the face of every emerging threat – to effectively manage, integrate, and understand the output of AI systems. Our job just got harder, but it’s far from redundant.