As AI accelerates both cyber threat and defence, where do we need to place our bets now to avoid being outpaced?
It’s already clear that AI is reshaping both cyber attack and cyber defence at pace, with the threat now extending well beyond traditional phishing.
Executives are being impersonated in real time with cloned voices. As vulnerability discovery is being automated faster than patch cycles allow and highly targeted attacks can now be generated at industrial scale with minimal effort, recent reporting around Claude Mythos has only reinforced the growing capability of AI systems as a threat.
What’s also now certain is that AI-enabled attacks are no longer theoretical. They’re already changing the scale, speed and sophistication of cyber threat activity in real time.
The harder conversation is internal.
While much of the focus remains on external threat actors, one of the strongest themes emerging from our roundtable discussion was the growing internal challenge many organisations are creating for themselves as they deploy AI tools without fully understanding the attack surface they are introducing. Sensitive data being entered into prompts, shadow AI adoption by staff and the potential for prompt injection and model poisoning are all now real issues.
Exacerbating this further is the fact that many organisations are moving faster on adoption than governance, policy or security oversight.
What needs to happen next?
Our roundtable discussion highlighted several areas where organisations will need to adapt quickly if they are to keep pace with the evolving threat landscape.
One of the clearest themes was the growing importance of identity and behavioural detection. As AI-generated attacks become more sophisticated, there is consensus that traditional signature-based approaches are likely to become less effective far more quickly than many organisations have planned for. At the same time, the discussion also reinforced the continued importance of human judgement. Automation can handle scale and speed, but critical decision-making still requires trained people in the loop operationally and particularly during live incidents.
The ability to share and act on threat intelligence quickly was also identified as increasingly important. As attacks become more automated, the advantage is likely to sit with those organisations who are able to aggregate, interpret, understand and act on intelligence fastest.
The discussion also highlighted the growing need to secure the AI stack itself, including mitigating the risks associated with prompt injection, model poisoning and unmanaged AI adoption, areas many organisations are still underestimating.
So as AI accelerates both attack capability and defensive tooling at a pace many organisations are struggling to match, the challenge is no longer whether AI will reshape cyber security, but whether organisations are adapting quickly enough to avoid being outpaced by it.