“While there is naturally a concern that AI will replace jobs, I think that for the most part, this is unlikely to occur in the area of cybersecurity.  I think the huge opportunity here is to automate the drudgery of laborious analysis tasks, investigating false positives by hand, manually pulling data sources together.  If we get this revolution right, this will supercharge our smart people, allowing them to get far more done with far less than ever before.” 

Alastair Paterson is a cyber security entrepreneur. He is currently CEO of Harmonic Security which he co-founded in 2023 to help enterprises accelerate the adoption of AI without worrying about the security and privacy of their sensitive data. Previously, he co-founded Digital Shadows in 2011, a leader in threat intelligence and digital risk protection. Digital Shadows was acquired by ReliaQuest for $160m in 2022. It garnered multiple awards and had a global presence with offices in London, San Francisco, and Dallas. Alastair holds a first-class MEng degree in Computer Science from the University of Bristol and has a background in cyber security and risk management, working with secure government and Fortune 1000 clients. Before Digital Shadows, he was with BAE Systems Applied Intelligence, focusing on national security clients. 

Enjoy the following interview with Alastair where part of the focus is on the implications of AI on cyber security. 

How do you perceive the role of AI in the future of cybersecurity defense mechanisms?

This is a watershed moment in both attack and defence for cybersecurity.  Sophisticated spear-phishing attacks that were previously time-consuming to launch can now be automated en-masse.  Deep fakes are already appearing, subverting identity controls. Traditional defences will be easily overwhelmed.  But there is also a surge of innovation for the defenders and a new set of cybersecurity companies, born in the Gen AI (generative artificial intelligence) era will emerge to defend enterprises from these attacks.   

What are the ethical implications of using AI in cybersecurity, and how should organizations address them? 

In general, the use of Gen AI in cybersecurity should help us do more of what we have always done, faster and more efficiently.  But, as with all AI deployments we need to be cognisant of some of the inherent weaknesses in this technology that have ethical implications – for example, hallucination and biases.  It is key to understand how AI suppliers have addressed those issues in their data sets, especially for anything touching personal data they hold about individuals. 

How can organizations ensure compliance when integrating AI into their cybersecurity strategies?  

The regulatory environment is moving very quickly around AI, but the same old regulations such as GDPR may also be applicable here for solutions that involve handling customer data, so we need the same data hygiene practices and third-party risk assessments we have had in place for many years.  The EU AI Act itself is worth tracking and will have many implications, but mostly on companies building models, not the average security team defending their organization from attack.  As always, tread cautiously but regulation is not a reason to not engage with new cybersecurity technologies. 

Can you share insights on the most effective AI-driven threat detection and response strategies? 

The ability to automate the drudgery of existing human analysts’ work is going to have a dramatic impact on our defences because analysts will now have time to focus on the real threats to their business, improving threat detection and response as a result.  In addition, some Gen AI techniques show promise in identifying risks that were previously undetected by enterprises.  

In what ways can AI be a game-changer in incident response and crisis management?  

AI has the potential to dramatically reduce response times in incident response and summarise complex situations quickly for decision makers in a crisis.  It will take time to make that work fluidly, but there are a lot of smart teams working on these problems and I believe by 2025 we will see some significantly different approaches proven out in the SOC.  

How do you balance the potential of AI in cybersecurity with the risks it introduces? 

While we are all excited about the potential for Gen AI in security, we need to tread carefully while the technology proves itself.  In particular, giving automated decision-making power to LLMs is something to be very careful about.  They have proven inherently vulnerable to prompt injection attacks that can modify their behaviour, so the only way I would deploy an LLM would be in a form where access to it was strictly controlled and monitored and no external third parties had access.  

How can organizations stay ahead of AI-powered cyber threats?  

Threat intelligence about new AI risks is just as important as it was about traditional threats.  On top of that, security leaders should be investigating, trialling and testing new defence mechanisms born in the Gen AI era to help educate themselves as to what is coming and will be possible in both attack and defence. 

What is your approach to evaluating and integrating new AI tools into security operations?  

With all the excitement around Gen AI it is still important to be patient with new technologies as they mature.  Take a structured approach to experimentation, working out the top use cases to look at first.   

How do you see AI transforming the role of cybersecurity professionals in the next decade?  

While there is naturally a concern that AI will replace jobs, I think that for the most part, this is unlikely to occur in the area of cybersecurity.  I think the huge opportunity here is to automate the drudgery of laborious analysis tasks, investigating false positives by hand, manually pulling data sources together.  If we get this revolution right, this will supercharge our smart people, allowing them to get far more done with far less than ever before.