Cybersecurity experts talk to Code Red Communications’ Robin Campbell-Burt about the challenges and opportunities of AI in the sector for the coming year.
There’s no doubt that artificial intelligence (AI) has made its mark this year. From AI-powered protein-folding models tackling medical mysteries to autonomous vehicles now in use in multiple cities, the pace of AI and machine learning (ML) innovation around the world has been staggering.
But some would say the hype for AI, or as Mike Britton, CIO of Abnormal Security, puts it, the “gold rush”, is well and truly over. And with great advancement often comes great risk, as AI-powered cyberattacks and deepfake scams reach unprecedented levels of sophistication.
“AI-enhanced threats will take many forms, from phishing emails generated with flawless grammar and personal details to highly adaptive malware that can learn and evade detection systems,” says Merium Khalid, director of SOC offensive security at Barracuda.
Khalid is not alone in her thinking, as Pedram Amini, chief scientist at OPSWAT, believes that next year, “ML-assisted scams will increase significantly in their volume, quality and believability”.
But what kind of AI issues should we be most worried about as we head into the new year, and what impact will this have on organisations and the industry alike? We asked a range of industry experts this very question, to help you feel more prepared for the year ahead.
A new wave of AI risks and threats
“In 2025, we expect to see more AI-driven cyberthreats designed to evade detection, including more advanced evasion techniques bypassing endpoint detection and response (EDR), known as EDR killers, and traditional defences,” Khalid argues.
“Attackers may use legitimate applications like PowerShell and remote access tools to deploy ransomware, making detection harder for standard security solutions.”
On a more frightening note, Michael Adjei, director of systems engineering at Illumio, believes that AI will offer something of a field day for social engineers, who will trick people into creating breaches themselves: “Ordinary users will, in effect, become unwitting participants in mass attacks in 2025.
“Social engineers will exploit popular applications, social media features and even AI tools to deceive people into inadvertently running exploits for web-based or script-based vulnerabilities.”
Adjei expects that “attackers will employ a dual-use strategy, where a legitimate tool or application operates as expected but harbours malicious intent in the background”.
“This approach will make victims appear culpable in potential mass exploitation incidents, enabling the true attacker to remain concealed in the shadows.”
It’s not all doom and gloom, however, with experts still hopeful about the potential AI can offer us.
AI and the future of education
Suraj Mohandas, VP of strategy at Jamf, feels that AI is a double-edged sword when it comes to educating tomorrow’s professionals, as they’re “seeing a fundamental shift in how technology and mobile devices are being utilised in the classroom”.
The level of improved teaching that AI can provide is truly exciting. “Administrators and teachers have moved beyond teaching technology skills (and having to be taught technology skills themselves) to using technology to enhance learning across all subjects,” Mohandas said.
However, these benefits don’t come without risks, argues Mohandas. “A major downside of AI is that attackers are leveraging the technology to step up the speed and specificity of their attacks.
“The attacks are getting more and more targeted, and the more student-specific data attackers can get their hands on to fuel the specificity of their attacks, the more attacks they’ll launch … and the more successful those attacks will be.”
In order to keep students safe, Mohandas believes there’ll need to be “a strong push for more safety mechanisms to be installed on student devices, specifically when it comes to data protection, threat prevention and privacy controls”.
“Educational institutions will be encouraged (or perhaps required) to improve encryption protocols and access controls, use AI-powered threat detection to fight AI-powered attacks, use systems that provide real-time alerts, and step up their game when it comes to student data privacy.”
The education sector will be empowered by AI and will simultaneously need to ramp up its defences against AI-driven attacks to stay safe, but what about businesses?
‘Orgs need to be ready’
Max Vetter, VP of cyber at Immersive Labs, says that “organisations need to be ready”.
“With greater adoption of AI will come increased cyberthreats, and security teams need to remain nimble, confident and knowledgeable.”
Similarly, Britton argues that teams “will need to undergo a dedicated effort around understanding how [AI] can deliver results”.
“To do this, businesses should start by identifying which parts of their workflows are highly manual, which can help them determine how AI can be overlaid to improve efficiency. Key to this will be determining what success looks like. Is it better efficiency? Reduced cost?”
Meanwhile, Ori Bendet, VP of product management at Checkmarx, believes it’s important to fix what matters most. “Too much noise drowns out the real threats,” Bendet says.
“Next year will see more organisations focusing on consolidating their stack to reduce complexity and the noise. If you can’t fix everything – which in terms of cybersecurity is the reality that most organisations are faced with – then you need to focus on fixing what most matters to your business.”
Cyberattacks are expensive, and their increasing frequency, partly fuelled by AI, means that organisations will also need to consider future costs and regulatory requirements imposed by governments.
Pierre Samson, co-founder and CRO of Hackuity, believes it will be vital to find a balance. “Hitting the big cybersecurity compliance deadlines – NIS2 and DORA – was top of the agenda for many organisations in 2024 (and still will be in 2025). This meant devoting significant budgets where it was most needed to meet the requirements,” Samson says.
“One of the biggest challenges for next year will be balancing cybersecurity spend: ticking the boxes on compliance while addressing the security gaps that matter most for each individual organisation. Compliance demands, whilst absolutely necessary, shouldn’t distract security leaders from focussing on these more strategic issues.”
It’s clear then that the rapid advancements in AI present both golden opportunities and formidable challenges. While AI continues to revolutionise industries, enhance efficiency and transform education, it also exposes new vulnerabilities that cybercriminals will be quick to exploit.
As we look ahead to 2025, we must all prepare for a landscape where AI-driven threats become more sophisticated, targeted and pervasive than ever before. Proactive cybersecurity measures, prioritisation and leveraging AI to counteract its own risks will be key to navigating this winding path with resilience. The future of AI is undoubtedly exciting, but vigilance and adaptability will be key to ensuring it remains a force for good.
Robin Campbell-Burt is CEO of Code Red Communications. With more than 20 years’ experience in public relations, Robin leads specialist cybersecurity PR agency, Code Red Communications, working with some of the biggest companies in the sector, as well as upcoming innovators entering the space for the first time.