CINC 2024: How will AI affect the world of cybersecurity?



At Cyber Ireland’s annual cybersecurity conference, experts discussed the implications of AI on the threat landscape and the power of data.

Yesterday (26 September), Cyber Ireland hosted its annual cybersecurity conference for 2024 at Lyrath Estate Hotel in Kilkenny. The day-long Cyber Ireland National Conference (CINC) featured a host of presentations and panels from highly regarded figures in the sci-tech world, all dealing with the major cybersecurity trends of today.

A popular topic in cybersecurity at the moment is how artificial intelligence will affect the sector, both in terms of threats and defence. A Techopedia report from earlier this year highlighted the complicated relationship between AI and cybersecurity, as the disruptive tech can be used both to boost cyberattack capabilities and to help defenders spot threats more quickly and effectively.

A panel of experts at CINC delved further into this complicated relationship, exploring topics such as the importance of awareness and how artificial intelligence – particularly generative AI – might change the threat landscape.

Lowering the barrier

“The history of cybercrime has always been a race,” said Senan Moloney, the global head of cybercrime and cyber fraud fusion at Barclays. This race between attackers and defenders, according to Moloney, is based on two parameters: pace and scale.

One of the major ways that AI can give cybercriminals a leg up in this race is its ability to lower the barrier to entry for cybercrime. As Moloney explained, threat actors can sidestep traditional requirements for cybercrime, such as extensive knowledge of programming languages or systems, through simple and “natural” communication with advanced AI.

As for the attack methods themselves, the panel discussed how AI-based cyberattacks such as deepfakes are growing in sophistication.

Stephen Begley, proactive services lead for UK and Ireland at Mandiant, described how he and his team conducted a red team exercise – a cyberattack simulation to test an organisation’s defence capabilities – in which they replicated a senior executive’s voice using AI technology and called various colleagues with requests. Begley said that the simulated attack succeeded, as the targeted employees fell for the deepfake voice.

This incident highlights the importance of education and the upskilling of employees to recognise the capabilities of AI-driven attacks and how they can be used to infiltrate an organisation. As Moloney put it, without the proper education concerning this tech, “you won’t be able to trust your own senses”.

AI literacy

The importance of adequate education, specifically AI literacy, was one of the most prominent talking points of the panel. Begley warned that, without proper AI literacy and awareness, people can fall into the trap of anthropomorphising these systems. He explained that we need to focus on understanding how AI works and avoid attributing human characteristics to AI tools.

The focus should be on understanding AI’s limitations and how the tech can be abused.

Understanding the limitations and risks of AI also needs to be a whole-of-organisation requirement. Senior executives and boards of management need to know the risks just as much as everyone else, according to Dr Valerie Lyons.

Lyons, the director and COO of BH Consulting, talked about how company leaders tend to jump on the AI bandwagon without fully understanding the tech or the need for it. “AI is not a strategy,” she explained, adding that companies need to focus on incorporating AI into a strategy rather than making it the focal point.

Accurate, not smart

As with any in-depth discussion of AI, there’s always the risk of panic. AI is, of course, a key concern for a lot of people, especially due to predictions that the tech will replace some human jobs.

Despite differing opinions on the scale of potential job losses, there was agreement that, at the very least, AI will change certain jobs. Moloney spoke about his belief that some traditional cybersecurity roles will be altered, predicting the “death” of the analyst role, which he believes will transition to something more along the lines of an engineer or “conductor” due to AI integration.

Prof Barry O’Sullivan also spoke about the fears around AI and LLMs, humorously comparing the tech to “the drunk guy at the end of a bar” who will talk to you about whatever you want in whatever way you like, while lacking full cognisance and advanced intelligence.

For O’Sullivan, who is the director of the Insight Centre for Data Analytics, the main concerns around AI should be in relation to regulations and the consequences of malfunctions. He spoke about how the attention should be on the risks to people’s “fundamental rights”, citing concerns around controversial applications like biometric surveillance and how they can be misused.

He added that while some current-day AI systems may seem dauntingly intelligent, at the end of the day they are tools that are trained on data and are not able to “think” in their current state. He also highlighted how these systems currently rely on human-produced data, and referenced how studies have shown that AI systems tend to degrade when trained on their own output.

“[AI is] not smart, just accurate,” he stated. “It is accurate because data is powerful.”


Colin Ryan is a copywriter/copyeditor at Silicon Republic

editorial@siliconrepublic.com