OpenUK CEO Amanda Brock said the new government should ‘learn from the recent past’ and not let control of AI end up ‘in the hands of a few’.
With the Labour Party securing a landslide victory in the UK general election today (5 July), many are wondering what the change in power spells for the country’s thriving tech sector.
The UK became the first major country to host an AI Safety Summit last November that aimed to promote greater international collaboration in the emerging technology. It resulted in the Bletchley Declaration – an agreement between countries including the US, China, India, Ireland and the UK – to work together on addressing some of the risks associated with the breakneck advancement of AI.
In February, the UK set aside more than £100m to support the creation of nine new research hubs across the country focused on developing responsible AI. Two months prior, Microsoft committed to investing £2.5bn in the UK over the next three years to expand its AI data centre footprint and foster research in the area.
‘An opportunity to arrest the slide’
Now, some prominent voices in tech think it’s time to double down on the progress made so far. Marc Warner, CEO and co-founder of Faculty, an AI firm that has worked extensively with the UK government under Conservative rule, thinks that with Labour now at the helm it is time for the UK to “release the handbrake and fully embrace” the benefits of what he calls “safe, narrow AI”.
“For too long, governments have accepted managed decline in public services – with worsening outcomes eroding trust in institutions. AI offers an opportunity to arrest that slide and to create experiences for citizens akin to those they receive in the private sector,” said Warner.
Founded in 2014, Faculty works to bring AI technology into various sectors, such as defence, life sciences and the public sector. The company gained notoriety when it was hired to work with Dominic Cummings on the UK’s Vote Leave campaign, and it went on to win a sizeable number of UK government contracts in a short timespan.
“Remember – AI has been safely and successfully used for decades, from predicting train arrivals to preventing bank fraud. So Starmer must unashamedly embrace narrow AI tools with specific, predetermined goals and proven to be both safe and effective.”
Amanda Brock, CEO of open-source non-profit OpenUK, said that the new government should “learn from the recent past” and not let control of AI end up in the hands of a few.
“To protect the UK’s AI leadership, Labour must look to open AI wherever possible … but it must do this with a considered understanding of what it means to open each component that makes it up, from models to data, and what it means to be partially or fully open,” Brock said.
“It’s complex, yes, but we expect our leaders to be able to understand complex tasks and to cut through the distraction of the noise created by those who are able to shout loudest. The biggest risk the UK faces from AI today is that our leaders fail to learn the lessons of the last 20 years of tech and do not enable AI openness.”
In May, the UK released its own safety testing platform to help organisations around the world develop safe AI models. Known as Inspect, the open-source platform is a software library that lets testers such as start-ups, researchers and governments assess the capabilities of AI models and produce scores for various criteria based on the results.
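By way of illustration, a minimal evaluation built with Inspect’s Python API might look like the sketch below. This assumes the open-source inspect_ai package and an API key for a supported model provider; the toy task, dataset and model name are illustrative assumptions, not details from the platform’s release.

```python
# A sketch of a minimal Inspect evaluation (assumes the open-source
# inspect_ai package and credentials for a supported model provider).
from inspect_ai import Task, task, eval
from inspect_ai.dataset import Sample
from inspect_ai.scorer import match
from inspect_ai.solver import generate

@task
def capital_cities():
    # A toy dataset: each Sample pairs a prompt with its expected answer.
    return Task(
        dataset=[
            Sample(input="What is the capital of France?", target="Paris"),
            Sample(input="What is the capital of Japan?", target="Tokyo"),
        ],
        plan=[generate()],  # ask the model to answer each prompt
        scorer=match(),     # score outputs against the target answers
    )

# Run the evaluation and produce scores for the chosen model.
eval(capital_cities(), model="openai/gpt-4o")
```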
Lack of concrete cybersecurity plan
With AI rapidly growing in complexity, attacks from bad actors exploiting the technology are also expected to become more frequent, according to data protection platform Protegrity, meaning the new government will have to treat cybersecurity as a priority.
“As AI is a disruptor and presents breakthroughs in the ability to process logic differently, it is attracting attention from businesses and consumers alike, which creates the potential for their data to be put at risk,” the firm said in a statement.
“Meanwhile, the cybercrime industry will be quickly adopting AI technologies, informing more innovative AI-based attacks. As such, through 2024 there may continue to be an increase in AI-based attacks until businesses and government bodies can put in place robust and ethical AI cybersecurity measures. The importance at this time will be in employing safe data practices so private information is always protected.”
Spencer Starkey, vice-president of EMEA at cybersecurity company SonicWall, said that as hacking tactics become “more sophisticated”, so too must the UK’s national cybersecurity strategy – but that the lack of “concrete cyber plans” from political parties is a concern.
“Governments hold vast amounts of sensitive data, and a successful cyberattack could have severe consequences, including identity theft, espionage or disruption of essential services and critical infrastructure,” Starkey said.
“Moreover, governments set cybersecurity standards and policies that private sectors often follow, so inadequate regulations could leave both public and private sectors vulnerable. Therefore, emphasising a robust, future-oriented cyber strategy should be a top priority for both the current government and potential successors. This will ensure national security and instil public trust in the digital age.”
Keir Starmer and his wife Victoria Starmer cast their votes in the UK general election on 4 July 2024. Image: Labour Party/Flickr (CC BY-NC-ND 2.0)