The UK Home Office collaborates with a start-up to create software for detecting extremist content.
The UK government has publicised plans to roll out a tool that detects jihadist and other extremist content and blocks it from being viewed.
According to the BBC, UK home secretary Amber Rudd said the government may end up legally forcing tech companies to use the tool. “We’re not going to rule out taking legislative action if we need to do it,” Rudd warned.
Publicly funded AI tool
£600,000 worth of public funding was used to develop the tool in partnership with a London start-up, ASI Data Science. The UK government claims the tool is able to detect 94pc of ISIS-related propaganda with 99.9pc accuracy.
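Taken at face value, those two figures describe different things: the detection rate is the share of propaganda the tool catches, while the accuracy figure limits how often ordinary content is wrongly flagged. A minimal sketch of the arithmetic, using an entirely hypothetical platform size and prevalence, and assuming "99.9pc accuracy" refers to the rate of false flags on ordinary content, shows why the second number matters at scale:

```python
# Rough, illustrative arithmetic only. The corpus size, prevalence and the
# reading of "accuracy" as a per-video false-positive rate are assumptions,
# not figures from the Home Office or ASI Data Science.

total_videos = 1_000_000        # hypothetical uploads reviewed in a period
extremist_videos = 1_000        # hypothetical number of propaganda videos

detection_rate = 0.94           # claimed share of propaganda caught
false_positive_rate = 0.001     # assumed reading of "99.9pc accuracy"

caught = extremist_videos * detection_rate
missed = extremist_videos - caught
wrongly_flagged = (total_videos - extremist_videos) * false_positive_rate

print(f"Caught:          {caught:,.0f} propaganda videos")
print(f"Missed:          {missed:,.0f} propaganda videos")
print(f"Falsely flagged: {wrongly_flagged:,.0f} ordinary videos for review")
```

Under these assumed numbers, the tool would catch 940 of the 1,000 propaganda videos and miss 60, while sending roughly 999 ordinary videos to human reviewers, a workload question that is central to whether smaller platforms can actually use such a system.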
The tool is intended for small companies, which often face heavy moderation burdens they may not have the resources to cope with, and is meant to help them remove undesirable content effectively and in good time.
Rudd told the BBC the tool demonstrated that the government’s request for a stricter examination of extremist content was not an unreasonable one. She said: “The technology is there. There are tools out there that can do exactly what we’re asking for. For smaller companies, this could be ideal.”
In a prepared statement, Rudd said: “We know that automatic technology like this can heavily disrupt the terrorists’ actions, as well as prevent people from ever being exposed to these horrific images.
“This government has been taking the lead worldwide in making sure that vile terrorist content is stamped out.”
Dr Marc Warner of ASI Data Science told BuzzFeed News that the tool is an AI algorithm that works by “spotting subtle patterns in the extremist videos that distinguish them from normal content, from the rest of the internet”.
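Warner’s description maps onto a standard supervised-classification setup: derive features from each video, then train a model to separate known propaganda from everything else. The sketch below is not ASI Data Science’s method, only a generic illustration of that pattern using scikit-learn and placeholder feature vectors:

```python
# Generic illustration of binary content classification, not ASI Data
# Science's actual system. Feature vectors here are random placeholders;
# in practice they would be derived from the videos themselves.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical training data: 128-dimensional feature vectors per video,
# labelled 1 for known propaganda and 0 for ordinary content.
normal = rng.normal(loc=0.0, scale=1.0, size=(5000, 128))
propaganda = rng.normal(loc=0.5, scale=1.0, size=(200, 128))

X = np.vstack([normal, propaganda])
y = np.array([0] * len(normal) + [1] * len(propaganda))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# A simple linear classifier learns a boundary between the two classes;
# class_weight="balanced" compensates for propaganda being rare.
clf = LogisticRegression(max_iter=1000, class_weight="balanced")
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test), digits=3))
```

The point of the sketch is the shape of the problem, spotting patterns that separate a small, rare class from the bulk of ordinary content, rather than any specific model choice.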
Those behind the new tool said that most of the larger tech firms already have teams working on similar strategies, and the Home Office said it would target smaller platforms first. The algorithm is not yet complete, so it will be some time before companies can avail of it in the fight against terrorism and extremist content.
Fighting extremist content
The UK government and other European administrations have been stark in their warnings to tech firms about the dissemination of terrorist materials on their platforms. In September 2017, European political leaders said heavy fines would be in order for companies that didn’t remove extremist content fast enough.
Last December, EU security commissioner Julian King said that there had been improvements in the area, but that more work needed to be done by industry players to combat the tide of extremist propaganda.