Forget Bible, beer and bullets. Facebook believes the solution in the battle against terrorism is people, AI and machine learning.
Facebook has vowed to defeat terrorism using artificial intelligence and human expertise, in order to rid the network of inflammatory words, images and video.
The news came as the social network launched a new Hard Questions blog series on counterterrorism, and an email address for the public to send suggestions: hardquestions@fb.com.
The social network said last night (15 June) that, as well as AI, it has put together a force of 150 counterterrorism specialists and 3,000 community operations specialists to weed out groups such as ISIS from the network.
‘We want Facebook to be a hostile place for terrorists’
– MONIKA BICKERT
“There’s no place on Facebook for terrorism,” said Monika Bickert, director of global policy management at Facebook.
“We remove terrorists and posts that support terrorism whenever we become aware of them. When we receive reports of potential terrorism posts, we review those reports urgently and with scrutiny. And, in the rare cases when we uncover evidence of imminent harm, we promptly inform authorities.
“Although academic research finds that the radicalisation of members of groups like ISIS and Al Qaeda primarily occurs offline, we know that the internet does play a role, and we don’t want Facebook to be used for any terrorist activity whatsoever.”
Bickert warned that there is no easy technical fix and that it is an enormous challenge to keep people safe on a platform used by 2bn people every month, who communicate in more than 80 languages.
But AI will be a key weapon in the fight against terrorism.
“We are currently focusing our most cutting-edge techniques to combat terrorist content about ISIS, Al Qaeda and their affiliates, and we expect to expand to other terrorist organisations in due course,” Bickert said.
AI at the forefront of Facebook’s war on terrorism
Here are some of the tools Facebook is using to defeat terrorists:
Image matching: Any time someone tries to upload a terrorist photo or video, Facebook’s systems check whether it matches a known terrorism photo or video. “This means that if we previously removed a propaganda video from ISIS, we can work to prevent other accounts from uploading the same video to our site. In many cases, this means that terrorist content intended for upload to Facebook simply never reaches the platform.”
Language understanding: Facebook recently started experimenting with AI to understand text that might be advocating for terrorism. The company is currently experimenting with analysing text that it has already removed that praises or supports terrorist organisations such as ISIS or Al Qaeda, and has developed text-based signals to show what content is propaganda. “That analysis goes into an algorithm that is in the early stages of learning how to detect similar posts. The machine learning algorithms work on a feedback loop and get better over time,” explained Bickert.
Removing terrorist clusters: When Facebook identifies pages, groups, posts or profiles as supporting terrorism, it uses algorithms to try to identify related material. It also looks at other signals, for example, if an account is friends with a high number of accounts that have been disabled for terrorism or share the same attributes as a disabled account.
Recidivism: Bickert said that Facebook has also got better at detecting new fake accounts created by repeat offenders, which has reduced the time that recidivist terrorist accounts remain on Facebook. “This work is never finished because it is adversarial, and the terrorists are continuously evolving their methods, too,” she said.
Cross-platform collaboration: Facebook is working on systems to take action against terrorist accounts across all platforms in its family of apps, including WhatsApp and Instagram, a move Bickert described as “indispensable” to its efforts.
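The image-matching approach above can be illustrated with a minimal sketch: fingerprint each upload and check it against a database of fingerprints from previously removed content. Everything here is an illustrative assumption, not Facebook’s actual system – in particular, production systems use perceptual hashes that survive re-encoding and cropping, whereas the cryptographic hash below only catches exact byte-for-byte copies.

```python
import hashlib

# Hypothetical store of fingerprints from previously removed content.
# (This value is the SHA-256 digest of b"hello world", standing in for
# a real propaganda video's fingerprint.)
KNOWN_HASHES = {
    "b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9",
}

def fingerprint(data: bytes) -> str:
    """Return a hex digest used as the upload's fingerprint."""
    return hashlib.sha256(data).hexdigest()

def is_known_content(data: bytes) -> bool:
    """True if the upload matches previously removed content,
    in which case it never reaches the platform."""
    return fingerprint(data) in KNOWN_HASHES

print(is_known_content(b"hello world"))        # True: matches a stored hash
print(is_known_content(b"new, unseen upload")) # False: passes through
```

Because the lookup is a set-membership test, the check stays fast even as the fingerprint database grows, which is what makes blocking at upload time practical.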
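One of the cluster signals described above – an account friended with a high number of accounts disabled for terrorism – can also be sketched simply. The threshold, account names and data layout below are illustrative assumptions; note the signal triggers human review, not automatic removal.

```python
# Hypothetical set of account IDs already disabled for terrorism.
DISABLED_FOR_TERRORISM = {"acct_17", "acct_42", "acct_99"}

def disabled_friend_ratio(friends: list[str]) -> float:
    """Fraction of an account's friends that were disabled for terrorism."""
    if not friends:
        return 0.0
    hits = sum(1 for f in friends if f in DISABLED_FOR_TERRORISM)
    return hits / len(friends)

def needs_review(friends: list[str], threshold: float = 0.5) -> bool:
    """Flag the account for human review when the disabled-friend
    ratio crosses an (assumed) threshold."""
    return disabled_friend_ratio(friends) >= threshold

print(needs_review(["acct_17", "acct_42", "acct_7"]))  # 2 of 3 disabled: True
print(needs_review(["acct_1", "acct_2", "acct_3"]))    # 0 of 3 disabled: False
```

In practice such a ratio would be only one feature among many (shared attributes with disabled accounts, related pages and groups), combined before anything is escalated.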
People also at the core of counterterrorism
On a human rather than machine level, Bickert said that Facebook still needs people in the fight to keep people safe.
“We increasingly use AI to identify and remove terrorist content, but computers are not very good at identifying what constitutes a credible threat that merits escalation to law enforcement. We also have a global team that responds within minutes to emergency requests from law enforcement.”
To that end, she said that Facebook’s community operations teams around the world are growing by 3,000 people over the next year. These teams will work in dozens of languages to review reports and determine context. The reviewers’ difficult work will be supported by onsite counselling and resiliency training.
Bickert also revealed that more than 150 people at Facebook are exclusively or primarily focused on countering terrorism as their core responsibility. The counterterrorism team – which can speak nearly 30 languages – includes academic experts on counterterrorism, former prosecutors, former law enforcement agents and analysts, and engineers.
Industry cooperation – such as the shared database of hashes, or unique digital fingerprints, maintained with players such as Microsoft, Twitter and YouTube – will also be key, as will collaboration with organisations such as the EU Internet Forum, the Global Coalition Against Daesh and the UK Home Office.
On the subject of encryption, Bickert said that the social network cannot read the content of individual encrypted messages and there are many legitimate uses for the technology among journalists, NGO workers and human rights campaigners. She said Facebook does respond to valid law enforcement requests as long as they are consistent with applicable law and Facebook’s own policies.
Counterspeech training by credible speakers, and student competitions such as the P2P Facebook Global Digital Challenge – which has shared 500 anti-hate and extremism campaigns with more than 56m people in 68 countries – are also part of the company’s antiterrorism arsenal.
“We want Facebook to be a hostile place for terrorists. The challenge for online communities is the same as it is for real-world communities: to get better at spotting the early signals before it’s too late,” Bickert said.
“We are absolutely committed to keeping terrorism off our platform, and we’ll continue to share more about this work as it develops in the future.”