A Cambridge-based start-up is attempting to develop AI that could ease the pressure on content moderators.
Unitary, a start-up that is developing AI for content moderation, has announced that it raised £1.35m in seed funding. The company’s tech, which is still in development, aims to automatically detect harmful content online, so humans don’t have to.
The news comes as Tim Berners-Lee marked the 31st anniversary of the world wide web by highlighting the harm that many users experience in their day-to-day life online.
The world’s biggest social media platforms all host abusive, violent, graphic and distressing content that needs to be removed and, in many cases, reported to authorities.
Moderating this content is a difficult job, and it has prompted workers to file lawsuits against Facebook and CPL, for example. Some moderators have reported psychological trauma from their work, but there is currently no real alternative to human moderation, and big tech companies maintain that people are still a necessary part of the process.
Unitary’s solution
Based in Cambridge, Unitary was co-founded by Sasha Haco and James Thewlis. The company raised its seed funding from investors including Jane VC, SGH Capital and a number of angel investors, in a round led by Rocket Internet’s GFC.
Unitary previously raised pre-seed funding from Entrepreneur First as an alumnus of its company builder programme.
Unitary CEO Haco, who previously worked with Stephen Hawking during her PhD, recently told TechCrunch: “Every minute, over 500 hours of new video footage are uploaded to the internet, and the volume of disturbing, abusive and violent content that is put online is quite astonishing.
“Currently, the safety of the internet relies on armies of human moderators who have to watch and take down inappropriate material. But humans cannot possibly keep up. Repeated exposure to such disturbing footage is leaving many moderators with PTSD.”
Haco and Thewlis want to use AI to make the internet safer. Their team has developed a proprietary AI technology that uses computer vision and graph-based techniques to recognise harmful content at the point of upload.
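Unitary has not published details of its system, but the general idea of catching harmful material at the point of upload can be illustrated with a generic sketch: sample frames from an incoming video and run each one through an image classifier before the file goes live. This is not Unitary’s actual pipeline, and the classify_frame function below is a hypothetical stand-in for any harmful-content model.

```python
# A generic illustration of screening video at the point of upload
# (not Unitary's actual system). Frames are sampled from the incoming
# file and scored by a classifier before the upload is accepted.
import cv2  # OpenCV, used here for frame extraction


def classify_frame(frame) -> float:
    """Hypothetical model call: returns the probability that a frame is harmful."""
    raise NotImplementedError


def screen_upload(path: str, threshold: float = 0.9, every_n: int = 30) -> bool:
    """Return True if the video passes screening, False if it should be held."""
    video = cv2.VideoCapture(path)
    index = 0
    try:
        while True:
            ok, frame = video.read()
            if not ok:
                break  # end of video
            # Score every Nth frame rather than all of them, to keep latency down.
            if index % every_n == 0 and classify_frame(frame) >= threshold:
                return False  # hold the upload for human review
            index += 1
    finally:
        video.release()
    return True
```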
‘Context-aware AI’
The start-up argues that existing technologies, such as those offered by AWS and Microsoft Azure, which rely on a combination of machine-assisted content moderation APIs and human review to detect unwanted images, can “fail to understand more subtle behaviours or signs, especially on video”.
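For reference, the machine-assisted moderation APIs the start-up is comparing itself against look roughly like the sketch below, which calls AWS Rekognition’s image moderation endpoint via boto3. The bucket and object names are placeholders, not real resources.

```python
# A minimal sketch of an existing machine-assisted moderation API of the kind
# the article mentions, using AWS Rekognition via boto3. Bucket, key and
# region are placeholder values.
import boto3

rekognition = boto3.client("rekognition", region_name="eu-west-1")

response = rekognition.detect_moderation_labels(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "upload.jpg"}},
    MinConfidence=80,  # only return labels the model is at least 80% confident in
)

# Each label carries a name (e.g. 'Explicit Nudity', 'Violence') and a
# confidence score; flagged images would typically be routed to human review.
for label in response["ModerationLabels"]:
    print(label["Name"], label["Confidence"])
```

Note that this kind of API operates on single images; this is part of the gap Unitary points to, since a still-frame check has no notion of the context around a longer video.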
Haco said that AI can do the job when it comes to shorter videos, but longer videos often still require human moderation. She also pointed out that it is often context that gives content its meaning.
She said: “We are tackling each of these core issues in order to achieve a technology that will, even in the near term, massively cut down on the level of human involvement required and one day achieve a much safer internet.”
As well as Entrepreneur First’s company builder programme, the AI start-up has taken part in the Nvidia Inception Program.