Australia alters CSAM detection rules after tech firms push back



These rules aim to tackle CSAM online, but changes have been made after critics said there were no safeguards to protect encryption.

Australia’s independent online safety regulator has amended upcoming online safety rules to protect encryption, after the original draft faced criticism from tech companies.

The rules aim to make online services do more to tackle child sexual abuse material (CSAM) and pro-terror content on their platforms. They will apply to services including apps, websites, online storage services and some services that “deploy or distribute generative AI models”.

Draft versions of two upcoming “industry standards” were released in November 2023 and relevant stakeholders were invited to comment. The drafts drew criticism from Big Tech firms, which were concerned that the rules would undermine end-to-end encryption.

These companies claimed the initial rules would give Australia’s online safety authorities the power to force them to compromise encryption in order to comply. Encrypted email service Proton went so far as to threaten legal action if the rules went ahead.

An open letter – signed by hundreds of individuals and organisations last December – urged Australia’s eSafety commissioner Julie Inman Grant to adjust the rules to protect end-to-end encryption. This letter claimed the draft rules would force service providers to “undermine the security and privacy of their services in order to comply”.

“Contrary to the goals of the standards, this will leave everyone less safe online,” the open letter read. “Proceeding with the standards as drafted would signal to other countries that online safety is somehow counterposed to privacy and security, when the opposite is true.”

The open letter also criticised the potential use of scanning technology to detect CSAM, describing it as “deeply flawed”. The signatories also raised concerns that this form of scanning could be expanded to other types of content in the future.

More clarity for encryption services

In response, Australia’s eSafety commissioner said the new rules will give “greater clarity” to operators of end-to-end encrypted services, as the standards state that service providers are not required to “break or weaken encryption”.

“eSafety also recognises these standards will apply to broad industry categories covering a range of services, and that they will require differing approaches to detecting and removing illegal content such as CSAM,” the regulator said. “To that end, no specific technologies or methods are prescribed in the standards.”

The Australian standards will go into effect six months after they are registered and after a 15-day disallowance period in the country’s parliament.

Protecting encryption

Debate around end-to-end encryption has been growing worldwide: governments want new rules to scan for CSAM to stop its spread online, while opponents of these measures have raised privacy and cybersecurity concerns.

Yesterday (20 June), the EU decided to delay a vote on a controversial draft law that could require messaging apps such as WhatsApp and Signal to compromise encryption to scan for CSAM. Various EU member states were expected to abstain from voting or to oppose the law.

Privacy concerns have also been raised about the UK’s Online Safety Act, which became law last October. While supporters claimed it would usher in a new era of internet safety, critics raised concerns about parts of the legislation that could compromise end-to-end encryption.

Last November, the European Commission was accused of “maladministration” for not sharing a list of the experts who helped draft its proposed regulation on detecting CSAM.

The Irish Council for Civil Liberties claimed at the time that “numerous experts” had warned the proposal was not technically feasible, as the relevant technology would not be mature enough within the next two to five years.

In February, the European Court of Human Rights ruled that requiring companies to create “backdoors” to encrypted communications for law enforcement violates human rights.


Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com