US FTC is cracking down on deceptive AI schemes

26 Sep 2024


The agency has announced several law enforcement actions against operations that use AI hype or sell AI tech that can be used in deceptive ways.

The US Federal Trade Commission (FTC) is targeting companies that have used AI to boost deceptive or unfair conduct that harms consumers.

As part of what it calls Operation AI Comply, the agency has announced five specific cases that expose AI-related deception, one of which is against UK-based DoNotPay, an online legal service that claims to offer a ‘robot lawyer’.

Other cases are against companies claiming AI can help consumers make money through online storefronts.

“Using AI tools to trick, mislead or defraud people is illegal,” said FTC chair Lina M Khan.

“The FTC’s enforcement actions make clear that there is no AI exemption from the laws on the books. By cracking down on unfair or deceptive practices in these markets, FTC is ensuring that honest businesses and innovators can get a fair shot and consumers are being protected.”

In the case against DoNotPay, the FTC said the company’s service “didn’t live up to the hype” and has ordered the company to stop misleading people and pay $193,000.

Ascend Ecom is a group of companies that, according to the FTC, used deceptive earnings claims to convince people to invest in ‘risk-free’ business opportunities supposedly powered by AI, but failed to honour money-back guarantees when things went sour.

Two other companies, Ecommerce Empire Builders and FBA Machine, similarly promised that customers could earn money by investing in online stores or business opportunities powered by AI. The FTC also claims that Ecommerce Empire Builders makes clients sign contracts to prevent them from writing negative reviews.

The fifth case involves Rytr, a US-based company that sells an AI writing tool for generating online reviews. The FTC’s complaint states that it is used to generate thousands of false reviews. Rytr has agreed to a proposed settlement prohibiting the company – or anyone working with it – from advertising or selling any service promoted for generating reviews.

The crackdown comes as AI deception continues to infiltrate much of the online world. A study published in July 2023 showed that advanced AI systems can trick people into believing false information more effectively than humans can.

As the US election draws near, AI deepfakes have also appeared across social media platforms and could prove highly effective in spreading misinformation.

These capabilities for deception, combined with the current wave of AI hype, mean that companies see an opportunity to use AI to supercharge their offerings, even if it amounts to no more than ‘AI washing’.

The FTC’s latest cases are just part of the agency’s work to combat AI-related issues in the marketplace from every angle. Last month, it banned the creation or purchase of fake reviews, including false AI-generated reviews.

“We’re checking to see whether products or services actually use AI as advertised and, if so, whether they work as marketers say they will,” it said in a statement.

“We’re examining whether AI and other automated tools are being used for fraud, deception, unfair manipulation or other harmful purposes.”


Jenny Darmody is the editor of Silicon Republic
