Fake calls using AI voice of Biden used to discourage US voters

7 Feb 2024

Joe Biden in 2020. Image: Gage Skidmore/Flickr (CC BY-SA 4.0 DEED)

Following an investigation, authorities identified Texas-based Life Corporation and an individual named Walter Monk as responsible for the calls made last month.

US authorities have identified the culprits behind fraudulent calls that used an AI-generated voice of president Joe Biden to urge people not to vote in an upcoming election.

In a statement released yesterday (6 February), the Office of the Attorney General at the New Hampshire Department of Justice said that it has identified the source of the robocalls received by many residents of the US state last month.

Soon after the calls urging residents not to vote in the upcoming New Hampshire presidential primary were made, the office launched an investigation in coordination with state and federal partners, including the Anti-Robocall Multistate Litigation Task Force.

The bipartisan task force is made up of 50 state attorneys general and the Federal Communications Commission Enforcement Bureau.

Following the investigation, the office identified Texas-based Life Corporation and an individual named Walter Monk as the culprits behind the calls.

“Ensuring public confidence in the electoral process is vital. AI-generated recordings used to deceive voters have the potential to have devastating effects on the democratic election process,” said attorney general John M Formella.

“The partnership and fast action in this matter sends a clear message that law enforcement, regulatory agencies and industry are staying vigilant and are working closely together to monitor and investigate any signs of AI being used maliciously to threaten our democratic process.”

US elections are always a sensitive time for tech regulators and law enforcement agencies because of the threat of using social media channels and emerging technologies to influence the outcome. A notable instance of disruption enabled by tech was the US Capitol riots in January 2021.

In October last year, Biden signed an executive order requiring private companies developing AI to report to the federal government about the many risks that their systems could pose, such as aiding other countries and terrorist groups in the creation of weapons of mass destruction.

“One thing is clear: to realise the promise of AI and avoid the risks, we need to govern this technology,” Biden said at the time. “There’s no other way around it, in my view. It must be governed.”

Last month, OpenAI – the company currently at the helm of innovation in generative artificial intelligence – announced new policies, including measures directed at preventing abuse of generative AI chatbots and image creators such as ChatGPT and Dall-E.

“We’re still working to understand how effective our tools might be for personalised persuasion. Until we know more, we don’t allow people to build applications for political campaigning and lobbying,” the company wrote at the time.



Vish Gain is a journalist with Silicon Republic

editorial@siliconrepublic.com