In the latest episode of For Tech’s Sake, UCC’s Dr Conor Linehan helps demystify deepfakes and explains how anyone can be fooled by them.
Deepfakes have become a growing concern in recent years and it’s no surprise, given the speed at which deep learning models and AI technology have evolved. These multimedia manipulations, which not so long ago could have taken days or even weeks to perfect, can now be spat out in a matter of minutes.
According to recently issued guidance from the US National Security Agency, the Federal Bureau of Investigation and the US Cybersecurity and Infrastructure Security Agency, the democratisation of these deepfake tools has made the list of top risks for 2023.
One of the key concerns about deepfakes is how they can affect our memories and how easily our brains can be fooled by what we see. To explore this area further, researchers at University College Cork carried out a study in which participants were shown real films alongside deepfake remakes that were never actually made, to see whether they would falsely remember those remakes existing.
The study observed an average false memory rate of 49pc, with many participants remembering the fake remake as better than the original film.
In the latest episode of For Tech’s Sake, one of the study’s researchers, Dr Conor Linehan, explained how our memories can be easily manipulated.
“The research on false memories tells us that people’s memory is unreliable anyway,” he said. “A lot of people have this impression that when an event happens to us, that our brain just encodes it like a video recording. That it gets stored in there, ready to be recalled … when actually that’s not the case. Each time you remember something, it kind of affects the memory itself.”
In addition to memories changing each time they are recalled, Linehan added that a lot of the time people can only “sort of remember things” and end up filling in the gaps.
And while almost half of the participants in the UCC study recalled false memories, a more surprising finding was that the deepfake videos were no more effective at creating those false memories than false text descriptions.
The study also suggested that, while deepfake technology is not uniquely placed to distort movie memories, most participants were uncomfortable with it being used to recast movies.
According to the study, “common concerns were disrespecting artistic integrity, disrupting the shared social experience of films and a discomfort at the control and options this technology would afford”.
This has become a particularly timely topic amid this summer’s Hollywood strikes, in which concerns were raised about the use of AI in future productions.
Last week, a deal was struck to end the writers’ strike, which included regulations stating that AI can’t be used to write or rewrite literary material and that AI-generated content cannot be considered source material.
Check out the full episode with Dr Conor Linehan and subscribe for more.