Are algorithms the real online influencers?


28 Jun 2024


The latest episode of For Tech’s Sake dives deep into the world of algorithms and AI recommender systems to find out how much power they really hold over us.

Most people are familiar with recommendation engines, at the very least the ones on streaming services and e-commerce sites that suggest TV shows or products you might like based on what you’ve looked at before.

The algorithms behind these engines wield real power, and while it might be helpful to know that if you liked Downton Abbey you may also like Bridgerton, they can have a much stronger influence on our lives without us even realising it.

In fact, data scientist Cathy O’Neil dubbed them Weapons of Math Destruction in her book of the same name back in 2016, and they’ve only become more sophisticated since then.

With the evolution of AI and much-needed regulation such as the EU AI Act, discussions are taking place around the world about how AI algorithms can go wrong when used without a human in the loop.

For example, the Dutch tax authority ruined thousands of lives after using an algorithm for years to flag suspected benefits fraud, penalising families on little more than the system’s risk indicators. And in the US, predictive policing algorithms have come under constant fire for racial profiling.

With all this in mind, For Tech’s Sake hosts Jenny Darmody and Elaine Burke spoke to Megan Nyhan, a PhD researcher at D-Real, where she is working on a framework for designing ethical and trustworthy AI recommender systems.

Nyhan explained the nuances behind these algorithms, such as collaborative filtering versus content-based filtering, implicit versus explicit signals, supervised versus unsupervised learning and black-box versus explainable AI, and how bias and polarisation end up coded in, creating echo chambers and feedback loops.
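
To make that first distinction concrete, here is a rough Python sketch (not from the episode) of the two approaches: collaborative filtering recommends what similar users liked, while content-based filtering matches item attributes to your past favourites. The ratings and tags below are invented purely for illustration.

```python
# Toy contrast between collaborative and content-based filtering.
# All data here is made up for illustration.
from collections import defaultdict

# Explicit ratings: user -> {item: rating out of 5}
ratings = {
    "alice": {"Downton Abbey": 5, "Bridgerton": 4},
    "bob":   {"Downton Abbey": 5, "The Crown": 5},
    "carol": {"Bridgerton": 2, "The Crown": 4},
}

# Hypothetical item tags used by the content-based approach
tags = {
    "Downton Abbey": {"period-drama", "british"},
    "Bridgerton":    {"period-drama", "romance"},
    "The Crown":     {"british", "biography"},
}

def collaborative_recommend(user):
    """Recommend an unseen item liked by users with overlapping taste."""
    scores = defaultdict(float)
    for other, their_ratings in ratings.items():
        if other == user:
            continue
        # Crude similarity: count of items both users rated highly
        overlap = sum(1 for i in ratings[user]
                      if i in their_ratings
                      and ratings[user][i] >= 4 and their_ratings[i] >= 4)
        for item, r in their_ratings.items():
            if item not in ratings[user]:
                scores[item] += overlap * r
    return max(scores, key=scores.get) if scores else None

def content_based_recommend(user):
    """Recommend the unseen item whose tags best match the user's favourites."""
    liked_tags = set()
    for item, r in ratings[user].items():
        if r >= 4:
            liked_tags |= tags[item]
    unseen = [i for i in tags if i not in ratings[user]]
    return max(unseen, key=lambda i: len(tags[i] & liked_tags), default=None)

print(collaborative_recommend("alice"))  # leans on Bob's taste -> "The Crown"
print(content_based_recommend("alice"))  # leans on shared tags -> "The Crown"
```

Real systems replace these crude similarity counts with learned models over millions of users, but the two philosophies, copying similar people versus matching item features, are the same.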

“[An echo chamber] only sends you content to reaffirm your beliefs because, in a sense, the more extremist and the more polarised you are in your view, the more predictable you are, because you’re more likely to engage with that content,” she said. “So, the way the recommender systems learn you as a user profiler and learn you as a person is by implicit and explicit feedback.”

Implicit feedback covers the subtle signals you give an algorithm to suggest you like a certain type of content, for example, always pausing your scroll through Instagram to watch a particular content creator. Explicit feedback, meanwhile, is the much clearer action you take to tell the algorithm your preferences, for example, actively following that creator.
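
As a rough illustration (again, not from the episode), here is how a recommender might weight those two kinds of signals when building a user profile. The topics, weights and update rule are all invented assumptions for the sketch.

```python
# Toy user profile updated from implicit and explicit signals.
# Weights and topics are invented for illustration.
from collections import defaultdict

profile = defaultdict(float)  # topic -> accumulated interest score

IMPLICIT_WEIGHT = 0.1  # weak but frequent: e.g. pausing on a video
EXPLICIT_WEIGHT = 1.0  # strong but rare: e.g. following a creator

def record_implicit(topic, dwell_seconds):
    """A pause on a piece of content nudges the profile a little."""
    profile[topic] += IMPLICIT_WEIGHT * dwell_seconds

def record_explicit(topic):
    """A follow or like is a clear statement of preference."""
    profile[topic] += EXPLICIT_WEIGHT

# Repeatedly lingering on cooking reels slowly tilts the profile...
for _ in range(5):
    record_implicit("cooking", dwell_seconds=8)
# ...while a single explicit follow of a politics account registers at once.
record_explicit("politics")

# The recommender then ranks topics by accumulated interest.
print(sorted(profile, key=profile.get, reverse=True))  # ['cooking', 'politics']
```

The point of the sketch is that you never have to tell the system what you like: enough small, implicit signals add up to a profile whether you intend it or not.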

Check out the latest episode of the season and subscribe to For Tech’s Sake wherever you get your podcasts. You can also become a Headstuff+ Community member to access bonus episodes of the show.
