What is Lensa AI and why is it angering artists?

13 Dec 2022

Image: © Djavan Rodriguez/Stock.adobe.com

Lensa AI has gone viral for its ability to create ‘magic avatars’ of users, but artists have raised ethical and copyright concerns.

A new app creating artistic renditions of people has gained a lot of traction online in recent weeks, shooting up the rankings in app stores.

Lensa AI is a photo and video editor that recently launched a ‘magic avatar’ feature. Powered by AI, this generates stylised portraits of users who submit images of themselves. The portraits can be styled in various ways, such as a sci-fi or fantasy rendering.

The app recently became one of the most downloaded free apps on both Apple’s App Store and the Google Play Store – although users have to pay to use the AI artwork feature.

While it may be taking social media by storm, using AI to generate images has raised red flags among artists, with some claiming that these models use stolen content to generate their images.

How does Lensa AI work?

The software behind these magic avatars stems from Stable Diffusion, one of the many text-to-image generators that have grown in popularity this year.

These AI models are trained on massive datasets and can combine different concepts, attributes and styles to create a new image, usually based on input provided by the user.
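As a rough illustration of the text-to-image technique described above – and not Prisma Labs’ own code – the publicly available Stable Diffusion model can be run with the open-source Hugging Face diffusers library. The model ID and prompt below are assumptions chosen for the example; Lensa’s magic avatars work from selfies rather than a typed prompt, but the generative engine is the same kind of diffusion model.

```python
# Minimal sketch: generating an image from a text prompt with Stable Diffusion
# via the open-source Hugging Face 'diffusers' library. This illustrates the
# general text-to-image technique only, not Lensa AI's implementation.
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint (assumed model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # use a GPU if one is available

# The model turns a text description into a new image by iteratively
# denoising random noise, guided by the prompt.
prompt = "a portrait of an astronaut in a sci-fi fantasy style"
image = pipe(prompt).images[0]
image.save("avatar.png")
```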

While the Lensa AI app is built on the Stable Diffusion model, which is free to use, users must pay to access the magic avatar feature.

For this to work, the user doesn’t submit text, but instead sends roughly 10 to 20 selfies along with a payment of $7.99, or $3.99 for those who sign up for a trial. The user then receives a batch of unique avatars created by the AI.

Figures from app analytics firm Sensor Tower suggest that Lensa AI has also become one of the top-grossing apps in the US.

What has gotten artists mad?

While the growth of AI-generated images has been praised by some online, artists have complained that the software undermines their work, with some claiming that these systems are built on stolen art.

Australian artist Kim Leutwyler told The Guardian that Lensa AI was replicating distinct styles that mimic the work of other artists. The company behind the app, Prisma Labs, has denied this allegation.

In a Twitter thread last week, Prisma said that “AI won’t replace artists but can become a great assisting tool”.

Multiple artists have previously accused Stable Diffusion of using their work to train the AI model. Earlier this year, an analysis of some of the data used to train the text-to-image generator suggested that some of the training images may be copyright protected.

Of the 12m images analysed, around 47pc were sourced from only 100 domains, with the largest number of images (around 8.5pc) coming from Pinterest. The analysis also found that images from famous artists were included, along with images of celebrities and political figures.

Is it safe to upload your photos?

Many people have been submitting batches of their own photos to Lensa AI in order to get a generated image. But a recent analysis by Wired warned users to consider the potential privacy issues before they share their data with the app.

The app says in its privacy policy that it deletes “all metadata that may be associated with your photos and videos” before they are stored in the app’s systems.

Like many apps, Lensa AI also collects certain data from its users about their online activities “and across third-party websites or other online services”. Users are able to opt out of this tracking on iOS devices by emailing the company.

“The information we collect automatically under this privacy policy may include your personal data,” the policy says. “It helps us to improve Lensa and to deliver better service, including but not limited to enabling us to estimate our audience size and usage patterns and recognise when you use Lensa.”

There have also been warnings that the AI can produce sexual or offensive content without users’ consent. MIT Technology Review reporter Melissa Heikkilä said the AI generator returned various images in which she was topless or in “overtly sexualised poses”.

She hinted at potential biases within the AI generator, as her colleagues of different genders and races did not receive as many sexual images.


Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com