Is the Pixel 9 AI-powered image editor too realistic?

23 Aug 2024

Image: Google

The images being shared by early users of the Pixel 9 ‘Reimagine’ feature are very impressive, but will people be able to tell when a real image has been manipulated by AI?

AI image generators have been growing in popularity and power over the years, but an upgraded tool from Google is showing just how powerful – and potentially dangerous – these features have become.

Those with early access to the Pixel 9 smartphone have been showcasing the power and realism of the latest version of Magic Editor, a feature that lets users edit photos with the power of AI.

This isn’t exactly a new concept – AI image editors are available online and powerful text-to-image generators have been showing their capabilities for years.

The Magic Editor is a combination of both of these tools and is being praised for its ability to quickly create realistic edits to existing images. Users can simply tap a section of the image and then type in what they want added – the Magic Editor does the rest.

This is thanks to the “Reimagine” feature for the Pixel 9, which uses the power of Google’s flagship AI model Gemini to accurately interpret user requests.

The examples being shared online are very impressive. Adrian Weckler of the Irish Independent shared various examples of AI-created additions being easily added into real images.

But the power of this feature is also a concern, as it shows just how difficult it is becoming to recognise an AI-generated – or AI-enhanced – image compared to a real one. These types of tools are already being used to create deepfakes – realistic images of people or events that are actually fake.

A lack of guardrails?

The power of this Reimagine feature becomes even more of an issue with some of the more controversial examples people have shared. The tool can do more than add pleasant elements to an image – it can also be used to insert disturbing ones.

An example from The Verge shows a regular image of a street being ‘enhanced’ with the wreckage of a bike and a car, complete with appropriate lighting and shadows to blend into the image better.

This report said it was easy to add car wrecks, smoke bombs and drug paraphernalia to various images. This presents serious risks for the future. For example, an image of someone’s room can be adjusted to add contraband, or a street can be shown to appear more dangerous than it actually is. There are various ways this type of tool can be abused.

Google isn’t the only company dealing with the risk of people abusing its AI-powered image tools. Grok’s recently unveiled text-to-image generator was shown to have few guardrails on launch, letting users post everything from cartoon characters holding assault rifles to US presidential candidates committing acts of terrorism.

Imagen 3, Google’s latest text-to-image generator, has also been shown to have guardrail issues, letting users generate images that resemble copyright-protected characters such as Sonic the Hedgehog, Mario and Mickey Mouse.

A Google spokesperson told The Verge that it has clear policies and terms of service on what kinds of content are allowed and that it has guardrails to “prevent abuse”.

“At times, some prompts can challenge these tools’ guardrails and we remain committed to continually enhancing and refining the safeguards we have in place,” the spokesperson said.

Time will tell if Google manages to improve its guardrails before the Pixel 9 smartphone becomes generally available. But the rapid evolution of these AI systems suggests this problem will keep on growing as the technology becomes more advanced, and the influence of AI may become harder to spot in images.
Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com