The AI system can create multiple versions of an image from a simple text description, using a process called diffusion.
Researchers at OpenAI have built a new AI system that can create realistic images and artworks from simple text descriptions.
OpenAI said the system – named DALL-E – is able to understand the relationship between an image and words used to describe it.
“It uses a process called ‘diffusion’, which starts with a pattern of random dots and gradually alters that pattern towards an image when it recognises specific aspects of that image,” the AI company said on its website.
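For readers curious what that description means in practice, here is a toy sketch of the idea, assuming nothing about DALL-E 2's actual architecture: start from a pattern of random dots and repeatedly nudge it towards an image. In the real system the guidance comes from a trained neural network conditioned on the text prompt; here a fixed target image stands in for it.

```python
import numpy as np

# Toy illustration of diffusion-style generation. This is NOT OpenAI's
# implementation: the fixed "target" stands in for the guidance a
# trained, text-conditioned neural network would provide.
rng = np.random.default_rng(0)
target = rng.random((64, 64))       # stand-in for the image the text describes
image = rng.normal(size=(64, 64))   # the initial pattern of random dots

steps = 50
for t in range(steps):
    guidance = target - image               # stand-in for the model's denoising step
    image = image + guidance / (steps - t)  # gradually alter the pattern towards the image

print(abs(image - target).mean())   # approaches 0 as the noise is removed
```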
OpenAI created the first version of DALL-E last year. But its newly unveiled second version “generates more realistic and accurate images with four times greater resolution”.
The website shows various examples of what DALL-E 2 is able to create, and how these creations can change through simple adjustments in the text.
Once a description is added, the AI system can create multiple images based on how it interprets the text, combining different concepts, attributes and styles.
One user, @BecomingCritter, shared on Twitter a batch of images created by DALL-E 2, including results for the prompt “teddy bears working on new AI research on the moon in the 1980s”.
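DALL-E 2 itself was invite-only when this article was written, but as a rough illustration of prompt-driven generation, here is a minimal sketch using OpenAI's later public Images API (the `openai` Python package). The model name, parameters and `OPENAI_API_KEY` environment variable are assumptions about that later API, not something described in this article.

```python
# Minimal sketch, assuming OpenAI's later public Images API and an
# OPENAI_API_KEY environment variable; DALL-E 2 was invite-only at
# the time this article was written.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.images.generate(
    model="dall-e-2",
    prompt="teddy bears working on new AI research on the moon in the 1980s",
    n=4,              # request several interpretations of the same text
    size="512x512",
)

for item in response.data:
    print(item.url)   # each URL points to one generated image
```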
The AI can take an original image and make realistic edits. Using a natural language caption, it can add or remove specific elements while taking factors such as shadows and textures into account.
It can also take an existing image and create multiple variations that are inspired by the original.
In an OpenAI study, evaluators preferred the new version of DALL-E over the original almost 89pc of the time for photorealism and around 71pc of the time for caption matching.
Concerns about misuse
While the abilities of DALL-E are impressive, concerns have been raised that this sort of technology could help people spread disinformation online through the use of authentic-looking fake images.
“You could use it for good things, but certainly you could use it for all sorts of other crazy, worrying applications, and that includes deepfakes,” Arizona State University Prof Subbarao Kambhampati told The New York Times.
OpenAI said DALL-E 2 is not yet available to the public, as the company is testing its limitations and capabilities with select users in order to “develop and deploy AI responsibly”.
Some of the safety mitigations OpenAI said it has worked on include minimising the system's exposure to explicit content in its training data, to prevent it from generating offensive images, and adding filters to identify text prompts and uploaded images that could violate its policies.