The company said it has built ‘new detection and response techniques’ to stop misuse now that the faces of real people can be used in the AI image generator.
OpenAI is now letting users upload and edit people’s faces with its advanced text-to-image generator DALL-E 2.
Previously, DALL-E 2 would reject image uploads that contained realistic faces or attempted to imitate public figures, such as celebrities or politicians.
This was done to prevent the system from being used to create deepfakes: fabricated images of real people, often designed to make it look like a person has done something they have not.
In an email to DALL-E users, seen by TechCrunch, OpenAI said it has built “new detection and response techniques to stop misuse”. The company added that it has received requests from various testers for the ability to upload and edit faces.
“A reconstructive surgeon told us that he’d been using DALL-E to help his patients visualise results,” OpenAI said in the email. “And filmmakers have told us that they want to be able to edit images of scenes with people to help speed up their creative processes.”
Concerns have been raised about text-to-image models like DALL-E being used to spread disinformation online. When OpenAI revealed the latest version of the AI model earlier this year, it was unavailable to the public while its limitations were tested.
Arizona State University’s Prof Subbarao Kambhampati told The New York Times that the technology could be used for “crazy, worrying applications, and that includes deepfakes”.
The text-to-image generator remains in beta, but its user base has been growing since OpenAI gave more people early access in July. At the end of August, OpenAI said more than 1m people were using DALL-E.
Other text-to-image models have faced issues with misuse in recent months. Stability AI’s Stable Diffusion model, which was leaked on 4chan, was used to generate pornographic material, including images of nude celebrities, TechCrunch reported.
Deepfakes can also be used by cybercriminals to attack and infiltrate organisations. In a VMware security report released last month, two out of three respondents said they had seen malicious deepfakes used as part of cyberattacks.
The rise of text-to-image AI
Regardless of the benefits and risks of this technology, the text-to-image market has grown more crowded this year.
Google Research revealed its own text-to-image generator called Imagen in May. The Google team behind the model said it had an “unprecedented degree of photorealism” and a deep level of language understanding.
Meta entered the text-to-image arena in July, when it revealed its own model called Make-A-Scene. Meta said this system accepts rough sketches from the user to direct the AI before the final image is created.
A publicly accessible text-to-image generator called DALL-E Mini also garnered a lot of attention online earlier this year. Despite the similar name, this model was not created by OpenAI.