OpenAI opens developer access to GPT-3 text generator

19 Nov 2021


GPT-3, a powerful AI model that generates human-like text, has many applications – and potential misuses.

Developers worldwide now have access to one of the most sophisticated text-generating models, GPT-3, created by San Francisco-based AI start-up OpenAI.

GPT-3 is a natural language processing (NLP) program that can generate human-like text in response to prompts. Trained on more than half a trillion words from the internet, the 175bn-parameter model has many applications ranging from customer support to copywriting.

After months of testing in private beta mode with developers on a waitlist, OpenAI announced yesterday (18 November) that it was removing the waitlist and giving developers in supported countries, including Ireland, access to the GPT-3 API to experiment with new applications.

“Tens of thousands of developers are already taking advantage of powerful AI models through our platform,” OpenAI wrote in a blog post. “By opening access to these models via an easy-to-use API, more developers will find creative ways to apply AI to a large number of useful applications.”
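For illustration, a minimal sketch of what a GPT-3 API call from Python might look like, assuming the openai client library as it existed at the time and an API key from the developer dashboard; the engine name, prompt and parameter values here are placeholder assumptions, not details from OpenAI's announcement.

import openai

# Authenticate with an API key obtained from the OpenAI dashboard (assumed placeholder).
openai.api_key = "YOUR_API_KEY"

# Ask a GPT-3 engine to complete a prompt; "davinci" is used here as an assumed
# engine name, and max_tokens/temperature simply bound length and randomness.
response = openai.Completion.create(
    engine="davinci",
    prompt="Write a short product description for a reusable coffee cup:",
    max_tokens=60,
    temperature=0.7,
)

# The generated text is returned in the first choice of the response object.
print(response.choices[0].text.strip())

In this kind of request, the prompt supplies the context and the model returns a continuation, which is how use cases such as customer support replies or copywriting drafts are typically built on top of the API.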

OpenAI said it was able to make GPT-3 public after adding safety features to mitigate abuse. It said the API can’t be used to generate hate speech and content will be monitored to make sure it adheres to other guidelines.

“To ensure API-backed applications are built responsibly, we provide tools and help developers use best practices so they can bring their applications to production quickly and safely,” OpenAI wrote, adding that it is also continuously updating its usage and content guidelines.

Last month, Microsoft and Nvidia created what they claim is “the largest and the most powerful monolithic transformer language model trained to date”. The Megatron-Turing Natural Language Generation (MT-NLG) model has 530bn parameters – three times as many as GPT-3.

In September 2020, GPT-3 made headlines, literally, after it wrote an op-ed for the Guardian on “why humans have nothing to fear from AI”, which was based on eight different essays it generated after being fed prompts written by the Guardian.

“Editing GPT-3’s op-ed was no different to editing a human op-ed,” the Guardian wrote in its editor’s note. “We cut lines and paragraphs and rearranged the order of them in some places. Overall, it took less time to edit than many human op-eds.”


Vish Gain was a journalist with Silicon Republic
