OpenAI’s latest AI text generator GPT-3 amazes early adopters

Artificial intelligence research outfit OpenAI Inc. recently made GPT-3, the latest version of its general-purpose natural language processing model, available in private beta, and its capabilities are astounding early testers.

GPT-3 is the third generation of OpenAI’s Generative Pretrained Transformer, a general-purpose language algorithm that uses machine learning to translate text, answer questions and predictively write text. It works by analyzing a sequence of words or other text, then expanding on those examples to produce entirely original output, such as a full article written in the same style.
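To make the idea concrete, here is a deliberately tiny sketch of next-word prediction, the task at the heart of the model. It uses a toy bigram table rather than anything resembling GPT-3’s transformer, and the sample corpus is invented, but the loop of predicting a plausible next word and appending it is the same one GPT-3 performs at vastly greater scale:

```python
import random
from collections import defaultdict

# Toy illustration only: GPT-3 is a 175-billion-parameter transformer,
# not a bigram table, but the core task is the same: given the text so
# far, predict a plausible next word, append it, and repeat.
def train_bigrams(corpus: str) -> dict:
    model = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model: dict, seed: str, length: int = 20) -> str:
    word, output = seed, [seed]
    for _ in range(length):
        candidates = model.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # sample a predicted next word
        output.append(word)
    return " ".join(output)

corpus = "the model reads the text and predicts the next word in the text"
print(generate(train_bigrams(corpus), seed="the"))
```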

The algorithm’s predecessor, GPT-2, had already proved somewhat controversial because of its ability to create extremely realistic and coherent “fake news” articles from something as simple as an opening sentence. The potential for misuse was such that OpenAI initially declined to make the full model publicly available. Now, with the release of GPT-3, the approach has become dramatically more powerful.

After originally publishing its GPT-3 research in May, OpenAI gave select members of the public access to the model last week via an API. And over the past few days, a number of samples of text generated by GPT-3 have begun circulating widely on social media.
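The beta itself is a simple text-in, text-out API: send a prompt, get back a continuation. A minimal sketch of a call, assuming the beta’s Python client and its Completion endpoint (the engine name, prompt and parameters here are illustrative rather than taken from any of the circulating demos):

```python
import openai

openai.api_key = "YOUR_API_KEY"  # issued with a beta invitation

# Ask the model to continue a prompt; "davinci" is assumed here to be
# the name of the largest available engine during the beta.
response = openai.Completion.create(
    engine="davinci",
    prompt="Once the board meeting began,",
    max_tokens=100,
    temperature=0.7,  # higher values yield more varied continuations
)

print(response.choices[0].text)
```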

One of the most interesting examples comes from Founders Fund Principal Delian Asparouhov, formerly a partner at Khosla Ventures, who fed the GPT-3 algorithm half of an investment memo he had written and posted on his company website.

Asparouhov then gave GPT-3 half of an essay on how to run effective board meetings.

In both examples, GPT-3 generated not just coherent additional paragraphs of text, but also followed the prior formatting closely enough to be almost indistinguishable from the original, human-written text.

GPT-3 is so good at what it does that it can deceive people on almost any topic it’s given, even if that topic happens to be writing about itself. Take the example of Zeppelin Solutions GmbH Chief Technology Officer Manuel Araoz, who used GPT-3 to create a complex article about a faux experiment on the popular Bitcointalk forum, using only a basic prompt as a guideline.

The article, “OpenAI’s GPT-3 may be the biggest thing since bitcoin,” describes how GPT-3 deceived Bitcointalk forum members into believing its comments were genuine. At several points, the text also lists possible use cases for language prediction models, noting that they could be used for “mock news, ‘researched journalism,’ advertising, politics and propaganda.”

The text was nearly flawless: the only giveaways were a missing table and several screenshots that were referenced but never appeared. Araoz said the text was generated using just a title, a handful of tags and this short summary:

“I share my early experiments with OpenAI’s new language prediction model (GPT-3) beta. I explain why I think GPT-3 has disruptive potential comparable to that of blockchain technology.”
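In other words, the whole article was extrapolated from little more than metadata. Araoz has not published his exact prompt formatting, so the following reconstruction is an assumption about what such a prompt might look like:

```python
# Hypothetical layout of a title-plus-tags-plus-summary prompt; Araoz's
# actual formatting is not shown in his post, so this is a guess.
prompt = """Title: OpenAI's GPT-3 may be the biggest thing since bitcoin
Tags: gpt-3, openai, machine-learning
Summary: I share my early experiments with OpenAI's new language
prediction model (GPT-3) beta. I explain why I think GPT-3 has
disruptive potential comparable to that of blockchain technology.

Article:
"""
# The same Completion call shown earlier would then generate the body.
```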

Araoz put GPT-3 to the test in several other ways, using it to make complex texts more understandable, to write Spanish-language poetry in the style of Borges and to compose music in ABC notation.
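Tasks like these are steered entirely through the prompt rather than through any task-specific training. For the text-simplification case, a few-shot prompt along the following lines would be typical; the example pairs here are invented for illustration and are not Araoz’s:

```python
# Invented few-shot example for the "make complex text understandable"
# use case; the model picks up the rewrite pattern from the pairs and
# continues it for the final passage.
simplify_prompt = """Rewrite each passage in plain language.

Original: The ramifications of the fiscal stimulus were not
immediately discernible to market participants.
Plain: Traders could not tell right away what the stimulus would do.

Original: The aforementioned provisions shall be deemed inoperative.
Plain: Those rules no longer apply.

Original: {complex_text}
Plain:"""

prompt = simplify_prompt.format(
    complex_text="Utilization of the API is contingent upon approval."
)
```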

Another tester, Debuild.co founder Sharif Shameem, used GPT-3 to write JSX code from a basic description of a website layout.
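The demo appears to follow the same few-shot pattern: show the model a couple of description-to-markup pairs and let it continue. The sketch below is a guess at what such a prompt could look like; the pairs, formatting and expected output are assumptions, not the internals of Shameem’s actual demo:

```python
# Illustrative description-to-JSX prompt; Shameem's Debuild.co demo
# presumably uses its own prompt and post-processing under the hood.
jsx_prompt = """description: an email input with a "Subscribe" button below it
code: <div><input type="email" placeholder="Email" /><button>Subscribe</button></div>

description: a large centered heading that says "Welcome"
code:"""
# Completing this prompt should yield something like:
# <h1 style={{ textAlign: "center" }}>Welcome</h1>
```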

GPT-3 appears to blow away the capabilities of its predecessor, thanks in part to its 175 billion learning parameters, which allow it to handle almost any language task it’s assigned. That makes it an order of magnitude larger than the next-biggest language model, Microsoft Corp.’s Turing-NLG algorithm, which has just 17 billion parameters.

OpenAI is providing access to the GPT-3 API by invitation only, and there is a long waiting list for the paid version, which should be released in about two months.