If you want a glimpse of the future, check out how developers are using GPT-3.

This natural language processor was trained with roughly ten times more parameters than its most sophisticated rival, and it can answer questions and write astoundingly well. Creative professionals everywhere, from top coders to professional writers, marvel at what GPT-3 can produce even now, in its relative infancy.

Yesterday, New York Times tech columnist Farhad Manjoo wrote that the short glimpse the general public has taken of GPT-3 “is at once amazing, spooky, humbling, and more than a little terrifying. GPT-3 is capable of generating entirely original, coherent, and sometimes even factual prose. And not just prose — it can write poetry, dialogue, memes, computer code, and who knows what else.” Manjoo speculated on whether a similar but more advanced AI might replace him someday.

On the other hand, a recent Technology Review article describes the AI as “shockingly good – and completely mindless.” After describing some of the GPT-3 highlights the public has seen so far, it cautions, “For one thing, the AI still makes ridiculous howlers that reveal a total lack of common sense. But even its successes have a lack of depth to them, reading more like cut-and-paste jobs than original compositions.”

Wired noted in a story last week, “GPT-3 was built by directing machine-learning algorithms to study the statistical patterns in almost a trillion words collected from the web and digitized books. The system memorized the forms of countless genres and situations, from C++ tutorials to sports writing. It uses its digest of that immense corpus to respond to a text prompt by generating new text with similar statistical patterns. The results can be technically impressive, and also fun or thought-provoking, as the poems, code, and other experiments attest.” But the article also stated that GPT-3 “often spews contradictions or nonsense, because its statistical word-stringing is not guided by any intent or a coherent understanding of reality.”

GPT-3 is the latest iteration of the language-processing machine-learning program from OpenAI, an enterprise funded in part by Elon Musk, and its training is orders of magnitude more complex than either its predecessor, GPT-2, or its closest competitor. The program is currently in a controlled beta test in which whitelisted programmers can make requests and run projects on the AI. According to Technology Review, “For now, OpenAI wants outside developers to help it explore what GPT-3 can do, but it plans to turn the tool into a commercial product later this year, offering businesses a paid-for subscription to the AI via the cloud.”
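To give a sense of what that beta access looks like in practice, here is a minimal sketch of a completion request using OpenAI’s Python client. The engine name, parameters, and prompt are illustrative assumptions rather than recommended settings, and running it requires an approved API key.

    import openai  # OpenAI's Python client, available to whitelisted beta testers

    openai.api_key = "YOUR_API_KEY"  # issued when OpenAI approves your application

    # Ask the model to continue a text prompt. The engine name and the
    # parameters here are illustrative assumptions, not documented defaults.
    response = openai.Completion.create(
        engine="davinci",
        prompt="Explain in one sentence what a natural language processor does:",
        max_tokens=60,
        temperature=0.7,
    )

    print(response.choices[0].text.strip())

The model simply continues the text it is given, so everything interesting comes down to what you put in the prompt.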

GPT-3 provides a staggering glimpse of what the future can be. Show the AI a few examples of a simple computer task and it picks up the pattern, so it knows how to create custom buttons on your webpage. Developer Sharif Shameem built a layout generator with GPT-3 so he could simply ask for a button that looks like a watermelon and the AI would give him one.
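Shameem hasn’t published his exact prompts here, but the general pattern is well known: show the model a couple of description-to-markup pairs, then ask for a new one. The sketch below is a hypothetical reconstruction in that spirit; the example pairs, the JSX-style output, and the API parameters are all assumptions.

    import openai

    openai.api_key = "YOUR_API_KEY"  # beta access key, as in the sketch above

    # Hypothetical few-shot prompt: two worked examples teach the model the
    # "description -> code" pattern before we ask for the watermelon button.
    prompt = (
        "description: a red button that says Subscribe\n"
        "code: <button style={{background: 'red', color: 'white'}}>Subscribe</button>\n"
        "\n"
        "description: a large heading that says Welcome\n"
        "code: <h1 style={{fontSize: '48px'}}>Welcome</h1>\n"
        "\n"
        "description: a button that looks like a watermelon\n"
        "code:"
    )

    response = openai.Completion.create(
        engine="davinci",   # engine name is an assumption
        prompt=prompt,
        max_tokens=80,
        temperature=0.2,
        stop=["\n\n"],      # stop once the single snippet is complete
    )

    print(response.choices[0].text.strip())

At a low temperature the completion tends to stay close to the pattern the examples establish, which is why a couple of good examples go a long way.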

This outcome shouldn’t surprise anyone: a good natural language processor develops the ability to translate natural English into action or into another language, and computer code is little more than an expression of intent in a language the computer can read. So translating simple English instructions into Python should not be impossible for a sophisticated AI that has read plenty of Python manuals.
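For a concrete sense of the “translation” involved, an instruction like “write a function that returns only the even numbers in a list” maps onto just a few lines of Python. This is a hand-written illustration of the target, not actual GPT-3 output.

    # Instruction: "Write a function that returns only the even numbers in a list."
    # Hand-written illustration of the Python this translates to (not GPT-3 output).
    def even_numbers(values):
        """Return the even integers from the input, preserving their order."""
        return [v for v in values if v % 2 == 0]

    print(even_numbers([1, 2, 3, 4, 5, 6]))  # prints [2, 4, 6]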

Of course, some of the coding community is freaking out at the prospect of being replaced by this AI. Even legendary coder John Carmack, who pioneered 3D computer graphics in early video games like Doom and is now consulting CTO at Oculus VR, was unnerved: “The recent, almost accidental, discovery that GPT-3 can sort of write code does generate a slight shiver.”

OK, so GPT-3 has been trained on countless coding manuals and instruction sets. But freaketh not: while GPT-3 can sometimes generate usable code, it still has no common sense to apply, so non-technical types can’t rely on it to produce machine-readable code that performs sophisticated tasks.

Any of you who have taken a coding course know that coaxing the right things out of a computer requires coders to be literal and precise in ways that are difficult for an AI to approximate. So a non-coder is likely to be frustrated with AI-generated code at this point. If anything, GPT-3 is a step toward easier coding, one that still requires a practiced software engineer to craft the right prompts for the AI to produce usable code quickly.
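What “the right prompts” means in practice is the same literal-mindedness coders already cultivate: spelling out inputs, outputs, and edge cases. A hypothetical before-and-after:

    # Hypothetical illustration of why prompt precision matters.

    # A non-coder's request leaves the input format, ordering, and edge cases unstated:
    vague_prompt = "Write code that sorts my data."

    # A practiced engineer writes the prompt the way a programmer writes a spec,
    # which is what makes the generated code likely to be usable:
    precise_prompt = (
        "Write a Python function sort_records(records) that takes a list of "
        "dictionaries with keys 'name' and 'age', returns a new list sorted by "
        "'age' ascending, breaks ties alphabetically by 'name', and returns an "
        "empty list when records is empty."
    )

The second prompt is practically a specification, and writing specifications is still an engineer’s job.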

I talked about the hype cycle in one of last week's posts, and while GPT-3 is worth the hype as an advance in AI training, where more (the model has 175 billion parameters) is clearly better, it is still only an impressive step in a larger process. OpenAI and its competitors will find useful applications for all of this power and continue to work toward a more general intelligence.

There are many reasons to be wary. Like others before it, this AI picks up the biases in its training data, and it was trained on the internet, so expect some whoppers. Wired observed, “Facebook’s head of AI accused the service of being ‘unsafe’ and tweeted screenshots from a website that generates tweets using GPT-3 that suggested the system associates Jews with a love of money and women with a poor sense of direction.” GPT-3 has not been trained to avoid offensive assumptions.

But the AI still has the power to astonish and may enable some incredible applications. It hasn’t even been officially released as a product yet. Watch this space. As developers, writers, business executives, and artists learn to do more amazing things with GPT-3 (and GPT-4 and GPT-5), we will continue to report on it.