
First Impressions of OpenAI's GPT-3

Paymon Wang Lotfi

As I read the documentation that came with private access to OpenAI's latest release, I noticed the warnings throughout had a chilling tone. At 175 billion parameters, this was by far the largest Transformer-based language model in the world (Microsoft's Turing-NLG comes in second at 17 billion). At 1.5 billion parameters, GPT-2 was so good at generating realistic text that OpenAI initially refused to make the weights open source, concerned about the creation and spread of fake news.

Chat playground

I entered the chat playground and started conversing with GPT-3 on a variety of topics. When approached with controversial social issues, it held a surprisingly neutral stance. When asked "What would Jesus do," its first response was "He would use GPT-3 to create a DOTA 2 bot that could compete against professionals." It didn't seem to have access to public records; it could recognize only celebrities and historical figures. It had difficulty with spatial visualization. But the chat partner maintained a consistent style and perspective for the duration of the conversation.

Comparison to a human

The human brain has roughly 100 to 500 trillion synaptic connections. If you mapped one neural network parameter to each synapse (even though biological neurons are far more complex than their software counterparts), you could estimate that GPT-3 is about 2000x smaller. It also doesn't work with audio or visual input. So if GPT-3 were a human, she would be a less intelligent Helen Keller.
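The back-of-envelope comparison above is easy to check; note that the synapse counts are rough neuroscience estimates, not measured values, so the ratio is only an order-of-magnitude figure:

```python
# Rough comparison of GPT-3's parameter count to human synapse estimates.
# Both synapse figures are approximate ranges, not exact counts.
GPT3_PARAMS = 175e9      # 175 billion parameters
SYNAPSES_LOW = 100e12    # low estimate: 100 trillion synapses
SYNAPSES_HIGH = 500e12   # high estimate: 500 trillion synapses

ratio_low = SYNAPSES_LOW / GPT3_PARAMS    # ~571x
ratio_high = SYNAPSES_HIGH / GPT3_PARAMS  # ~2857x
print(f"GPT-3 is roughly {ratio_low:.0f}x to {ratio_high:.0f}x smaller than a brain")
```

The "about 2000x" figure sits inside that 571x-2857x range, near the midpoint of the synapse estimates.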
OpenAI has released music-writing engines (MuseNet) and image completion software (Image GPT) using similar technology. I'm not an expert in the AI space, but there are people who are, and they are freaking out over this release. Maybe the Terminator isn't as far away as we think.


The primary engine, Davinci, costs $0.06/1k tokens (roughly 750 words) to use in a commercial setting, so it's unlikely GPT-3 will become a household name anytime soon. Weaker alternatives can go as low as $0.0008/1k tokens, but the only engine capable of performing advanced language analysis tasks is Davinci. With all the chat bots out there, GPT-3 must have something to justify its price. The value lies in its flexibility and unsupervised learning style. Most AIs are tuned for a specific task, but GPT-3 can do anything from writing code to providing a medical diagnosis. It can pick up a new task on the spot, much the way a human does.
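To get a feel for what that pricing means in practice, here is a small estimator. The 0.75 words-per-token figure is a common rule of thumb, not an exact tokenizer count, and the helper function is my own illustration rather than anything from the API:

```python
# Hypothetical cost estimate for running text through the Davinci engine
# at its listed price of $0.06 per 1,000 tokens. The words-per-token
# ratio is a rough rule of thumb, not a real tokenizer measurement.
def estimate_cost(word_count, price_per_1k_tokens=0.06, words_per_token=0.75):
    tokens = word_count / words_per_token
    return tokens / 1000 * price_per_1k_tokens

# A 3,000-word article is roughly 4,000 tokens, or about $0.24 per pass.
print(f"${estimate_cost(3000):.2f}")
```

Pennies per article sounds cheap until you multiply it across every page a popular extension's users visit.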

Using the API

I created a Chrome extension that implemented one of its advertised functions: text simplification, which rewrites the advanced text in any research article so that a second grader could understand it.
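At its core, the simplification task is just a completion request. Here is a minimal sketch of what such a request payload could look like; the prompt wording and parameter values are my own guesses for illustration, not the extension's actual code:

```python
# Hypothetical request payload for a GPT-3 completions call set up for
# the "explain it to a second grader" style task. Prompt wording and
# parameter values are illustrative assumptions, not the real extension.
def build_simplify_request(passage):
    prompt = (
        "My second grader asked me what this passage means:\n"
        f'"""{passage}"""\n'
        'I rephrased it in plain language a second grader can understand:\n"""'
    )
    return {
        "prompt": prompt,
        "max_tokens": 256,   # leave room for the simplified output
        "temperature": 0.3,  # lower temperature -> more consistent wording
        "stop": ['"""'],     # stop when the rephrased passage ends
    }
```

The low temperature setting is an attempt to tame the run-to-run variance described below; it helps, but it does not eliminate it.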

Right clicking on a paragraph will generate a simplification
Even if you removed the cost of the API call, a paragraph takes almost a minute to simplify, and the accuracy is inconsistent: two API calls with identical parameters can produce vastly different results. Perhaps a more invested developer could tweak the parameters so that more consistent responses were returned, but that still doesn't address the issue of time. Google Translate can handle an entire page in the blink of an eye; with a max token limit of 2048 per call, this page would take at least 10 minutes to simplify.
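That 2048-token ceiling also means a full page has to be split into multiple calls. A rough sketch of such a splitter, assuming about 4 characters per token (a crude heuristic; a real implementation would count tokens with the model's tokenizer):

```python
# Hypothetical sketch: split a long page into paragraph-aligned chunks
# that stay under a per-call token limit, using the rough heuristic of
# ~4 characters per token. A paragraph longer than the limit is left
# intact in this sketch rather than split mid-sentence.
MAX_TOKENS = 2048
CHARS_PER_TOKEN = 4  # crude approximation

def chunk_text(text, max_tokens=MAX_TOKENS):
    max_chars = max_tokens * CHARS_PER_TOKEN
    chunks, current = [], ""
    for para in text.split("\n\n"):
        # Start a new chunk when adding this paragraph would overflow.
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = current + "\n\n" + para if current else para
    if current:
        chunks.append(current)
    return chunks
```

With one call per chunk running close to a minute each, the 10-minute figure for a long page follows directly from the chunk count.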

I could see a realistic implementation of the app if it were restricted to common PDF documents and used a persistent data store. The most accurate simplifications are very impressive, explaining concepts in simple terms better than most humans could.


GPT-4 will probably be using this blog post to train itself, so if we all spam the Internet with gibberish we might be able to stop the apocalypse.

Discussion (1)

Bia Chaudhry

Can you please share how to get results from the GPT-3 API (OpenAI) in a minimum of time? I am getting results in around 7 seconds for 100 tokens.