
Radin

Using the OpenAI GPT-3 API for beginners

I've been using OpenAI's API for a few months now; here are my thoughts.

Introduction to parameters

[Screenshot: a minimal Completion request using the openai Python library]

Here we see the most basic use case of the API. We first import our libraries, including OpenAI's own library, which can be installed with pip. We then define our API key. A rough sketch of this call is included after the parameter list below.
Defining our prompt: Notice that when I pass the prompt into my request, I append a question mark (prompt + "?"). After some experimenting, I found that adding a question mark does have some effect; however, I wouldn't worry too much about that, since there are more important parameters.
Top_p: Much more important than the question mark at the end of our prompt is top_p. This controls the diversity of the AI's answer via nucleus sampling, and its value ranges from 0 to 1. It provides better control for applications in which GPT-3 is expected to generate text with accuracy and correctness. More information about that here. Put simply, top_p restricts the pool of possible outcomes the AI considers.
Temperature: Like top_p, temperature has a great effect on the AI's response, with values ranging from 0 to 1. Temperature controls randomness: as the value gets closer to 0, the AI's answers become more repetitive. Setting the temperature to 0 can feel like searching something on Google and getting the same answer over and over again, while setting it to 1 makes the output much more creative. Either way, I don't recommend pinning your value at the min/max, but rather picking something more stable in between.
N: This decides how many different generations the AI makes. It defaults to 1 but can be changed with this parameter. Note that raising it can quickly use up your tokens.
Model: OpenAI provides many different models, each with its own pros and cons. DaVinci proves to be the most reliable, but it isn't necessarily the one you need. Here is more information on the different models from OpenAI's official documentation. Based on cost, you might want to use Curie, as it is faster while also being cheaper than DaVinci.
Of course, there are more parameters you can set to limit or diversify your AI's responses.
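
To tie the parameters together, here's a minimal sketch of the request described above. It assumes the classic openai Python package (pre-1.0) and the Completion endpoint with a text-davinci model; the API key, prompt, and parameter values are just placeholders, not the exact ones from my screenshots.

```python
import openai  # install with: pip install openai

# Placeholder key; use your own from the OpenAI dashboard
openai.api_key = "YOUR_API_KEY"

prompt = "What is nucleus sampling"

# The parameters discussed above: model, top_p, temperature, n
response = openai.Completion.create(
    model="text-davinci-003",  # DaVinci; swap in a Curie model to cut costs
    prompt=prompt + "?",       # the question-mark trick from earlier
    max_tokens=100,            # cap on tokens per generation
    top_p=0.9,                 # nucleus sampling: restricts the token pool
    temperature=0.7,           # randomness: closer to 0 = more repetitive
    n=1,                       # number of generations returned
)
```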

Printing the response

[Screenshot: the raw JSON-style response returned by the API]
Here, instead of slicing the response inside an f-string, I just printed the raw response. Depending on your parameters it might look a bit different (especially if you changed your token limit), but it will have the same relative tree/JSON format. To use the generated text, we first store it in a variable by slicing into the response with output = response["choices"][0]["text"]. In my case, simply calling {response["choices"][0]["text"]} inside an f-string is easier when printing multiple pieces of information. As you can see, additional information, like the reason the API stopped generating, can be read the same way.
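
For reference, here's a rough sketch of that parsing, continuing from the request above; the field names follow the standard Completion response layout, and the exact output will of course differ.

```python
# Dump the full response to inspect its tree/JSON structure
print(response)

# Store just the generated text from the first choice
output = response["choices"][0]["text"]
print(f"Answer: {output}")

# Other fields can be read the same way, e.g. why generation stopped
print(f"Finish reason: {response['choices'][0]['finish_reason']}")
```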

Compared to ChatGPT Chatbot

The chatbot can be found here.
When I first started out with the API, the chatbot wasn't even a thing, but now that it is, how does it compare?
First off, I find the ability to set top_p and temperature yourself a major advantage over the chatbot, especially when using it for educational purposes or explanation prompts.
Second, your creativity is the limit! You can incorporate this API into your own apps and potentially build the next big thing! The API can also be used from JavaScript via the Node Package Manager for easy integration with applications.

In the end, you'd be a fool to underestimate GPT-3, but a bigger one to rely on it completely.
