Python + ChatGPT API Development | Based on gpt-3.5-turbo model

Background

ChatGPT is a cloud-based artificial intelligence chatbot that uses OpenAI's gpt-3.5-turbo model for natural language processing (NLP) and language generation tasks. The new gpt-3.5-turbo model is an upgrade of the GPT-3 family with higher accuracy and expressiveness: it can automatically identify the semantics and context of text and generate more accurate, natural replies.

The OpenAI team has released the latest intelligent chatbot ChatGPT API and the speech-to-text Whisper API. The ChatGPT API gives developers a convenient, easy-to-use way to work with this powerful NLP model. The newly released ChatGPT API uses the gpt-3.5-turbo model family, the same model used in the ChatGPT product itself.


ChatGPT API

Introduction

I previously wrote an article, Develop an Intelligent Chat Program Using Python and ChatGPT API, which used the text-davinci-002 (or text-davinci-003) model as a Python development example. This time OpenAI has officially released the gpt-3.5-turbo model, which brings some adjustments to the interface and provides more API parameters and options, so developers can customize and optimize the API's behavior according to their own needs.

The new model is priced at $0.002 per 1,000 tokens, ten times cheaper than the existing GPT-3.5 models. It offers high availability and scalability and can handle high-traffic application scenarios. According to OpenAI, testers who migrated from text-davinci-003 to gpt-3.5-turbo only needed a few adjustments to complete the change, and the official documentation and sample code make it easy for developers to get started quickly.
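To give a feel for how small that migration is, here is a rough side-by-side sketch (assuming the openai Python package, v0.27.x, used later in this article): the prompt string becomes a messages list, and the reply moves from choices[0].text to choices[0].message.content.

import openai

openai.api_key = "YOUR_API_KEY"

# Old style: completions endpoint with text-davinci-003
old_response = openai.Completion.create(
   model="text-davinci-003",
   prompt="What is the OpenAI mission?",
   max_tokens=100
)
print(old_response["choices"][0]["text"])

# New style: chat completions endpoint with gpt-3.5-turbo
new_response = openai.ChatCompletion.create(
   model="gpt-3.5-turbo",
   messages=[{"role": "user", "content": "What is the OpenAI mission?"}]
)
print(new_response["choices"][0]["message"]["content"])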

Usage

If you just want to quickly verify that the ChatGPT API is available, you can try using curl to send the request.

curl https://api.openai.com/v1/chat/completions \
   -H "Authorization: Bearer $OPENAI_API_KEY" \
   -H "Content-Type: application/json" \
   -d '{
   "model": "gpt-3.5-turbo",
   "messages": [{"role": "user", "content": "What is the OpenAI mission?"}]
}'

If you want to go deeper into ChatGPT API development, I will show you how to use Python and the ChatGPT API to quickly build software on top of ChatGPT.

Here is a code sample for basic usage:

# Note: the openai package must be v0.27.0 or above
import openai

# First, you need to set up your API key
openai.api_key = "YOUR_API_KEY"

# Then, you can call the "gpt-3.5-turbo" model
model_engine = "gpt-3.5-turbo"

# set your input text
input_text = "Where is the 2014 World Cup held?"

# Send an API request and get a response, note that the interface and parameters have changed compared to the old model
response = openai.ChatCompletion.create(
   model=model_engine,
   messages=[{"role": "user", "content": input_text }]
)

# The response is a JSON-like object with a structure like this:
# {
#  'id': 'chatcmpl-6p9XYPYSTTRi0xEviKjjilqrWU2Ve',
#  'object': 'chat.completion',
#  'created': 1677649420,
#  'model': 'gpt-3.5-turbo',
#  'usage': {'prompt_tokens': 56, 'completion_tokens': 31, 'total_tokens': 87},
#  'choices': [
#    {
#     'message': {
#       'role': 'assistant',
#       'content': 'The 2014 FIFA World Cup was held in Brazil.'},
#     'finish_reason': 'stop',
#     'index': 0
#    }
#   ]
# }

# Parse the response and output the result
output_text = response['choices'][0]['message']['content']
print("ChatGPT API reply:", output_text)

In the code above, we first set up our API key, then specified the gpt-3.5-turbo model, set the input text, sent the API request, and finally parsed the response and printed the ChatGPT reply. There are a few caveats:

New interface: with the text-davinci-002 model we used the openai.Completion.create interface; the new gpt-3.5-turbo model uses the openai.ChatCompletion.create interface instead.

New parameters: the first parameter sets the model name, and the second parameter is a conversation list.

Why provide a conversation list? Because each API call is a single, stateless request: previous chat messages are not recorded automatically, so there is no context. To let ChatGPT understand your context within a single request, you need to provide the complete list of conversation messages, as in this dialogue:

import openai

openai.ChatCompletion.create(
   model="gpt-3.5-turbo",
   messages=[
         {"role": "system", "content": "You are a helpful assistant."},
         {"role": "user", "content": "Who won the world series in 2020?"},
         {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
         {"role": "user", "content": "Where was it played?"}
     ]
)

Each dialog message needs to provide a role and content. There are three roles: system, user, and assistant.

  • system: the system message acts like an administrator and sets the behavior and characteristics of the assistant. In the example above, the assistant is told "You are a helpful assistant."
  • user: user messages are what we send ourselves; they can be questions typed by the user, or prompts the developer has built in advance (see ChatGPT Prompts for reference).
  • assistant: assistant messages are the replies previously returned by the ChatGPT API, stored here so the model can see them. You can also edit these replies, or write a dialogue yourself, to make the whole conversation flow more smoothly.

If you don't need a multi-turn dialog, just provide a single user message, as demonstrated in the earlier Python code.
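As a rough sketch of how these pieces fit together (not part of the original example), the following loop keeps the conversation list up to date between requests; it also passes temperature and max_tokens, two of the optional parameters the chat completions endpoint accepts:

import openai

openai.api_key = "YOUR_API_KEY"

# The API is stateless, so we keep the whole dialogue ourselves
messages = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
   user_input = input("You: ")
   if user_input.strip().lower() in ("quit", "exit"):
       break

   # Add the new user message to the running conversation
   messages.append({"role": "user", "content": user_input})

   response = openai.ChatCompletion.create(
       model="gpt-3.5-turbo",
       messages=messages,
       temperature=0.7,  # optional: randomness of the reply
       max_tokens=500    # optional: upper bound on reply length
   )

   reply = response["choices"][0]["message"]["content"]
   print("ChatGPT:", reply)

   # Store the assistant reply so the next request keeps the full context
   messages.append({"role": "assistant", "content": reply})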

For more details, please refer to the Chat completions documentation.

Whisper API

OpenAI has also released the Whisper API, which supports speech-to-text. It offers the large-v2 model, is easy to use, is priced at $0.006 per minute, can be accessed on demand, and runs on a highly optimized serving stack for faster performance.

The Whisper API is available through the transcriptions endpoint (transcribes in the source language) or the translations endpoint (translates into English), and accepts a variety of formats (m4a, mp3, mp4, mpeg, mpga, wav, webm).

A simple curl example:

curl https://api.openai.com/v1/audio/transcriptions \
   -H "Authorization: Bearer $OPENAI_API_KEY" \
   -H "Content-Type: multipart/form-data" \
   -F model="whisper-1" \
   -F file="@/path/to/file/openai.mp3"

The response format:

{
   "text": "Imagine the wildest idea that you've ever had, and you're curious about how it might scale to something that's a 100, a 1,000 times bigger..."
}

Use in Python

import openai

openai.api_key = "YOUR_API_KEY"

# Open the audio file in binary mode and send it to the whisper-1 model
audio_file = open("/path/to/file/openai.mp3", "rb")
transcription = openai.Audio.transcribe("whisper-1", audio_file)

print(transcription)
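The translations endpoint mentioned above works the same way from Python; here is a minimal sketch (the file path is a placeholder) that translates non-English speech directly into English text:

import openai

openai.api_key = "YOUR_API_KEY"

# Translate speech in another language directly into English text
audio_file = open("/path/to/file/openai.mp3", "rb")
translation = openai.Audio.translate("whisper-1", audio_file)

print(translation["text"])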

For more details, please refer to the Speech to text documentation.

Conclusion

The above is just a simple example showing how to send requests to the ChatGPT API with Python; you can use the ChatGPT API to complete more complex tasks according to your needs.

