Troubleshoot your OpenAI integration - 101

Hey everyone!

In this tutorial, I'm going to walk you through how to troubleshoot various scenarios when integrating your backend application with OpenAI's Large Language Model (LLM) solution.

Important Note:

For this guide, I'll be using Azure AI services as an example. However, the steps and tips I'll share are applicable to any cloud provider you might be using. So, let's dive in!

Tools to use

For this tutorial, I will use the following tools and information:

  • Visual Studio Code
  • Postman
  • Azure AI Service
    • Azure OpenAI
      • Endpoint
      • API Key

Visual Studio Code

Visual Studio Code (VS Code) is a powerful and versatile code editor developed by Microsoft. 🖥️ It supports various programming languages and comes equipped with features like debugging, intelligent code completion, and extensions for enhanced functionality. 🛠️ VS Code's lightweight design and customization options make it popular among developers worldwide. 🌍

Postman

Postman is a popular software tool that allows developers to build, test, and modify APIs. It provides a user-friendly interface for sending requests to web servers and viewing responses, making it easier to understand and debug the interactions between client applications and backend APIs. Postman supports various HTTP methods and functionalities, which helps in creating more efficient and effective API solutions.

Postman Installation

Step 1: Download the Postman App
  1. Visit the Postman Website: Open your web browser and go to the Postman website.
  2. Navigate to Downloads: Click on the "Download" option from the main menu, or scroll to the "Downloads" section on the Postman homepage.
  3. Select the Windows Version: Choose the appropriate version for your Windows architecture (32-bit or 64-bit). If you are unsure, 64-bit is the most common for modern computers.
Step 2: Install Postman
  1. Run the Installer: Once the download is complete, open the executable file (Postman-win64-<version>-Setup.exe for 64-bit) to start the installation process.
  2. Follow the Installation Wizard: The installer will guide you through the necessary steps. You can choose the default settings, which are suitable for most users.
  3. Finish Installation: After the installation is complete, Postman will be installed on your machine. You might find a shortcut on your desktop or in your start menu.
Step 3: Launch Postman
  1. Open Postman: Click on the Postman icon from your desktop or search for Postman in your start menu and open it.
  2. Sign In or Create an Account: When you first open Postman, you'll be prompted to sign in or create a new Postman account. This step is optional but recommended for syncing your data across devices and with the Postman cloud.


Troubleshooting

Troubleshooting API Integration - Multimodal Model

To start troubleshooting the API integration, I will refer to the following common error messages while verifying the integration; a small status-code mapping for them follows the list:

  1. Resource Not Found Error
  2. Timeout Error
  3. Incorrect API key provided Error
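
For orientation, here is a minimal sketch of how these three situations typically surface when you call the endpoint with Python's requests library: in my experience with Azure OpenAI, a wrong endpoint or deployment name usually returns HTTP 404, a bad API key returns HTTP 401, and a timeout shows up as a client-side exception. Treat the exact codes as an assumption to verify against your own provider.

import requests

def classify_openai_error(url, headers, body, timeout_s=30):
    """Send one test request and map the outcome to the common errors listed above.
    The status-code mapping (404 = resource not found, 401 = incorrect API key)
    reflects typical Azure OpenAI behavior; adjust it for your provider if needed."""
    try:
        response = requests.post(url, headers=headers, json=body, timeout=timeout_s)
    except requests.exceptions.Timeout:
        return "Timeout Error: the service did not respond within the allotted time."

    if response.status_code == 404:
        return "Resource Not Found: check the endpoint URL, deployment name, and api-version."
    if response.status_code == 401:
        return "Incorrect API key provided: check the api-key header value."
    if response.ok:
        return "Success: the integration looks healthy."
    return f"Unexpected status {response.status_code}: {response.text}"
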
Step 0: Collect OpenAI related information

Let's retrieve the following information before starting our troubleshooting (a short snippet for loading these values follows the list):

  • OpenAI Endpoint = https://[endpoint_url]/openai/deployments/[deployment_name]/chat/completions?api-version=[OpenAI_version]
  • OpenAI API Key = API_KEY
  • OpenAI version = [OpenAI_version]
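
Rather than hard-coding these values, I prefer to keep them in environment variables and read them at runtime. The variable names below are just my own convention (not anything Azure requires), so rename them to fit your project:

import os

# Arbitrary variable names; set them in your shell or deployment environment.
endpoint_url = os.environ["OPENAI_ENDPOINT_URL"]        # host part of the endpoint
deployment_name = os.environ["OPENAI_DEPLOYMENT_NAME"]  # your Azure OpenAI deployment name
api_key = os.environ["OPENAI_API_KEY"]                  # the API key from the portal
api_version = os.environ["OPENAI_API_VERSION"]          # the api-version your app targets
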
Step 1: Verify Correct Endpoint

Let's review the OpenAI Endpoint we will use:

https://[endpoint_url]/openai/deployments/[deployment_name]/chat/completions?api-version=[OpenAI_version]
URL Breakdown
1. Protocol: https
  • Description: This protocol (https) stands for HyperText Transfer Protocol Secure, representing a secure version of HTTP. It uses encryption to protect the communication between the client and server.
2. Host: [endpoint_url]
  • Description: This part indicates the domain or endpoint where the service is hosted, serving as the base address for the API server. The [endpoint_url] is a placeholder, replaceable by the actual server domain or IP address.
3. Path: /openai/deployments/[deployment_name]/chat/completions
  • Description:
    • /openai: This segment signifies the root directory or base path for the API, related specifically to OpenAI services.
    • /deployments: This indicates that the request targets specific deployment features of the services.
    • /[deployment_name]: A placeholder for the name of the deployment you're interacting with, replaceable with the actual deployment name.
    • /chat/completions: Suggests that the API call is for obtaining text completions within a chat or conversational context.
4. Query: ?api-version=[OpenAI_version]
  • Description: This is the query string, beginning with ?, and it includes parameters that affect the request:
    • api-version: Specifies the version of the API in use, with [OpenAI_version] serving as a placeholder for the actual version number, ensuring compatibility with your application.
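
Putting the four pieces together, the full URL can be assembled from its parts. The sketch below just rebuilds the endpoint string from the Step 0 placeholders so you can see where each value lands; substitute your own values before using it:

# Placeholders from Step 0; replace them with your real values.
endpoint_url = "[endpoint_url]"
deployment_name = "[deployment_name]"
api_version = "[OpenAI_version]"

url = (
    f"https://{endpoint_url}"                 # 1. protocol + 2. host
    f"/openai/deployments/{deployment_name}"  # 3. path: base path + deployment
    f"/chat/completions"                      #    chat completions operation
    f"?api-version={api_version}"             # 4. query string: API version
)
print(url)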

We will go to "Collections" and open the API tests/POST Functional folder. Then we need to verify the following:

  1. REST API operation must be set to "POST"
  2. Endpoint should have all required values, including Endpoint_URL, Deployment_Name and API-version.
  3. API-key must be added in the "Headers" section

Find the below image for better reference; a small code sketch that mirrors this checklist follows it:

Postman Setup1
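
If you would also like to double-check these three items from code, the requests library can prepare a request without sending it, so you can inspect the method, URL, and headers exactly as they would go over the wire. The endpoint host, deployment name, and api-version in this sketch are made-up examples; use your own values from Step 0:

import requests

# Example values only; substitute your own endpoint, deployment name, version, and key.
url = (
    "https://my-resource.openai.azure.com/openai/deployments/my-deployment"
    "/chat/completions?api-version=2024-02-01"
)
headers = {"api-key": "API_KEY", "Content-Type": "application/json"}

# Prepare (but do not send) the request so we can inspect what will actually be sent.
prepared = requests.Request("POST", url, headers=headers, json={}).prepare()
print(prepared.method)   # 1. REST API operation must be POST
print(prepared.url)      # 2. endpoint with deployment name and api-version filled in
print(prepared.headers)  # 3. must include the api-key header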

Step 2: Understand Body Configuration

For this example, I will use the following sample Body data:

{
  "messages": [
    {
      "role": "system",
      "content": "You are a mechanic who loves to help customers and responds in a very friendly manner to a car related questions"
    },
    {
        "role": "user",
        "content": "Please explain the role of the radiators in a car."
    }
  ]
}
Explanation of the messages Array

The messages array in the provided JSON object is structured to facilitate a sequence of interactions within a chat or conversational API environment. Each entry in the array represents a distinct message, defined by its role and content. Here's a detailed breakdown:

Message 1 🛠️

  • Role: "system"
    • Description: This role typically signifies the application or service's backend logic. It sets the scenario or context for the conversation, directing how the interaction should proceed.
  • Content: "You are a mechanic who loves to help customers and responds in a very friendly manner to car related questions"
    • Description: The content here acts as a directive or script, informing the recipient of the message about the character they should portray: in this case, a friendly and helpful mechanic, expert in automotive issues.

Message 2 🗣️

  • Role: "user"
    • Description: This designates a participant in the dialogue, generally a real human user or an external entity engaging with the system.
  • Content: "Please explain the role of the radiators in a car."
    • Description: This message poses a direct question intended for the character established previously (the mechanic). It seeks detailed information about the function of radiators in cars, initiating a topic-specific discussion within the established role-play scenario.

Each message in the array is crafted to foster an engaging dialogue by defining roles and providing content cues, which guide responses and interaction dynamics. This methodology is widespread in systems designed to simulate realistic conversations or provide role-based interactive experiences.
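
To see how the roles compose over a longer exchange, you can feed the model's previous reply back in as an "assistant" message and then append the next user turn. The assistant text below is made up purely to illustrate the shape of the array:

# Follow-up request body: the earlier reply is kept as an "assistant" message
# (hypothetical text here), followed by the user's next question.
messages = [
    {"role": "system", "content": "You are a mechanic who loves to help customers and responds in a very friendly manner to car related questions"},
    {"role": "user", "content": "Please explain the role of the radiators in a car."},
    {"role": "assistant", "content": "Happy to help! The radiator keeps the engine coolant at a safe temperature..."},  # made-up reply
    {"role": "user", "content": "How often should I flush the coolant?"},
]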

Find the below image for better reference. Note that I also set the body format to "raw" and the content type to "JSON":

POSTMAN Setup 2

Step 3: Test OpenAI Endpoint

If you have followed all the above steps, you're ready to start testing your OpenAI endpoint! Refer to the image below for the final steps and a sample of the result you should see.

Postman_final

Step 4: Test OpenAI Endpoint - VS Code

The following Python code replicates the above steps. Feel free to use it once your Postman tests are successful; a follow-up snippet after the code shows how to extract the reply text and guard against timeouts.

import requests
import json

# Define the URL of the API endpoint
url = "https://[endpoint_url]/openai/deployments/[deployment_name]/chat/completions?api-version=[OpenAI_version]"

# Define the API token
headers = {
    "api-key": "API_KEY",
    "Content-Type": "application/json"
}

# Define the JSON body of the request
data = {
    "messages": [
        {
            "role": "system",
            "content": "You are a mechanic who loves to help customers and responds in a very friendly manner to car related questions"
        },
        {
            "role": "user",
            "content": "Please explain the role of the radiators in a car."
        }
    ]
}

# Make the POST request to the API
response = requests.post(url, headers=headers, json=data)

# Check if the request was successful
if response.status_code == 200:
    # Print the response content if successful
    print("Response received:")
    print(json.dumps(response.json(), indent=4))
else:
    # Print the error message if the request was not successful
    print("Failed to get response, status code:", response.status_code)
    print("Response:", response.text)

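
Once the request succeeds, the reply text itself sits inside the response JSON; in the chat completions format it is usually found at choices[0].message.content. The sketch below (reusing the url, headers, and data variables from the code above) extracts that text and also passes a client-side timeout, so the "Timeout Error" from the list earlier surfaces as a clear exception instead of a hanging script:

import requests

try:
    # timeout= makes requests raise an exception instead of waiting indefinitely
    response = requests.post(url, headers=headers, json=data, timeout=60)
    response.raise_for_status()
    payload = response.json()
    # Chat completions responses normally carry the assistant's reply here:
    answer = payload["choices"][0]["message"]["content"]
    print("Assistant reply:")
    print(answer)
except requests.exceptions.Timeout:
    print("Timeout Error: the endpoint did not answer in time. Check connectivity or raise the timeout.")
except requests.exceptions.HTTPError as err:
    print("HTTP error:", err)
    print("Response:", response.text)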

Troubleshooting API Integration - Embedding Model

Under preparation 🛠️🔧🚧

Useful Links:

If you are using Azure AI and OpenAI LLM solutions, the following links will help you understand how the API integration is done:

  1. OpenAI models
  2. REST API Reference
