DEV Community

Simon Johansson for Encore


How to let ChatGPT call functions in your app

You can now give OpenAI access to your app’s APIs when answering prompts. This means, with just a few lines of code, you can let your users interact with your API using natural language prompts, instead of a complex chain of API calls.

In this article we’ll explain how it works, when it might be relevant to use, and walk through how to build a simple implementation using Encore.

How it works

In addition to sending a text prompt to the OpenAI API (used to interact with the LLM powering ChatGPT), you can now also send a list of functions defined in your system that it can use. The AI will automatically decide whether to call one or more functions in order to answer the prompt.

It’s worth calling out that the AI doesn't execute the functions itself. Instead, the model simply generates parameters that can be used to call your function, and your code chooses how to handle them, typically by calling the indicated function. This is a good thing: your application is always in full control.

The lifecycle flow of a function call:

Flow diagram of function call lifecycle
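The handoff in the flow above can be sketched in a few lines: the model hands your code a function name plus JSON-encoded arguments, and your code looks up and runs the matching local function. The `ToolCall` shape is simplified and `getWeather` is a made-up example, not part of the article's app:

```typescript
// Simplified shape of a tool call as the model generates it
type ToolCall = { function: { name: string; arguments: string } };

// Local functions the model is allowed to "call" (illustrative)
const handlers: Record<string, (args: any) => string> = {
  getWeather: ({ city }: { city: string }) => `Sunny in ${city}`,
};

// Your code, not the model, decides how to execute a tool call
function dispatch(call: ToolCall): string {
  const handler = handlers[call.function.name];
  if (!handler) throw new Error(`Unknown function: ${call.function.name}`);
  // The model's arguments arrive as a JSON string to parse and validate
  return handler(JSON.parse(call.function.arguments));
}

const result = dispatch({
  function: { name: "getWeather", arguments: '{"city":"Oslo"}' },
});
console.log(result); // Sunny in Oslo
```

The OpenAI SDK's `runTools` helper (used later in this article) automates this dispatch loop for you, but the control flow is the same.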

When would you use it?

  1. Fetching data from various services: an AI assistant can now answer questions like “what are my recent orders?” by fetching the latest customer data from an internal system.

  2. Taking action: an AI assistant can now take actions on the user's behalf, like scheduling meetings by accessing a calendar API. Adding Siri-like behaviour to your own apps can now be done in a few lines of code.

  3. Natural language API calls: convert user queries expressed in natural language into a complex chain of API calls. Let the LLM create the SQL queries needed to fetch data from your own database.

  4. Extract structured data: creating a pipeline that fetches raw text, converts it to structured data, and saves it in a database is now easy to assemble. No more need to copy & paste the same type of data into ChatGPT.
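Use case 4 boils down to a tool definition whose JSON Schema describes the structured record you want back; the model then emits arguments matching it and your handler writes them to the database. The `save_book` name and its fields below are illustrative, not part of the example app:

```typescript
// Hypothetical tool definition for structured extraction: the schema
// tells the model exactly which fields to pull out of raw text.
const extractBookTool = {
  type: "function",
  function: {
    name: "save_book",
    description: "Save a book extracted from raw text to the database",
    parameters: {
      type: "object",
      properties: {
        name: { type: "string" },
        genre: {
          type: "string",
          enum: ["mystery", "nonfiction", "memoir", "romance", "historical"],
        },
        description: { type: "string" },
      },
      // Requiring every field and forbidding extras keeps the
      // extracted rows uniform enough to insert directly.
      required: ["name", "genre", "description"],
      additionalProperties: false,
    },
  },
};

console.log(extractBookTool.function.parameters.required);
```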

Let’s try it

In this simple example, we’re building a conversational assistant which can help users navigate a database of books. To make the assistant helpful, we want it to be able to look up books in our database.

We’ll be using Encore.ts to quickly create the endpoints needed to interact with our app, and the OpenAI API will use these endpoints to answer the user’s prompt.

You can see the full code for the example here: https://github.com/encoredev/examples/tree/main/ts/gpt-functions

1. Install Encore and create your app

To run the example yourself, first install Encore:

  • macOS: brew install encoredev/tap/encore
  • Linux: curl -L https://encore.dev/install.sh | bash
  • Windows: iwr https://encore.dev/install.ps1 | iex

Next, create a new Encore app with the example code using this command:

encore app create gpt-functions-example --example=ts/gpt-functions

You will also need an OpenAI API key, which you set by running:

encore secret set --type dev,local,pr,prod OpenAIAPIKey

After that you are ready to run the application:

encore run

2. Create some functions

The starting point for function calling is choosing functions in your own codebase that you’d like to enable the model to generate arguments for.

For this example, we want to allow the model to call our list, get, and search endpoints:

import { api } from "encore.dev/api";

export interface Book {
  id: string;
  name: string;
  genre: Genre;
  description: string;
}

export enum Genre {
  mystery = "mystery",
  nonfiction = "nonfiction",
  memoir = "memoir",
  romance = "romance",
  historical = "historical",
}

// Using a hardcoded database for convenience of this example
const db: Book[] = [
  {
    id: "a1",
    name: "To Kill a Mockingbird",
    genre: Genre.historical,
    description: `Compassionate, dramatic, and deeply moving, "To Kill A Mockingbird" takes readers to the roots of human behavior - to innocence and experience, kindness and cruelty, love and hatred, humor and pathos. Now with over 18 million copies in print and translated into forty languages, this regional story by a young Alabama woman claims universal appeal. Harper Lee always considered her book to be a simple love story. Today it is regarded as a masterpiece of American literature.`,
  },
  ...
];

export const list = api(
  {expose: true, method: 'GET', path: '/book'},
  async ({genre}: { genre: string }): Promise<{ books: { name: string; id: string }[] }> => {
    const books = db.filter((item) => item.genre === genre).map((item) => ({name: item.name, id: item.id}));
    return {books}
  },
);

export const get = api(
  {expose: true, method: 'GET', path: '/book/:id'},
  async ({id}: { id: string }): Promise<{ book: Book }> => {
    const book = db.find((item) => item.id === id)!;
    return {book}
  },
);

export const search = api(
  {expose: true, method: 'GET', path: '/book/search'},
  async ({name}: { name: string }): Promise<{ books: { name: string; id: string }[] }> => {
    const books = db.filter((item) => item.name.includes(name)).map((item) => ({name: item.name, id: item.id}));
    return {books}
  },
);

Worth noting here is that these are endpoints we might have in a regular CRUD service; nothing in our business logic is adapted for an LLM.

3. Describe your function to the model

We can now create a “function definition” that describes the functions to the model. This definition describes both what each function does (and when it should be called) and what parameters are required to call it.

You can ensure that the model calls the correct functions by providing clear guidance in the system message, using intuitive function names and detailed descriptions for functions and parameters.

import {api} from "encore.dev/api";
import {book} from "~encore/clients";
import OpenAI from "openai";
import {secret} from "encore.dev/config";

// Getting the API key from Encore secrets
const apiKey = secret("OpenAIAPIKey");

const openai = new OpenAI({
  apiKey: apiKey(),
});

// Encore endpoint that receives a text prompt as a query param and returns a message as a response
export const gpt = api(
  {expose: true, method: "GET", path: "/gpt"},
  async ({prompt}: { prompt: string }): Promise<{ message: string | null }> => {

    // Using the runTools method on the Chat Completions API
    const runner = openai.beta.chat.completions
      .runTools({
        model: "gpt-3.5-turbo",
        messages: [
          {
            role: "system",
            content:
              "Please use our book database, which you can access using functions to answer the following questions.",
          },
          {
            role: "user",
            content: prompt,
          },
        ],
        // The tools array contains the functions you are exposing
        tools: [
          {
            type: "function",
            function: {
              name: "list",
              strict: true,
              description:
                "list queries books by genre, and returns a list of names of books",
              // The model will use this information to generate
              // arguments according to your provided schema.  
              parameters: {
                type: "object",
                properties: {
                  genre: {
                    type: "string",
                    enum: [
                      "mystery",
                      "nonfiction",
                      "memoir",
                      "romance",
                      "historical",
                    ],
                  },
                },
                additionalProperties: false,
                required: ["genre"],
              },
              // Calling the list endpoint in our book service 
              function: async (args: { genre: string }) => await book.list(args),
              parse: JSON.parse,
            },
          },

          ...

        ],
      })

    // Get the final message and return the message content
    const result = await runner.finalChatCompletion();
    const message = result.choices[0].message.content;
    return {message};
  },
);

In the above code we define how and when the LLM should call our list endpoint. We specify that the endpoint takes one of five predefined genres as input and that the genre argument is required.

4. Calling the example

Let’s make use of the Local Development Dashboard that comes with Encore and try calling our assistant with the prompt Recommend me a book that is similar to To Kill a Mockingbird to see what it responds with.

The assistant responds with:

I recommend checking out the following historical books that are similar to "To Kill a Mockingbird":

  1. All the Light We Cannot See
  2. Where the Crawdads Sing

These books share some similarities with "To Kill a Mockingbird" and are worth exploring

This seems like a fitting answer, because both of those books are also present in our database and share the same genre as “To Kill a Mockingbird”.

Encore generated a trace for us when calling the gpt endpoint. In the trace details we can see exactly which of our functions were called by the LLM to generate the final response:

  1. The LLM extracted “To Kill a Mockingbird” from the prompt and used it to call the search endpoint.
  2. The search endpoint returns the ID of “To Kill a Mockingbird”.
  3. The LLM uses the ID to call the get endpoint.
  4. The get endpoint returns the description and the genre of the book.
  5. The LLM uses the genre to call the list endpoint.
  6. The list endpoint returns a list of books in the requested genre.
  7. The LLM recommends two books in the same genre as “To Kill a Mockingbird”.
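The call chain from the trace can be sketched against an in-memory stand-in for the book service. The function names mirror the article's endpoints, but the bodies here are simplified local lookups, not the real Encore handlers:

```typescript
// Minimal stand-in for the book database used in the example
const db = [
  { id: "a1", name: "To Kill a Mockingbird", genre: "historical" },
  { id: "b2", name: "All the Light We Cannot See", genre: "historical" },
  { id: "c3", name: "Gone Girl", genre: "mystery" },
];

// Local analogues of the search, get, and list endpoints
const search = (name: string) => db.filter((b) => b.name.includes(name));
const get = (id: string) => db.find((b) => b.id === id)!;
const list = (genre: string) => db.filter((b) => b.genre === genre);

// Steps 1-2: extract the title from the prompt and look up its ID
const [hit] = search("To Kill a Mockingbird");
// Steps 3-4: fetch the full record to learn its genre
const book = get(hit.id);
// Steps 5-6: list other books in the same genre
const similar = list(book.genre).filter((b) => b.id !== book.id);
// Step 7: these are the candidates the LLM recommends from
console.log(similar.map((b) => b.name).join(", "));
```

The interesting part is that the LLM planned this three-step chain on its own; our code only supplied the individual endpoints.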

The LLM called all three of our functions in order to come up with the recommendation. Pretty impressive!

Function calling with Structured Outputs

By default, when you use function calling, the API offers best-effort matching for your parameters, which means that with complicated schemas the model may occasionally miss parameters or get their types wrong.

In August 2024, OpenAI launched Structured Outputs. When you turn it on by setting strict: true in your function definition (see the code example above), Structured Outputs ensures that the arguments generated by the model for a function call exactly match the JSON Schema you provided in the function definition.

Pitfalls with Function Calling

Some things to look out for when using function calling:

  • The LLM will read through the data returned from your functions to determine the next step (return a response or keep calling functions). If your functions return big payloads, you can end up spending a lot of money for each submitted prompt. One solution is to limit the data returned from your functions. You can also specify the max_tokens allowed for each prompt; the model will then cut off the response when the limit is reached.

  • Contrary to what you might think, sending in a lot of functions in a single tool call will reduce the model’s ability to select the correct tool. OpenAI recommends limiting the number of functions to no more than 20 in a single tool call.
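One way to address the first pitfall is to cap what the model reads back from a function by truncating large payloads before returning them. `truncateForModel` is a helper made up for this sketch, not part of any SDK:

```typescript
// Cap the serialized size of a function result before handing it
// back to the model, so big payloads don't inflate token usage.
function truncateForModel(payload: unknown, maxChars = 2000): string {
  const text = JSON.stringify(payload);
  if (text.length <= maxChars) return text;
  // Keep a marker so the model knows the data was cut off
  return text.slice(0, maxChars) + "…[truncated]";
}

// A large result, like an unfiltered list endpoint might return
const big = { rows: Array.from({ length: 1000 }, (_, i) => ({ id: i })) };
console.log(truncateForModel(big, 50).length); // capped length plus marker
console.log(truncateForModel({ ok: true })); // small payloads pass through
```

A better fix, where possible, is to shape the endpoint's response itself, as the article's list endpoint does by returning only name and id instead of full book records.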

Wrapping up

Now you know how to use OpenAI's function calling feature to connect ChatGPT to external tools.

Top comments (19)

James

Impressive. I hadn't thought of doing something like this...

Simon Johansson

Cool! I found it really nice that it was so quick and easy to get going with the OpenAI SDK, try it out in your own app.

Juan Davi

I needed to use:
Encore secret set OpenAIAPIKey --type dev,local,pr

Simon Johansson

Thanks Juan, I updated the example 👍

Juan Davi

but when running it I get:
encore run

❌ Building Encore application graph... Failed: setup deps
Caused by:
0: setup dependencies
1: npm install failed
2: No such file or directory (os error 2)
Stack backtrace:
0: anyhow::backtrace::capture::Backtrace::capture
1: ::setup_deps
2: scoped_tls::ScopedKey::set
3: tsparser_encore::main
4: std::sys_common::backtrace::__rust_begin_short_backtrace
5: _main
⠚ Analyzing service topology...
setup deps

Caused by:
0: setup dependencies
1: npm install failed
2: No such file or directory (os error 2)

Stack backtrace:
0: anyhow::backtrace::capture::Backtrace::capture
1: ::setup_deps
2: scoped_tls::ScopedKey::set
3: tsparser_encore::main
4: std::sys_common::backtrace::__rust_begin_short_backtrace
5: _main

Simon Johansson

I think maybe you don’t have npm installed (or it's not on your PATH)? What happens if you run npm help in your terminal?

Juan Davi

You are right, after trying to install npm, I get Error: You are using macOS 11

Juan Davi

I will try with windows.

Mahesh kumar

Great content. Keep going

Marcus Kohlberg

Thanks for the support Mahesh!

Simon Johansson

Thanks for the feedback, will do!

Shekhar Rajput

Good one. Bookmarking for later.

kince marando

Amazing advice, and well articulated
Kindly thanks

Jatin Patel

Amazing tutorial, I would surely integrate this into one of my projects

Simon Johansson

Thanks Jatin, glad to hear!

Martin Baun

Do you use this to test ChatGPT plugins?

Simon Johansson

Hey Martin, thanks for your question. Do you mean testing your own integration before publishing a ChatGPT plugin? I have not published a ChatGPT plugin myself, so I am unsure about the process OpenAI recommends for that. But OpenAI is pushing "function calling" as their recommended way of integrating ChatGPT into external systems, so to me it would make sense to utilize it when building plugins as well.

Κώστας Παπ

great!

Simon Johansson

Thanks 🙌
