
Simon Goldin for Digital Canvas Development


AI/LLM Recipe Generator with chatGPT

The ChatGPT API is like a magic spell for your web application - with just a few lines of code, it can conjure up engaging, intelligent conversations. Even for a tech novice, it's a breeze to weave into new or existing apps. Dive in, and in no time, you'll have a conversational AI that keeps users captivated and coming back for more.


That's the introduction that chatGPT came up with. Pretty good, right?

In this article I won't be building a conversational AI tool, but I will go into the integration between a Remix application and the chatGPT API.

The "test bed" will be a simple recipe generator that gets some information from the user that it will use to create a prompt for chatGPT.

The code is available on GitHub and ultimately looks like this:

screenshot of what this app might look like


Pre-setup

The "pre-setup" is as straightforward as can be. (After setting up payment) you will need to create a secret API key here and copy it to your .env file (make sure it's in your .gitignore file so no one can find it on github!). Also copy your organization ID from here.



```
OPENAI_API_KEY=[your secret API key]
OPENAI_ORG_KEY=[your organization id]
```

And install the openai library. If using npm, that would be:



```bash
npm i openai
```

This includes TypeScript types, too!

Calling the API

For this example, we can use the Chat Completions API, but before we can do that, we'll need to configure the library to use our keys. Since this code runs exclusively on a server, not in a user's browser, we can get what we need from the process.env object:



```ts
import { Configuration, OpenAIApi } from 'openai';

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
  organization: process.env.OPENAI_ORG_KEY,
});

const openai = new OpenAIApi(configuration);
```

Now we can create an object of type CreateChatCompletionRequest. This object can take a variety of options to tell chatGPT what we want, but the most important (and required) options are model, which model version to use (OpenAI's docs have the full list), and messages, the context and prompt of the chat we want completed.

One messages entry can set a system-wide "personality" for the chat to conform to. You can also include example exchanges in messages that prime (rather than truly train) the model's output for this specific chat, as sketched below.
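A minimal sketch of that few-shot shape; the crepe exchange here is invented purely for illustration:

```ts
import type { ChatCompletionRequestMessage } from 'openai';

const messages: ChatCompletionRequestMessage[] = [
  // the system message sets the chat-wide "personality"
  {
    role: 'system',
    content: 'You are a creative and experienced chef assistant.',
  },
  // a hypothetical example exchange that primes the output format
  {
    role: 'user',
    content: 'Generate a recipe with these ingredients: eggs, milk, flour.',
  },
  {
    role: 'assistant',
    content: 'Classic Crepes\n\nIngredients:\n- 2 eggs\n- 1 cup milk\n...',
  },
  // the real prompt goes last
  {
    role: 'user',
    content: 'Generate a recipe with these ingredients: tilapia, rice.',
  },
];
```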

We'll be using a user message, i.e. the input received from the user, to ask the GPT model what we want.



```ts
import type { CreateChatCompletionRequest } from 'openai';

// for the purpose of this article, we'll abstract this away.
const ingredientsList = getIngredientsList();

const completionRequest: CreateChatCompletionRequest = {
  model: 'gpt-3.5-turbo',
  messages: [
    {
      role: 'system',
      content: 'You are a creative and experienced chef assistant.',
    },
    {
      role: 'user',
      content: `Generate a recipe with these ingredients: ${ingredientsList}.`,
    },
  ],
};

const chatCompletion = await openai.createChatCompletion(completionRequest);
```

In this case, we're using the gpt-3.5-turbo model and starting the conversation by asking the API to act as a "creative and experienced chef assistant".

Handling the response

The response is well-typed and can be accessed easily:



```ts
const generatedOutput = chatCompletion.data.choices[0].message?.content;
```

In this case, that might result in a "One-Pan Baked Tilapia and Vegetable Dinner" recipe with a complete ingredients list and step-by-step instructions!
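To show how this plugs into Remix, here's a minimal sketch of a server-side action; the route path, shared module, and form field name are assumptions for illustration:

```ts
// app/routes/recipes.tsx (hypothetical route)
import { json, type ActionArgs } from '@remix-run/node';
// assume the configured client from earlier is exported from a shared module
import { openai } from '~/utils/openai.server';

export async function action({ request }: ActionArgs) {
  const formData = await request.formData();
  // "ingredients" is an assumed form field name
  const ingredientsList = String(formData.get('ingredients') ?? '');

  const chatCompletion = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages: [
      {
        role: 'user',
        content: `Generate a recipe with these ingredients: ${ingredientsList}.`,
      },
    ],
  });

  // hand the generated text back to the route's component
  return json({
    recipe: chatCompletion.data.choices[0].message?.content ?? '',
  });
}
```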

This is, of course, a simplification. A full implementation with more options and a user interface might end up looking something like this:

Full UI screenshot with inputs and generated output

Advanced options

An interesting property that's available to us here, but not in the chatGPT interface, is temperature: an abstraction of randomness that can add some chaos to the output.

With a temperature of 2, a "recipe" might start looking like this...



```
Chicken Delight Recipe Parham Style:

Featured Cooking Equipment(set boAirition above stove required Gas-Telian range VMM incorporated rather below ideal temperature during baking ir regulate heat applied):
- Large non-stick frypan(Qarma brand)->Coloning cooking Stenor service(each Product hasown separate reviews dependable optimization features)
```

Be careful, as this can eat into your tokens! As a safety measure (or depending on your use case), a max_tokens limit can be used to cap the size of the output.
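Both options live on the same request object we built earlier; a quick sketch:

```ts
const completionRequest: CreateChatCompletionRequest = {
  model: 'gpt-3.5-turbo',
  temperature: 2, // 0 to 2; higher is more random (the default is 1)
  max_tokens: 500, // cap the output size to limit token usage
  messages: [
    {
      role: 'user',
      content: `Generate a recipe with these ingredients: ${ingredientsList}.`,
    },
  ],
};
```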

In my experience with gpt-3.5-turbo, altering the system content did not have much effect in this case, but it can be more useful for ongoing conversations. Since my use case is to just ask for a recipe once, there's no need to set up the system "personality".

Limitations

As of this writing, gpt-3.5-turbo is the latest model available to me but it comes with some limitations.

First off, processing is fairly slow, taking about 15 seconds to return a recipe. OpenAI suggests a number of improvements in their docs, such as limiting output size, caching, and batching.
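Caching in particular is cheap to add. Here's a minimal in-memory sketch (not from the example app; a production setup might use Redis or similar):

```ts
// naive in-memory cache keyed on the ingredients list
const recipeCache = new Map<string, string>();

async function getRecipe(ingredientsList: string): Promise<string> {
  const cached = recipeCache.get(ingredientsList);
  if (cached) return cached;

  const chatCompletion = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages: [
      {
        role: 'user',
        content: `Generate a recipe with these ingredients: ${ingredientsList}.`,
      },
    ],
  });

  const recipe = chatCompletion.data.choices[0].message?.content ?? '';
  recipeCache.set(ingredientsList, recipe);
  return recipe;
}
```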

There is also an inherent limitation that a conversation is "stateless": if you want an ongoing conversation, each previous user message and its assistant response needs to be sent along with each new user message.
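In code, that means replaying the history on every call; a sketch:

```ts
import type { ChatCompletionRequestMessage } from 'openai';

const history: ChatCompletionRequestMessage[] = [];

async function ask(prompt: string): Promise<string> {
  // the full history (plus the new prompt) is re-sent on every request
  history.push({ role: 'user', content: prompt });

  const chatCompletion = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages: history,
  });

  const reply = chatCompletion.data.choices[0].message?.content ?? '';
  history.push({ role: 'assistant', content: reply });
  return reply;
}
```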

In my example application, providing a very limited set of common ingredients (Salt, Pepper, Olive oil, Butter, All-purpose flour, Sugar, Eggs, Milk, Garlic, Onion, Lemons, White Vinegar, Apple Cider Vinegar, Soy sauce, Baking powder, Cumin) still results in chicken- or shrimp-based recipes. I tried getting around this with more specific prompts ("if chicken is not available, do not recommend recipes with chicken.") but without much success.

This is an example of a "hallucination", though it has not specifically been an issue with GPT-4, which is not yet broadly available via the API.

There are other important general generative AI limitations to keep in mind, such as bias and the tendency to be "confidently incorrect".

In this case, the worst-case scenario is an unappealing meal, but these limitations are important to keep in mind when relying on generated content.

Fine-tuning and cost

As the only user of this application, my costs have been very minimal 😅. A single execution comes out to about 200 input tokens, and the output ranges between 300 and 500 tokens. With gpt-3.5-turbo, that works out to (0.2K tokens * $0.0015/1K) + (0.4K tokens * $0.002/1K), or about one tenth of a cent.

Once more generally available, GPT-4 will be significantly more expensive. Currently, a single run for me would amount to (0.2K tokens * $0.03/1K) + (0.4K tokens * $0.06/1K), or about 3 cents.
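The arithmetic is simple enough to capture in a helper (the rates below are the per-1K-token prices quoted above, which will change over time):

```ts
// USD per 1K tokens, as quoted above; these prices change over time
const RATES = {
  'gpt-3.5-turbo': { input: 0.0015, output: 0.002 },
  'gpt-4': { input: 0.03, output: 0.06 },
} as const;

function estimateCostUSD(
  model: keyof typeof RATES,
  inputTokens: number,
  outputTokens: number,
): number {
  const { input, output } = RATES[model];
  return (inputTokens / 1000) * input + (outputTokens / 1000) * output;
}

estimateCostUSD('gpt-3.5-turbo', 200, 400); // ≈ $0.0011
estimateCostUSD('gpt-4', 200, 400); // ≈ $0.03
```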

API pricing was reduced a few weeks ago so it's reasonable to expect GPT-4 to get cheaper in the future, too.

The GPT-3.5 output can still be fine-tuned with more specific and verbose inputs, but since billing is based on the number of "tokens" (i.e. input and output length), fine-tuning this way can be costly, similar to a conversational application that chains user and assistant messages.

Prompts can also be split into smaller, more specific prompts. However, in addition to increasing the total number of tokens, this approach would also increase the complexity (and maintenance cost) of an application, especially if you're using the output of one query as an input of another.
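For example, one query's output might feed the next (a hypothetical two-step split, not from the example app):

```ts
// step 1: ask for just a dish name (a hypothetical first query)
const dishCompletion = await openai.createChatCompletion({
  model: 'gpt-3.5-turbo',
  messages: [
    {
      role: 'user',
      content: `Suggest one dish name using: ${ingredientsList}.`,
    },
  ],
});
const dishName = dishCompletion.data.choices[0].message?.content;

// step 2: the first query's output becomes the second query's input
const recipeCompletion = await openai.createChatCompletion({
  model: 'gpt-3.5-turbo',
  messages: [
    { role: 'user', content: `Write a full recipe for: ${dishName}.` },
  ],
});
```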

Summary

The header image for this post was created with MidJourney, just another (tiny) example of how I've been using the technology.

Generative AI opens up a wide range of new and exciting applications, but not without additional considerations that should be kept in mind.

Though this application just barely scratches the surface of what can be done, I hope it has served as a useful introduction to integrating the openai library into your web application, whether you're building a cool product or just exploring new technologies.

Have you explored interesting applications of the API, or experimented with different parts of it? Please share!
