Crafting Prompt Templates for Code Generation

Prompt templates are tools for giving an LLM (large language model) structured input. This is especially useful when asking an LLM to create functions that match a specific signature (a function's name, input types, and output types).

Example of a Generic Prompt Template

Here is a generic prompt template designed for constructing a TypeScript function. I'll spend the rest of the article breaking it down bit by bit, and then I'll demonstrate how GPT-4, Claude, and Cohere's "Generate" model respond to a filled-out prompt template.

### Function Generation Prompt Template:

---

Function Purpose: [BRIEF DESCRIPTION OF WHAT THE FUNCTION SHOULD ACHIEVE]

Function Name: [DESIRED NAME OF THE FUNCTION]

Function Input Parameters: [LIST OF PARAMETERS THE FUNCTION SHOULD ACCEPT]

Expected Actions: [A DETAILED LIST OF ACTIONS THE FUNCTION SHOULD PERFORM]

Return Type: [THE DATA TYPE OR STRUCTURE THAT THE FUNCTION SHOULD RETURN]

API Endpoint: [API ENDPOINT URL]

API Key: [YOUR API ACCESS KEY]

Additional Notes: [ANY ADDITIONAL INFORMATION OR CONTEXT FOR THE FUNCTION]

---

### Generated Code (Template):

---

```typescript
const axios = require('axios'); // Assuming axios is used for making HTTP requests

async function [FUNCTION NAME]([FUNCTION INPUT PARAMETERS]): Promise<[RETURN TYPE]> {
  const endpoint = '[API ENDPOINT URL]';
  const apiKey = '[YOUR API ACCESS KEY]';

  // [EXPECTED ACTIONS IMPLEMENTATION]
}
```

_NOTE: while this prompt template says "YOUR_ACCESS_KEY", at no point do you need to give the LLM your actual key. This is just another placeholder for the LLM to recognize, as you'll see later in the post._

I asked GPT-4 to come up with this entire generic prompt template itself. I then asked it to "fill out" the template for this blog post's example (fetching the current weather) before asking it to write the function.

Prompt templates seem to enable consistent output, especially for code, when you adhere to the same template for creating different functions. This consistency proves beneficial for developing "plugin systems", where various functions need to work in harmony without the orchestrating system understanding each function's intricacies. It's also useful when you want an LLM to align with a specific code pattern, such as upholding a coding standard for type creation across a codebase.
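To make the "plugin system" idea concrete, here's a minimal sketch of what such an orchestrator might look like. The `Plugin` type, registry, and example registration are hypothetical illustrations, not from any specific framework:

```typescript
// Every generated function conforms to the same signature shape,
// so the orchestrator can call any plugin without knowing its internals.
type Plugin<TInput, TOutput> = (input: TInput) => Promise<TOutput>

const registry = new Map<string, Plugin<any, any>>()

function registerPlugin<TInput, TOutput>(
  name: string,
  plugin: Plugin<TInput, TOutput>
): void {
  registry.set(name, plugin)
}

async function runPlugin(name: string, input: unknown): Promise<unknown> {
  const plugin = registry.get(name)
  if (!plugin) throw new Error(`Unknown plugin: ${name}`)
  return plugin(input)
}

// An LLM-generated function slots in without the orchestrator
// needing to understand what it does internally.
registerPlugin("fetchWeather", async (city: string) => ({
  temperature: 20,
  description: "Sunny",
}))
```

Because every template-generated function shares the same shape, adding a new capability is just one more `registerPlugin` call.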

Normally, adding capabilities to such a plugin system one by one would be tedious if done manually, and using a [code-mod](https://nextjs.org/docs/pages/building-your-application/upgrading/codemods#:~:text=Codemods%20are%20transformations%20that%20run,API%20is%20updated%20or%20deprecated.) would only scaffold the function. Here, not only are we setting up the function with the expected signature, but the LLM will generally also add pseudocode in comments (for whatever code it doesn't feel it can give you) that you can use in a follow-up with the LLM to fully implement the function together.

While developers remain essential in the process, using prompt templates makes creating applications quicker and less effortful. Following a prompt template, you simply discuss your goals with an LLM, then request that it complete your template, producing code in the desired pattern. This is what sets apart the use of prompt templates in code generation: with a few meticulously designed templates, you can compose a significant portion of an application in a fraction of the usual time.

### Breaking down the Prompt Template structure

By no means is the generic prompt template in this blog post the "only" way to create a prompt template. It is simply what GPT-4 gave me when I asked it to help me come up with a robust prompt template, and other LLMs seem to respect it remarkably well. With that said, let's go over each part:

#### Function Purpose

What this function should do (a bit of detail goes a long way).

#### Function Name

What the function should be called. This is especially important for creating homogeneous function signatures or adhering to a naming convention.

#### Function Input Parameters

The names and types of the input parameters, whether destructured or not (for TypeScript).
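For example, either style can be specified in the template. This is an illustrative pair of signatures, not taken from the weather example:

```typescript
// Positional parameters: "city: string, units: string"
async function fetchWeatherPositional(city: string, units: string): Promise<string> {
  return `${city} (${units})`
}

// Destructured parameters: "{ city, units }: { city: string; units: string }"
async function fetchWeatherDestructured({
  city,
  units,
}: {
  city: string
  units: string
}): Promise<string> {
  return `${city} (${units})`
}
```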

#### Expected Actions

The general actions (or steps) this function should perform. This provides more context beneath "Function Purpose".

#### API Endpoint

The endpoint this function should make a request to. Obviously, this is only relevant if the function needs to hit a REST API endpoint. If you are making a GraphQL request, want this function to send an AMQP message, or something else entirely, ask GPT-4 to give you a modified prompt template for that! It generally knows what it needs in order to give you what you want.

#### API Key

Same point as the previous section: only relevant if making an API request. And don't actually pass your API keys!

#### Additional Notes

Any additional context you want the LLM to consider. While this part of the prompt template is more flexible than other parts, flexibility with LLMs is a double-edged sword; this flexibility may steer the LLM to output something slightly different than what you want. Tread carefully!

### Example Prompt Template for Fetching Weather Data

Here is the example "filled out" prompt template (which I also asked GPT-4 to write), followed by the responses from GPT-4, Claude, and Cohere's Generate model. Be sure to pass the entire preceding "generic" prompt template into the same LLM prompt as the following example, to write a TypeScript function to get the current weather:

```markdown
### Function Generation Prompt Template for WeatherStack API:

---

**Function Purpose**: Fetch the current weather for a specified city using the WeatherStack API.

**Function Name**: `fetchWeather`

**Function Input Parameters**: `city: string`

**Expected Actions**:

1. Use the "weatherstack" API to retrieve current weather information for the specified city.
2. Parse and extract necessary information from the API response.
3. Return the parsed weather data.

**Return Type**: `{ temperature: number, description: string }`

**API Endpoint**: `https://api.weatherstack.com/current`

**API Key**: `YOUR_ACCESS_KEY`

**Additional Notes**: The function should handle potential API errors gracefully.

---

### Generated Code (Template):
```

Here is GPT-4's output from the "filled-out" prompt template:

````markdown
```typescript
const axios = require("axios")

async function fetchWeather(
  city: string
): Promise<{ temperature: number; description: string }> {
  const endpoint = "https://api.weatherstack.com/current"
  const apiKey = "YOUR_ACCESS_KEY"

  try {
    // Requesting data from weatherstack API
    const response = await axios.get(endpoint, {
      params: {
        access_key: apiKey,
        query: city,
      },
    })

    // Checking if the response contains the required data
    if (response.data && response.data.current) {
      return {
        temperature: response.data.current.temperature,
        description: response.data.current.weather_descriptions[0],
      }
    } else {
      throw new Error("Failed to fetch weather data.")
    }
  } catch (error) {
    console.error(`Error fetching weather for ${city}:`, error)
    throw error
  }
}
```

**Notes**:

1. It's recommended to replace `'YOUR_ACCESS_KEY'` with your actual WeatherStack API key before using the function.
2. You might also want to add some logic to deal with the rate limits or restrictions of the WeatherStack API, depending on your use case.
3. Ensure you have the `axios` library installed in your project to make HTTP requests.
````
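If you'd like to try the generated function yourself (after swapping in a real WeatherStack key), a quick hypothetical usage might look like this; the city name is arbitrary:

```typescript
// Hypothetical usage of the generated fetchWeather function
fetchWeather("London")
  .then((weather) => console.log(weather.temperature, weather.description))
  .catch((error) => console.error("Weather lookup failed:", error))
```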

These models give a little code explanation afterwards because we aren't being more specific about generating only code. Next, here is Claude's output:


````markdown
Here is a generated function to fetch current weather data from the WeatherStack API based on the provided template:

```typescript
const axios = require("axios")

async function fetchWeather(
  city: string
): Promise<{ temperature: number, description: string }> {
  const endpoint = "https://api.weatherstack.com/current"

  const apiKey = "YOUR_ACCESS_KEY"

  try {
    const response = await axios.get(endpoint, {
      params: {
        access_key: apiKey,
        query: city,
      },
    })

    const weatherData = response.data.current

    return {
      temperature: weatherData.temperature,
      description: weatherData.weather_descriptions[0],
    }
  } catch (error) {
    // Handle potential errors
  }
}
```

The function makes a request to the WeatherStack API using axios, extracts the current temperature and weather description, and returns an object containing those values. It also includes error handling and placeholders for the API endpoint and key. Let me know if you would like me to modify or expand on this generated code!
````

And finally, here is Cohere's "Generate" model output, again with no prompting other than 1) the generic prompt template, to give the LLM clues about the desired output structure, and 2) the filled-out prompt template:


````markdown
```typescript
import axios from "axios"

async function fetchWeather(
  city: string
): Promise<{ temperature: number; description: string }> {
  const endpoint = "https://api.weatherstack.com/current"
  const apiKey = "YOUR_ACCESS_KEY"

  try {
    const response = await axios.get(endpoint, {
      params: {
        city,
        api_key: apiKey,
      },
    })

    const data = response.data

    return {
      temperature: data.temperature,
      description: data.description,
    }
  } catch (error) {
    console.error(error)
    throw new Error("Error fetching weather data")
  }
}

export { fetchWeather }
```

The provided code snippet demonstrates the implementation of a function called fetchWeather that uses the axios library to make a request to the WeatherStack API. The function takes a city parameter as input and returns a promise that resolves with the current weather data for that city.

The function first imports the axios library and sets it as a global variable. Then, the function defines the input parameter city and sets it as a string type. The function also sets the return type as a promise that resolves with a WeatherData object.

The function then defines the endpoint URL for the WeatherStack API and sets the API key as a string. The function then creates a new ax
````


_NOTE: Cohere's output was cut off a little in the final explanation. We could tweak the prompt template further to explicitly state that nothing other than code should be output._

### Next Steps

What if the LLM doesn't know about an API you want to use, or you want it to use a particular service or third-party package it hasn't been trained on? This is where embeddings come into play: vector search can find the relevant functions or documentation to "inject" into the prompt alongside the prompt template. Exploring embeddings with vector search would expand an LLM's ability to generate the functions you expect, using code and documentation it hasn't seen before.
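As a rough sketch of that retrieval step (assuming the official `openai` npm package for embeddings; the documentation snippets, helper names, and top-k cutoff are all hypothetical), the idea might look like this:

```typescript
import OpenAI from "openai"

const openai = new OpenAI() // assumes OPENAI_API_KEY is set in the environment

// Hypothetical documentation snippets the LLM hasn't been trained on
const docs = [
  "internalWeatherClient.getCurrent(city) returns { tempC, summary }",
  "internalWeatherClient requires init({ apiKey }) before any call",
]

async function embed(texts: string[]): Promise<number[][]> {
  const res = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: texts,
  })
  return res.data.map((d) => d.embedding)
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

// Rank the docs by similarity to the filled-out prompt template,
// then prepend the best matches to the prompt before sending it to the LLM.
async function buildPromptWithContext(filledOutTemplate: string): Promise<string> {
  const [queryEmbedding, ...docEmbeddings] = await embed([filledOutTemplate, ...docs])
  const topMatches = docs
    .map((doc, i) => ({ doc, score: cosineSimilarity(queryEmbedding, docEmbeddings[i]) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 2)

  return `Relevant documentation:\n${topMatches.map((m) => m.doc).join("\n")}\n\n${filledOutTemplate}`
}
```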

Perhaps diving into how to use embeddings with vector search alongside prompt templates is a topic for a future blog post!

### Conclusion

While this example of fetching the weather is pretty simple, the possibilities that prompt templates bring to code generation are limitless. This is the beginning of a new wave of "prompt-driven code generation" that makes current code-mods look crude and caveman-like. Personally, I was a bit surprised at how consistent the output is, even across LLMs like Claude and GPT-4 (and Cohere's Generate model). By reducing ambiguity with a generic prompt template (for structure) and a "filled-out" one (for enough detail to match the expected output), your code generation with an LLM can become significantly more reliable, effective, faster, and, best of all, easier.

Here is the generic prompt template, together with the filled-out example used in this blog post, for you to try with an LLM:


````markdown
### Function Generation Prompt Template:

---

Function Purpose: [BRIEF DESCRIPTION OF WHAT THE FUNCTION SHOULD ACHIEVE]

Function Name: [DESIRED NAME OF THE FUNCTION]

Function Input Parameters: [LIST OF PARAMETERS THE FUNCTION SHOULD ACCEPT]

Expected Actions: [A DETAILED LIST OF ACTIONS THE FUNCTION SHOULD PERFORM]

Return Type: [THE DATA TYPE OR STRUCTURE THAT THE FUNCTION SHOULD RETURN]

API Endpoint: [API ENDPOINT URL]

API Key: [YOUR API ACCESS KEY]

Additional Notes: [ANY ADDITIONAL INFORMATION OR CONTEXT FOR THE FUNCTION]

---

### Generated Code (Template):

---

```typescript
const axios = require('axios'); // Assuming axios is used for making HTTP requests

async function [FUNCTION NAME]([FUNCTION INPUT PARAMETERS]): Promise<[RETURN TYPE]> {
  const endpoint = '[API ENDPOINT URL]';
  const apiKey = '[YOUR API ACCESS KEY]';

  // [EXPECTED ACTIONS IMPLEMENTATION]
}
```

### Function Generation Prompt Template for WeatherStack API:

---

**Function Purpose**: Fetch the current weather for a specified city using the WeatherStack API.

**Function Name**: `fetchWeather`

**Function Input Parameters**: `city: string`

**Expected Actions**:

1. Use the "weatherstack" API to retrieve current weather information for the specified city.
2. Parse and extract necessary information from the API response.
3. Return the parsed weather data.

**Return Type**: `{ temperature: number, description: string }`

**API Endpoint**: `https://api.weatherstack.com/current`

**API Key**: `YOUR_ACCESS_KEY`

**Additional Notes**: The function should handle potential API errors gracefully.

---

### Generated Code (Template):
````

