Santhosh Vijayabaskar
Building Your First AI Agent with LangChain and Open APIs

We've all been hearing about AI agents, and many of us aren't sure where to begin đŸ€”. No worries, you're in the right place! In this article, I'll introduce you to the world of AI agents and walk you step-by-step through building your first one with LangChain.

LangChain is an incredibly useful tool for connecting AI models to external APIs. In this guided tutorial, we will build our first agent and connect it to live weather data from the OpenWeather API đŸŒŠïž to make it more interactive and practical.

By the time we're done, you will have your own AI agent đŸ€–, which can chat, pull in live data, and do so much more!


đŸ€– What is an AI Agent? Let’s Break it Down

An AI agent is like a supercharged virtual assistant that's always ready to help. Whether it's answering your questions, doing small tasks for you, or even making decisions, an AI agent is like having a digital helper at your disposal. It can do everything from fetching data to creating content or even having a conversation with you. Pretty cool, right?😎

AI agents aren’t just static—they’re smart, dynamic, and capable of working on their own, thanks to the power of large language models (LLMs) like GPT-3 or GPT-4.

đŸ§© What is LangChain? A Developer-Friendly Powerhouse

LangChain is a developer-friendly framework that connects AI models (like GPT-3 or GPT-4) with external tools and data. It helps create structured workflows where the AI agent can talk to APIs or databases to fetch information.

Why LangChain?

  ‱ Easy to use: It simplifies integrating large language models with other tools (Jira, Salesforce, calendars, databases, etc.).
  • Scalable: You can build anything from a basic chatbot to a complex multi-agent system.
  • Community-driven: With a large, active community, LangChain provides a wealth of documentation, examples, and support.

In our case, we’re building a simple agent that can answer questions, and to make things cooler, it’ll retrieve real-time data like weather information. Let's dive in!
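Before we bring in a real LLM, here's a toy sketch in plain Python (no LangChain, no API keys) of the core pattern an agent framework formalizes: a loop that routes a request to the right tool and folds the tool's result back into the answer. The tool, its canned output, and the keyword routing are all made up for illustration.

```python
# A toy "agent": routes a question to a registered tool and builds a reply.
# This only illustrates the tool-routing pattern that LangChain formalizes.

def weather_tool(city):
    # Stand-in for a real API call
    return f"22°C and sunny in {city}"

TOOLS = {"weather": weather_tool}

def toy_agent(question):
    # Extremely naive "routing": pick a tool by keyword
    if "weather" in question.lower():
        city = question.rstrip("?").split()[-1]
        observation = TOOLS["weather"](city)
        return f"Based on the weather tool: {observation}."
    return "I don't have a tool for that yet."

print(toy_agent("What is the weather in Paris?"))
```

A real agent replaces the keyword check with an LLM deciding which tool to call, but the loop of route, observe, respond is the same.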

Step 1: Setting Up Your Environment

In this section, let's set up our development environment.

1.1 Install Python (if you haven’t already)

Make sure you have Python installed. You can download it from python.org. Once installed, verify it by running:

python --version

1.2 Install LangChain
Now let's install LangChain via pip. For those who are new to Python, pip is Python's package manager. Open your terminal and run:

pip install langchain

1.3 Install OpenAI
We’ll also be using the OpenAI API to interact with GPT-3, so you’ll need to install the OpenAI Python client:

pip install openai

1.4 Set Up a Virtual Environment (Optional)
It’s a good practice to work in a virtual environment to keep your project dependencies separate:

python -m venv langchain-env
source langchain-env/bin/activate   # For Mac/Linux

# or for Windows
langchain-env\Scripts\activate

Step 2: Building Your First AI Agent

Now comes the fun part—let’s build our first AI agent! In this step, we’ll create an agent that can have a simple conversation using OpenAI’s language model. For this, you’ll need an API key from OpenAI, which you can get by signing up at OpenAI.

Here’s a small snippet to create your first agent:

from langchain.llms import OpenAI

# Initialize the model (LangChain's OpenAI wrapper takes openai_api_key)
llm = OpenAI(openai_api_key="your-openai-api-key")

# Define a prompt for the agent
prompt = "What is the weather like in New York today?"

# Get the response from the AI agent
response = llm(prompt)
print(response)

In the above code, we’re setting up a very basic agent that takes a prompt (a question about the weather) and returns a response from GPT-3. At this point, the agent doesn’t actually retrieve live weather data—it’s just generating a response based on the language model’s knowledge.
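One practical note before we go further: rather than hardcoding your API key in the script, it's safer to read it from an environment variable. Here's a small sketch of that; the variable name OPENAI_API_KEY is the conventional one, and the helper name is my own.

```python
import os

def load_api_key(name="OPENAI_API_KEY"):
    """Fetch an API key from the environment, failing fast if it's missing."""
    key = os.getenv(name)
    if not key:
        raise RuntimeError(f"Set the {name} environment variable first")
    return key

# Usage (sketch): llm = OpenAI(openai_api_key=load_api_key())
```

This keeps your key out of source control and lets you swap keys between environments without touching code.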

Step 3: Connecting to an Open API (Weather API)

Now let’s step things up by integrating real-time data into our agent. We’re going to connect it to a weather API, which will allow the agent to retrieve live weather information đŸŒŠïž.

Here’s how you do it.

  1. Get an API Key from OpenWeather
    Head over to OpenWeather and sign up for a free API key.

  2. Make the API Request
    In this next part, we’ll modify our agent so that it fetches live weather data from OpenWeather’s API, and then outputs it as part of the conversation.

import requests
from langchain.llms import OpenAI

def get_weather(city):
    api_key = "your-openweather-api-key"
    url = f"https://api.openweathermap.org/data/2.5/weather?q={city}&appid={api_key}&units=metric"
    response = requests.get(url, timeout=10).json()

    # Extract relevant data
    temp = response['main']['temp']
    description = response['weather'][0]['description']
    return f"The current temperature in {city} is {temp}°C with {description}."

# Now use the LangChain LLM model to integrate this data
llm = OpenAI(openai_api_key="your-openai-api-key")
city = "New York"
weather_info = get_weather(city)
prompt = f"Tell me about the weather in {city}: {weather_info}"

response = llm(prompt)
print(response)

In the above code, the get_weather function makes a request to the OpenWeather API and extracts data like temperature and weather description.

The response is then integrated into the AI agent’s output, making it look like the agent is providing up-to-date weather information.
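One thing the snippet above glosses over: it assumes the API call always succeeds. OpenWeather reports failures (unknown city, bad key) through a `cod` field in the JSON rather than always through HTTP errors, so it's worth validating the payload before indexing into it. Here's a sketch of a pure parsing helper you could slot into get_weather; the example payloads below follow the documented response shape but are made up.

```python
def parse_weather(payload, city):
    """Turn an OpenWeather JSON payload into a sentence, or explain the failure.

    OpenWeather signals success with cod == 200; error payloads carry a
    'message' field instead of 'main'/'weather' data.
    """
    if str(payload.get("cod")) != "200":
        return f"Could not fetch weather for {city}: {payload.get('message', 'unknown error')}"
    temp = payload["main"]["temp"]
    description = payload["weather"][0]["description"]
    return f"The current temperature in {city} is {temp}°C with {description}."

# Illustrative payloads (shapes based on OpenWeather's documented responses)
ok = {"cod": 200, "main": {"temp": 21.5}, "weather": [{"description": "clear sky"}]}
bad = {"cod": "404", "message": "city not found"}
print(parse_weather(ok, "New York"))
print(parse_weather(bad, "Atlantis"))
```

With this in place, a typo in the city name produces a readable message instead of a KeyError.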

Step 4: Deploying Your AI Agent as an API

Now that our agent can chat and retrieve live data, let’s make it accessible to others by turning it into an API. This way, anyone can interact with the agent through HTTP requests.

Using FastAPI for Deployment

FastAPI is a powerful web framework that makes it easy to create APIs in Python. Here’s how we can deploy our agent using FastAPI:

from fastapi import FastAPI
from langchain.llms import OpenAI
import requests

app = FastAPI()

def get_weather(city):
    api_key = "your-openweather-api-key"
    url = f"https://api.openweathermap.org/data/2.5/weather?q={city}&appid={api_key}&units=metric"
    response = requests.get(url, timeout=10).json()
    temp = response['main']['temp']
    description = response['weather'][0]['description']
    return f"The weather in {city} is {temp}°C with {description}."

llm = OpenAI(openai_api_key="your-openai-api-key")

@app.get("/ask")
def ask_question(city: str):
    weather = get_weather(city)
    prompt = f"Tell me about the weather in {city}: {weather}"
    response = llm(prompt)
    return {"response": response}
Save this as main.py, install the server dependencies with pip install fastapi uvicorn, and start it with uvicorn main:app --reload. You can then interact with the agent by sending HTTP requests to http://localhost:8000/ask?city=New%20York.

Conclusion: What’s Next?

Congratulations!🎉 You’ve just built your first AI agent from scratch and connected it to an open API to fetch real-time data. You’ve also deployed your agent as an API that others can interact with. From here, the possibilities are endless—you can integrate more APIs, build multi-agent systems, or deploy it on cloud platforms for broader use.🚀

If you're ready for more đŸ”„ and want to explore advanced features of LangChain, like memory management for long conversations, or dive into multi-agent systems that handle more complex tasks, let me know in the comments below.

Have fun experimenting, and feel free to drop your thoughts in the comments below!💬

🌐 You can also learn more about my work and projects at https://santhoshvijayabaskar.com
