CyprianTinasheAarons
Building Stateful LLM Agents with LangGraph πŸ€–βœ¨

LangGraph is a powerful library for building stateful, multi-actor applications with LLMs. By modeling LLM agent workflows as graphs, LangGraph enables developers to create flexible and interactive agent and multi-agent workflows.

In this article, we'll walk through building a simple agent using LangGraph, with OpenAI and TavilySearch as the core components. Both require API keysβ€”TavilySearch offers free credits, while OpenAI requires you to add funds. πŸ’Έ

πŸŽ‰ Getting Started with LangGraph

To start, clone the repository: πŸ‘‡

```shell
git clone https://github.com/CyprianTinasheAarons/basic-bot-langgraph
```

Set up the project environment: πŸ› οΈ

```shell
cd basic-bot-langgraph
python -m venv .venv
```

Activate the virtual environment:

  • On Windows:

```shell
.venv\Scripts\activate
```

  • On macOS and Linux:

```shell
source .venv/bin/activate
```

Install dependencies: πŸ“¦

```shell
pip install -r requirements.txt
```

Set up environment variables: 🌱

  • Copy .env.example to .env.
  • Fill in your API keys in the .env file.
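
The `.env` file only needs the two keys. The variable names below are assumptions: `OPENAI_API_KEY` matches what the code reads, and `TAVILY_API_KEY` is the name Tavily's client conventionally looks up — check them against your `.env.example`:

```shell
# .env — variable names assumed from the code and Tavily's defaults
OPENAI_API_KEY=your-openai-key
TAVILY_API_KEY=your-tavily-key
```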

πŸ› οΈ Key Libraries and Configurations

Here are some of the key libraries that are required for our project:

```python
import os
from typing import Literal
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, StateGraph, MessagesState
from langgraph.prebuilt import ToolNode
from dotenv import load_dotenv
from langchain_community.tools.tavily_search import TavilySearchResults
from loguru import logger
```

πŸ” Configure Logging

We use Loguru to handle logging in this project:

```python
# Configure loguru
logger.add("bot.log", rotation="10 MB")
```

🌐 Load Environment Variables

```python
load_dotenv()
logger.info("Environment variables loaded")
```

πŸ”§ Defining Tools and Nodes

Define the web search tool: πŸ”Ž

```python
web_search = TavilySearchResults(max_results=2)
```

Define a function to perform a web search: 🌍

```python
def search(query: str):
    """Call to surf the web."""
    logger.debug(f"Performing web search for query: {query}")
    return web_search.invoke({"query": query})
```

Define and create the tool node: πŸ› οΈ

```python
tools = [search]

tool_node = ToolNode(tools)
logger.info("Tool node created")
```

πŸ€– Initializing the LLM Model

Here, we initialize an LLM using OpenAI's GPT-4o: 🧠

```python
llm = ChatOpenAI(model="gpt-4o", temperature=0, api_key=os.getenv("OPENAI_API_KEY"))
logger.info("LLM model initialized")
```

Bind the tools to the LLM: πŸ”—

```python
llm_with_tools = llm.bind_tools(tools)
```

πŸ”„ Defining Workflow Logic

Define a function that determines the conversation flow: πŸ”€

```python
def should_continue(state: MessagesState) -> Literal["tools", END]:
    messages = state['messages']
    last_message = messages[-1]
    if last_message.tool_calls:
        logger.debug("Routing to tools node")
        return "tools"
    logger.debug("Ending conversation")
    return END
```
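
To see the routing decision in isolation, here is a minimal, dependency-free sketch: `END` stands in for LangGraph's sentinel (which is the string `"__end__"`), and a stub message mimics the `tool_calls` attribute the router inspects.

```python
END = "__end__"  # LangGraph's END sentinel is this string constant

class StubMessage:
    """Mimics an AIMessage: only the tool_calls attribute matters here."""
    def __init__(self, tool_calls):
        self.tool_calls = tool_calls

def should_continue(state):
    last_message = state["messages"][-1]
    if last_message.tool_calls:
        return "tools"  # the LLM requested a tool -> route to the tool node
    return END          # no tool calls -> finish the conversation

print(should_continue({"messages": [StubMessage([{"name": "search"}])]}))  # tools
print(should_continue({"messages": [StubMessage([])]}))                    # __end__
```

The router only ever looks at the last message, which is why `call_model` returning a single new message is enough to drive the loop.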

Define the function that calls the LLM model: πŸ—£οΈ

```python
def call_model(state: MessagesState):
    messages = state['messages']
    logger.debug(f"Calling LLM with {len(messages)} messages")
    response = llm_with_tools.invoke(messages)
    return {"messages": [response]}
```

πŸ—οΈ Building the Graph

Initialize the StateGraph: βš™οΈ

```python
graph = StateGraph(MessagesState)
logger.info("StateGraph initialized")
```

Add nodes to the graph: βž•

```python
graph.add_node("weatherbot", call_model)
graph.add_node("tools", tool_node)
logger.info("Nodes added to the graph")
```

Set the entry point: πŸšͺ

```python
graph.add_edge(START, "weatherbot")
```

Add conditional edges: πŸ”—

```python
graph.add_conditional_edges("weatherbot", should_continue)
```

Add a normal edge from tools back to weatherbot, so tool results flow back into the model: 🔄

```python
graph.add_edge("tools", "weatherbot")
logger.info("Graph edges configured")
```

Compile the graph: βœ…

```python
app = graph.compile()
logger.info("Graph compiled successfully")
```

πŸ“Š Saving Graph Visualization

```python
try:
    graph_png = app.get_graph().draw_mermaid_png()
    with open("graph.png", "wb") as f:
        f.write(graph_png)
    logger.info("Graph visualization saved as graph.png")
except Exception as e:
    logger.error(f"Failed to save graph visualization: {e}")
```

πŸš€ Running the Agent

Invoke the app with a sample query, then print the final answer: 🗨️

```python
logger.info("Invoking the app with a sample query")
final_state = app.invoke(
    {"messages": [HumanMessage(content="what is the weather in sf")]},
    # thread_id only persists state across invocations if the graph
    # is compiled with a checkpointer; here it is a no-op
    config={"configurable": {"thread_id": 42}}
)
print(final_state["messages"][-1].content)
```

πŸ–₯️ Usage

Run the bot using: πŸ’»

```shell
python bot.py
```

πŸŽ‰ Conclusion

Congratulations on building your first agent using LangGraph! 🎊 This is just the beginning of what you can achieve with LangGraph, LangChain, and LLMs. Stay tuned for more projects and tutorials exploring the exciting intersection of AI, LLMs, and agent workflows. πŸš€πŸ€—

Feel free to follow me on Twitter for more updates and projects. Also, check out my website here. 🌐✨

πŸ“š Resources


Buy Me A Coffee

Top comments (0)