James Li

Advanced LangGraph: Implementing Conditional Edges and Tool-Calling Agents

In the previous articles, we discussed the limitations of LCEL and AgentExecutor, as well as the basic concepts of LangGraph. Today, we will delve into the advanced features of LangGraph, focusing on the use of conditional edges and how to implement a complete tool-calling agent.

Advanced Usage of Conditional Edges

Conditional edges are one of the most powerful features in LangGraph, allowing us to dynamically decide the execution flow based on the state. Let's explore some advanced usages.

1. Multi-condition Routing

from typing import List, Dict, Literal
from pydantic import BaseModel
from langgraph.graph import StateGraph, END

class AgentState(BaseModel):
    messages: List[Dict[str, str]] = []
    current_input: str = ""
    tools_output: Dict[str, str] = {}
    status: str = "RUNNING"
    error_count: int = 0

def route_by_status(state: AgentState) -> Literal["process", "retry", "error", "end"]:
    """Complex routing logic"""
    if state.status == "SUCCESS":
        return "end"
    elif state.status == "ERROR":
        if state.error_count >= 3:
            return "error"
        return "retry"
    elif state.status == "NEED_TOOL":
        return "process"
    return "process"

# Build the graph structure
workflow = StateGraph(AgentState)

# Add conditional edges: after "check_status" runs, route_by_status
# decides which mapped node executes next. The referenced nodes must be
# registered with add_node before compiling; a sketch follows below.
workflow.add_conditional_edges(
    "check_status",
    route_by_status,
    {
        "process": "execute_tool",
        "retry": "retry_handler",
        "error": "error_handler",
        "end": END
    }
)
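For completeness, here is a minimal sketch of the node registrations the snippet above assumes. The node bodies are placeholders (the original example does not define them); real implementations would update state.status based on actual work:

def check_status(state: AgentState) -> AgentState:
    # Placeholder: real logic would inspect results and set state.status
    return state

def execute_tool(state: AgentState) -> AgentState:
    state.status = "SUCCESS"  # placeholder
    return state

def retry_handler(state: AgentState) -> AgentState:
    state.error_count += 1
    state.status = "RUNNING"
    return state

def error_handler(state: AgentState) -> AgentState:
    state.status = "FAILED"
    return state

workflow.add_node("check_status", check_status)
workflow.add_node("execute_tool", execute_tool)
workflow.add_node("retry_handler", retry_handler)
workflow.add_node("error_handler", error_handler)

# Close the loops so the graph terminates via the "end" route
workflow.set_entry_point("check_status")
workflow.add_edge("execute_tool", "check_status")
workflow.add_edge("retry_handler", "check_status")
workflow.add_edge("error_handler", END)

app = workflow.compile()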

2. Parallel Execution

LangGraph supports running work concurrently, which is particularly useful for handling complex tasks. The example below executes several tools in parallel inside a single node with asyncio; graph-level fan-out across nodes is sketched afterwards:

import asyncio

async def parallel_tools_execution(state: AgentState) -> AgentState:
    """Execute multiple tools in parallel inside a single node."""
    # identify_required_tools is assumed to be defined elsewhere;
    # it maps the input to the list of tools it requires
    tools = identify_required_tools(state.current_input)

    async def execute_tool(tool):
        result = await tool.ainvoke(state.current_input)
        return {tool.name: result}

    # Run all tool calls concurrently
    results = await asyncio.gather(*[execute_tool(tool) for tool in tools])

    # Merge per-tool results into a single dict
    tools_output = {}
    for result in results:
        tools_output.update(result)

    return AgentState(
        messages=state.messages,
        current_input=state.current_input,
        tools_output=tools_output,
        status="SUCCESS"
    )
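The snippet above runs tools concurrently inside one node. LangGraph can also fan out at the graph level: when several edges leave the same node, the target nodes execute in the same step, and an Annotated reducer on a state key merges their writes. A minimal sketch under those assumptions (node names and payloads are illustrative):

import operator
from typing import Annotated, TypedDict
from langgraph.graph import StateGraph, START, END

class FanOutState(TypedDict):
    # operator.add merges the lists written by parallel branches
    results: Annotated[list, operator.add]

def search_branch(state: FanOutState):
    return {"results": ["search result"]}

def math_branch(state: FanOutState):
    return {"results": ["math result"]}

graph = StateGraph(FanOutState)
graph.add_node("search", search_branch)
graph.add_node("math", math_branch)

# Two edges from START make both branches run in the same step
graph.add_edge(START, "search")
graph.add_edge(START, "math")
graph.add_edge("search", END)
graph.add_edge("math", END)

print(graph.compile().invoke({"results": []}))
# {'results': ['search result', 'math result']}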

Implementing a Complete Tool-Calling Agent

Let's demonstrate the powerful capabilities of LangGraph by implementing a complete tool-calling agent. This agent can:

  • Understand user input
  • Select the appropriate tool
  • Execute tool calls
  • Generate the final response

1. Define State and Tools

from typing import Dict, List, Optional
from pydantic import BaseModel
from langchain_community.tools import WikipediaQueryRun
from langchain_community.utilities import WikipediaAPIWrapper
from langchain_core.tools import BaseTool, tool
from langchain_openai import ChatOpenAI

# LangChain does not ship a ready-made CalculatorTool, so we define a
# simple one locally; eval is for demonstration only
@tool
def calculator(expression: str) -> str:
    """Evaluate a mathematical expression."""
    return str(eval(expression))

class Tool(BaseModel):
    name: str
    description: str
    func: BaseTool

class AgentState(BaseModel):
    messages: List[Dict[str, str]] = []
    current_input: str = ""
    thought: str = ""
    selected_tool: Optional[str] = None
    tool_input: str = ""
    tool_output: str = ""
    final_answer: str = ""
    status: str = "STARTING"

# Define available tools
tools = [
    Tool(
        name="calculator",
        description="Used for performing mathematical calculations",
        func=calculator
    ),
    Tool(
        name="wikipedia",
        description="Used for querying Wikipedia information",
        func=WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
    )
]
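Before wiring the agent together, the tool wrappers can be sanity-checked directly. This is a quick illustrative snippet, not part of the original walkthrough:

import asyncio

async def demo():
    calc = next(t for t in tools if t.name == "calculator")
    # Tools with a single string argument accept the raw string as input
    print(await calc.func.ainvoke("21 * 2"))  # -> 42

asyncio.run(demo())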

2. Implement Core Nodes

import json

async def think(state: AgentState) -> AgentState:
    """Think about the next action"""
    prompt = f"""
    Based on user input and current conversation history, think about the next action.
    User input: {state.current_input}
    Available tools: {[t.name + ': ' + t.description for t in tools]}
    Decide:
    1. Whether a tool is needed
    2. If needed, which tool to use
    3. What parameters to call the tool with
    Return in JSON format: {{"thought": "thought process", "need_tool": true/false, "tool": "tool name", "tool_input": "parameters"}}
    """
    llm = ChatOpenAI(temperature=0)
    response = await llm.ainvoke(prompt)
    # ainvoke returns an AIMessage; the JSON payload is in .content
    result = json.loads(response.content)
    # model_copy(update=...) avoids passing duplicate keyword arguments
    return state.model_copy(update={
        "thought": result["thought"],
        "selected_tool": result.get("tool"),
        "tool_input": result.get("tool_input", ""),
        "status": "NEED_TOOL" if result["need_tool"] else "GENERATE_RESPONSE"
    })

async def execute_tool(state: AgentState) -> AgentState:
    """Execute tool call"""
    tool = next((t for t in tools if t.name == state.selected_tool), None)
    if not tool:
        return state.model_copy(update={
            "status": "ERROR",
            "thought": "Selected tool not found"
        })
    try:
        result = await tool.func.ainvoke(state.tool_input)
        return state.model_copy(update={
            "tool_output": str(result),
            "status": "GENERATE_RESPONSE"
        })
    except Exception as e:
        return state.model_copy(update={
            "status": "ERROR",
            "thought": f"Tool execution failed: {str(e)}"
        })

async def generate_response(state: AgentState) -> AgentState:
    """Generate the final response"""
    prompt = f"""
    Generate a response to the user based on the following information:
    User input: {state.current_input}
    Thought process: {state.thought}
    Tool output: {state.tool_output}
    Please generate a clear and helpful response.
    """
    llm = ChatOpenAI(temperature=0.7)
    response = await llm.ainvoke(prompt)
    return state.model_copy(update={
        "final_answer": response.content,
        "status": "SUCCESS"
    })

3. Build the Complete Workflow

# Create the graph structure
workflow = StateGraph(AgentState)

# Add nodes
workflow.add_node("think", think)
workflow.add_node("execute_tool", execute_tool)
workflow.add_node("generate_response", generate_response)

# add_edge has no condition parameter; status-based routing is done
# with a routing function and add_conditional_edges
def route_after_think(state: AgentState) -> str:
    return "execute_tool" if state.status == "NEED_TOOL" else "generate_response"

workflow.set_entry_point("think")
workflow.add_conditional_edges("think", route_after_think)
workflow.add_edge("execute_tool", "generate_response")
workflow.add_edge("generate_response", END)

# Compile into a runnable application
app = workflow.compile()
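With the graph compiled, the agent can be invoked end to end. A minimal sketch, assuming a valid OPENAI_API_KEY is set and the model returns the JSON the think prompt asks for (depending on your LangGraph version, the input may need to be a plain dict instead of an AgentState instance):

import asyncio

async def main():
    # The compiled graph returns the final state values as a dict
    result = await app.ainvoke(AgentState(current_input="What is 25 * 17?"))
    print(result["final_answer"])

asyncio.run(main())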

Summary

This example demonstrates how LangGraph simplifies the construction of complex AI workflows: state is explicit, conditional edges make the control flow visible, and each node stays a small, testable function. That structure makes it easy to customize and extend an agent, whether by adding tools, retry handling, or parallel branches, making LangGraph a valuable tool for AI development.
