Building a Local AI Code Reviewer with ClientAI and Ollama - Part 2

In Part 1, we built the core analysis tools for our code reviewer. Now we'll create an AI assistant that can use these tools effectively. We'll go through each component step by step, explaining how everything works together.

For ClientAI's docs, see here; for the GitHub repo, see here.


Registering Our Tools with ClientAI

First, we need to make our tools available to the AI system. Here's how we register them:

def create_review_tools() -> List[ToolConfig]:
    """Create the tool configurations for code review."""
    return [
        ToolConfig(
            tool=analyze_python_code,
            name="code_analyzer",
            description=(
                "Analyze Python code structure and complexity. "
                "Expects a 'code' parameter with the Python code as a string."
            ),
            scopes=["observe"],
        ),
        ToolConfig(
            tool=check_style_issues,
            name="style_checker",
            description=(
                "Check Python code style issues. "
                "Expects a 'code' parameter with the Python code as a string."
            ),
            scopes=["observe"],
        ),
        ToolConfig(
            tool=generate_docstring,
            name="docstring_generator",
            description=(
                "Generate docstring suggestions for Python code. "
                "Expects a 'code' parameter with the Python code as a string."
            ),
            scopes=["act"],
        ),
    ]

Let's break down what's happening here:

  1. Each tool is wrapped in a ToolConfig object that tells ClientAI:

    • tool: The actual function to call
    • name: A unique identifier for the tool
    • description: What the tool does and what parameters it expects
    • scopes: When the tool can be used ("observe" for analysis, "act" for generation)
  2. We classify our tools into two categories:

    • "observe" tools (code_analyzer and style_checker) gather information
    • "act" tools (docstring_generator) produce new content

Building the AI Assistant Class

Now let's create our AI assistant. We'll design it to work in steps, mimicking how a human code reviewer would think:
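Before the class itself, here are the imports the script relies on. Treat the exact module paths as an assumption on my part; they may differ between ClientAI versions, so verify them against the docs linked above:

import logging
from typing import List

from clientai import ClientAI
from clientai.agent import Agent, ToolConfig, observe, think
from clientai.ollama import OllamaManager, OllamaServerConfig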

class CodeReviewAssistant(Agent):
    """An agent that performs comprehensive Python code review."""

    @observe(
        name="analyze_structure",
        description="Analyze code structure and style",
        stream=True,
    )
    def analyze_structure(self, code: str) -> str:
        """Analyze the code structure, complexity, and style issues."""
        self.context.state["code_to_analyze"] = code
        return """
        Please analyze this Python code structure and style:

        The code to analyze has been provided in the context as 'code_to_analyze'.
        Use the code_analyzer and style_checker tools to evaluate:
        1. Code complexity and structure metrics
        2. Style compliance issues
        3. Function and class organization
        4. Import usage patterns
        """

This first method is crucial:

  • The @observe decorator marks this as an observation step
  • stream=True enables real-time output
  • We store the code in the context to access it in later steps
  • The return string is a prompt that guides the AI in using our tools

Next, we add the improvement suggestion step:

    @think(
        name="suggest_improvements",
        description="Suggest code improvements based on analysis",
        stream=True,
    )
    def suggest_improvements(self, analysis_result: str) -> str:
        """Generate improvement suggestions based on the analysis results."""
        # Retrieve the code stored by analyze_structure
        current_code = self.context.state.get("code_to_analyze", "")
        return f"""
        Based on the code analysis of:

        ```python
        {current_code}
        ```

        And the analysis results:
        {analysis_result}

        Please suggest specific improvements for:
        1. Reducing complexity where identified
        2. Fixing style issues
        3. Improving code organization
        4. Optimizing import usage
        5. Enhancing readability
        6. Enhancing explicitness
        """

This method:

  • Uses @think to indicate this is a reasoning step
  • Takes the analysis results as input
  • Retrieves the original code from context
  • Creates a structured prompt for improvement suggestions

The Command-Line Interface

Now let's create a user-friendly interface. We'll break this down into parts:

def main():
    # 1. Set up logging
    logger = logging.getLogger(__name__)

    # 2. Configure Ollama server
    config = OllamaServerConfig(
        host="127.0.0.1",  # Local machine
        port=11434,        # Default Ollama port
        gpu_layers=35,     # Adjust based on your GPU
        cpu_threads=8,     # Adjust based on your CPU
    )

This first part sets up error logging and configures the Ollama server with sensible defaults, letting you tune GPU and CPU usage for your hardware.
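One note on the logging line: getLogger(__name__) alone doesn't configure any handlers, so the logger.error() calls in the loop below will fall back to Python's bare last-resort handler. If you want timestamps and severity levels in the output, add a basic configuration once at startup (a minimal example):

import logging

# Configure root logging once, before any logger calls, so that
# logger.error() output includes a timestamp and severity level.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)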

Next, we create the AI client and assistant:

    # Use context manager for Ollama server
    with OllamaManager(config) as manager:
        # Initialize ClientAI with Ollama
        client = ClientAI(
            "ollama", 
            host=f"http://{config.host}:{config.port}"
        )

        # Create code review assistant with tools
        assistant = CodeReviewAssistant(
            client=client,
            default_model="llama3",
            tools=create_review_tools(),
            tool_confidence=0.8,  # How confident the AI should be before using tools
            max_tools_per_step=2, # Maximum tools to use per step
        )

Key points about this setup:

  • The context manager (with) ensures proper server cleanup
  • We connect to the local Ollama instance
  • The assistant is configured with:
    • Our custom tools
    • A confidence threshold for tool usage
    • A limit on tools per step to prevent overuse
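With the assistant constructed, you can smoke-test it before wiring up the interactive loop. Still inside the with block, run a single review on a throwaway snippet (the sample code here is arbitrary):

        # Quick sanity check: one review end-to-end, non-streaming,
        # so the result comes back as a plain string.
        sample = "def add(a,b): return a+b"
        print(assistant.run(sample, stream=False))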

Finally, we create the interactive loop:

        print("Code Review Assistant (Local AI)")
        print("Enter Python code to review, or 'quit' to exit.")
        print("End input with '###' on a new line.")

        while True:
            try:
                print("\n" + "=" * 50 + "\n")
                print("Enter code:")

                # Collect code input
                code_lines = []
                while True:
                    line = input()
                    if line == "###":
                        break
                    code_lines.append(line)

                code = "\n".join(code_lines)
                if code.lower() == "quit":
                    break

                # Process the code
                result = assistant.run(code, stream=True)

                # Handle both streaming and non-streaming results
                if isinstance(result, str):
                    print(result)
                else:
                    for chunk in result:
                        print(chunk, end="", flush=True)
                print("\n")

            except Exception as e:
                logger.error(f"Unexpected error: {e}")
                print("\nAn unexpected error occurred. Please try again.")

This interface:

  • Collects multiline code input until seeing "###"
  • Handles both streaming and non-streaming output
  • Provides clean error handling
  • Allows easy exit with "quit"
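For example, a round trip through the interface looks like this (the review itself is streamed from the model, so its content will vary):

Code Review Assistant (Local AI)
Enter Python code to review, or 'quit' to exit.
End input with '###' on a new line.

==================================================

Enter code:
def add(a,b): return a+b
###
[streamed review output appears here]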

Finally, let's make it runnable as a script:

if __name__ == "__main__":
    main()

Using the Assistant

Let's see how the assistant handles real code. Run it with:

python code_analyzer.py

Here's an example with issues to find:

def calculate_total(values,tax_rate):
    Total = 0
    for Val in values:
        if Val > 0:
            if tax_rate > 0:
                Total += Val + (Val * tax_rate)
            else:
                Total += Val
    return Total

The assistant will analyze multiple aspects:

  • Structural Issues (nested if statements increasing complexity, missing type hints, no input validation)
  • Style Problems (inconsistent variable naming, missing spaces after commas, missing docstring)

Extension Ideas

Here are some ways to enhance the assistant:

  • Additional Analysis Tools
  • Enhanced Style Checking
  • Documentation Improvements
  • Auto-fixing Features

Each of these can be added the same way: create a new tool function, wrap its output in appropriate JSON formatting, add it to create_review_tools(), and update the assistant's prompts to make use of the new tool.
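For instance, a hypothetical length-checker tool could follow the same pattern (the function name, threshold, and registration below are illustrative, not part of this article's codebase):

import ast
import json

def count_long_functions(code: str) -> str:
    """Flag functions longer than 20 lines (illustrative threshold)."""
    tree = ast.parse(code)
    long_funcs = [
        {"name": node.name, "lines": node.end_lineno - node.lineno + 1}
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef)
        and node.end_lineno - node.lineno + 1 > 20
    ]
    return json.dumps({"long_functions": long_funcs})

Then register it in create_review_tools() as an "observe" tool:

ToolConfig(
    tool=count_long_functions,
    name="length_checker",
    description=(
        "Flag overly long Python functions. "
        "Expects a 'code' parameter with the Python code as a string."
    ),
    scopes=["observe"],
),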

To see more about ClientAI, go to the docs.

Connect with Me

If you have any questions, want to discuss tech-related topics, or want to share your feedback, feel free to reach out to me on social media.
