Alisson Podgurski

Crafting a Powerful API Performance CLI: Approach with .NET Core and System.CommandLine

Hey there, fellow developers! Ever felt the need to quickly test the performance of your APIs, but wished you had a nifty little tool right at your fingertips? Well, you're in luck! Today, we’re going to build a Command-Line Interface (CLI) tool using .NET Core that does just that. Not only will it help you test your API performance, but it’ll also let you compare different test runs to see if your recent changes made things better or, well, not so much.

Step 1: Setting Up the Project

First things first, let’s get our environment ready.

  1. Create a New Project

Fire up your terminal and create a new .NET Core console project. This will be the foundation of our CLI tool.

dotnet new console -n ApiPerformanceTool
cd ApiPerformanceTool

  2. Add the Necessary Dependencies

We’ll need a couple of packages to make our CLI smooth and user-friendly:

dotnet add package System.CommandLine
dotnet add package System.Net.Http
dotnet add package System.CommandLine.NamingConventionBinder

These packages will help us handle command-line arguments and make HTTP requests to your API.

Step 2: Crafting the Core Functionality

Now, let’s get to the good stuff—building the core features of our CLI tool. We’re going to break down our functionality into small, single-responsibility classes (thank you, SOLID principles!).

Setting Up the Root Command
We’ll start by setting up the root command for our CLI. This will be the entry point where we define our test and compare commands.

using System.CommandLine;
using System.CommandLine.Invocation;
using System.Threading.Tasks;

namespace ApiPerformanceTool
{
    class Program
    {
        static async Task<int> Main(string[] args)
        {
            // The root command aggregates our two subcommands: "test" and "compare".
            var rootCommand = new RootCommand
            {
                CommandFactory.CreateTestCommand(),
                CommandFactory.CreateCompareCommand()
            };

            // Parse the arguments, dispatch to the matching command handler,
            // and return its exit code.
            return await rootCommand.InvokeAsync(args);
        }
    }
}

Here, we’re using a CommandFactory to keep things organized. It’s a neat trick to separate command creation into its own class.

Creating the Test Command
Let’s dive into creating the test command. This command will hit your API with a bunch of requests and measure how long it takes to respond.

using System;
using System.CommandLine;
using System.CommandLine.NamingConventionBinder;
using System.Net.Http;

public static class CommandFactory
{
    public static Command CreateTestCommand()
    {
        // "test" takes the target URL plus options for concurrency and an optional report file.
        var command = new Command("test", "Tests the performance of an API endpoint")
        {
            new Argument<string>("url", "The URL of the API endpoint to test"),
            new Option<int>("--concurrent", () => 10, "Number of concurrent requests"),
            new Option<string>("--output", "File to save the performance report")
        };

        // CommandHandler.Create binds the argument/option values to the handler parameters by name.
        command.Handler = CommandHandler.Create<string, int, string>(async (url, concurrent, output) =>
        {
            var tester = new ApiTester(new HttpClient());
            var result = await tester.TestPerformanceAsync(url, concurrent);

            Console.WriteLine($"Average response time: {result.AverageResponseTime} ms");
            Console.WriteLine($"95th percentile: {result.Percentile95} ms");

            // Only write a report when an output path was supplied.
            if (!string.IsNullOrEmpty(output))
            {
                var reportWriter = new ReportWriter();
                await reportWriter.SaveAsync(output, result);
            }
        });

        return command;
    }
}


What’s happening here?

  • We’re setting up the test command to accept the API URL, the number of concurrent requests, and an optional output file path.
  • The handler then uses an ApiTester to hit the API and measure response times.

Building the API Tester

The ApiTester class is where the magic happens—it sends the requests and calculates the average response time and the 95th percentile.

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

public class ApiTester
{
    private readonly HttpClient _httpClient;

    public ApiTester(HttpClient httpClient)
    {
        _httpClient = httpClient ?? throw new ArgumentNullException(nameof(httpClient));
    }

    public async Task<TestResult> TestPerformanceAsync(string url, int concurrent)
    {
        // Fire all requests at once and wait for every timing to come back.
        var tasks = new List<Task<TimeSpan>>();

        for (int i = 0; i < concurrent; i++)
        {
            tasks.Add(SendRequestAsync(url));
        }

        var results = await Task.WhenAll(tasks);

        // Sort the timings so the 95th-percentile entry can be picked by index.
        var ordered = results.OrderBy(ts => ts.TotalMilliseconds).ToArray();

        return new TestResult
        {
            AverageResponseTime = results.Average(ts => ts.TotalMilliseconds),
            Percentile95 = ordered[(int)(results.Length * 0.95)].TotalMilliseconds
        };
    }

    private async Task<TimeSpan> SendRequestAsync(string url)
    {
        // Time a single GET request from send until the response arrives.
        var stopwatch = Stopwatch.StartNew();
        using var response = await _httpClient.GetAsync(url);
        stopwatch.Stop();
        return stopwatch.Elapsed;
    }
}

Let’s break it down:

  • The ApiTester fires off the requested number of concurrent requests in parallel and times each one.
  • It then calculates the average response time and the 95th percentile, two key metrics for performance testing (a minimal sketch of the TestResult type that holds them follows below).
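
One detail the snippets take for granted is the TestResult type that carries these metrics. It isn’t shown in the listings above, so here’s a minimal sketch of what it could look like, based purely on the two properties the article uses:

// Minimal sketch of the result type used by ApiTester, ReportWriter and ReportLoader.
// Only the two properties referenced in this article are assumed here.
public class TestResult
{
    public double AverageResponseTime { get; set; } // milliseconds
    public double Percentile95 { get; set; }        // milliseconds
}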

Saving the Report

Once we’ve gathered the results, we might want to save them for later. The ReportWriter class handles that.

using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

public class ReportWriter
{
    public async Task SaveAsync(string filePath, TestResult result)
    {
        // Persist the metrics as simple labelled lines; ReportLoader parses this same format back.
        var lines = new List<string>
        {
            $"Average time: {result.AverageResponseTime} ms",
            $"95th percentile: {result.Percentile95} ms"
        };
        await File.WriteAllLinesAsync(filePath, lines);
    }
}

Simple and to the point, the ReportWriter saves our test results to a file.
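
For reference, a saved report is just two labelled plain-text lines, exactly as produced by the interpolated strings above. With made-up numbers, report1.txt would look something like this:

Average time: 142.7 ms
95th percentile: 310.2 ms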

Creating the Compare Command
Now let’s tackle the compare command, which also lives in CommandFactory. This is where we compare the results of two different test runs.

public static Command CreateCompareCommand()
{
    var command = new Command("compare", "Compares two performance reports")
    {
        new Argument<string>("file1", "First report file"),
        new Argument<string>("file2", "Second report file")
    };

    command.Handler = CommandHandler.Create<string, string>((file1, file2) =>
    {
        var reportLoader = new ReportLoader();
        var report1 = reportLoader.Load(file1);
        var report2 = reportLoader.Load(file2);

        var comparer = new ReportComparer();
        var comparisonResult = comparer.Compare(report1, report2);

        Console.WriteLine(comparisonResult);
    });

    return command;
}


Here’s what’s happening:

  • The compare command reads two report files and uses a ReportComparer to see how they stack up against each other.

Loading and Comparing Reports

Let’s see how we load and compare these reports.

using System.IO;
using System.Linq;

public class ReportLoader
{
    public TestResult Load(string filePath)
    {
        var lines = File.ReadAllLines(filePath);

        // Parse the labelled lines written by ReportWriter; this assumes the report
        // was produced on a machine using the same number format (culture).
        double average = double.Parse(lines.First(line => line.StartsWith("Average time:")).Split(':')[1].Trim().Replace("ms", ""));
        double p95 = double.Parse(lines.First(line => line.StartsWith("95th percentile:")).Split(':')[1].Trim().Replace("ms", ""));

        return new TestResult
        {
            AverageResponseTime = average,
            Percentile95 = p95
        };
    }
}

public class ReportComparer
{
    public string Compare(TestResult report1, TestResult report2)
    {
        // Positive differences mean the second run was slower; negative means it was faster.
        var averageDiff = report2.AverageResponseTime - report1.AverageResponseTime;
        var p95Diff = report2.Percentile95 - report1.Percentile95;

        return $"Comparison:\n" +
               $"Average time difference: {averageDiff} ms\n" +
               $"{(averageDiff < 0 ? "The second test was faster on average." : averageDiff > 0 ? "The second test was slower on average." : "The average response times were the same.")}\n" +
               $"95th percentile difference: {p95Diff} ms\n" +
               $"{(p95Diff < 0 ? "The second test had a better (lower) 95th percentile." : p95Diff > 0 ? "The second test had a worse (higher) 95th percentile." : "The 95th percentile times were the same.")}";
    }
}


In summary:

  • ReportLoader pulls the performance metrics from a file.
  • ReportComparer compares the metrics and provides a human-readable summary of how the two reports differ.

Step 3: Running the Tool

Now that our CLI tool is complete, it’s time to put it to the test!

  1. Testing API Performance

To run a performance test on your API, use the test command:

dotnet run -- test https://api.example.com/endpoint --concurrent 100 --output report1.txt

This sends 100 concurrent requests to https://api.example.com/endpoint and saves the results to report1.txt.
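
After you ship a change to the API, run the same test again and write the results to a second file so you have something to compare against:

dotnet run -- test https://api.example.com/endpoint --concurrent 100 --output report2.txt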

  2. Comparing Two Reports

After running tests at different times, compare the results using the compare command:

dotnet run -- compare report1.txt report2.txt


This command will tell you whether your changes made the API faster or slower, and by how much.
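
The output follows the template built in ReportComparer.Compare. With purely illustrative numbers (negative differences mean the second run did better), it looks roughly like this:

Comparison:
Average time difference: -18.4 ms
The second test was faster on average.
95th percentile difference: -42.1 ms
The second test had a better (lower) 95th percentile.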

Conclusion

And there you have it! A clean, SOLID, and fully functional CLI tool for analyzing API performance. By breaking down the functionality into small, focused classes, we’ve made our tool not only easy to use but also easy to maintain and extend.

Feel free to add more features, like supporting different HTTP methods or output formats. The sky’s the limit!
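
As a taste of that, here’s a rough, hypothetical sketch of how ApiTester could support other HTTP verbs. It isn’t part of the code above: you’d also need a new --method option on the test command (for example, new Option<string>("--method", () => "GET", "HTTP method to use")) and a matching handler parameter.

// Hypothetical tweak to ApiTester: build the request from a caller-supplied method
// instead of always issuing GET requests.
private async Task<TimeSpan> SendRequestAsync(string url, string method)
{
    var stopwatch = Stopwatch.StartNew();
    using var request = new HttpRequestMessage(new HttpMethod(method), url);
    using var response = await _httpClient.SendAsync(request);
    stopwatch.Stop();
    return stopwatch.Elapsed;
}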

So go ahead, give it a whirl, and keep those APIs running fast and smooth. Happy coding! 🎉

If you want to access the example of this code, just click here: Code
