Fahad Ali Khan

Testing Frameworks in Action: A Deep Dive into Automated Testing for EnglishFormatter

Automated testing is an essential part of any robust software development workflow. It ensures code quality, minimizes regressions, and provides confidence in the software’s behavior. In this blog, I’ll share my journey through Lab 7, which revolved around incorporating testing into my EnglishFormatter project. From selecting tools to mocking responses and uncovering bugs, here’s how it all unfolded.


1. Choosing the Testing Framework

For this project, I chose Google Test (GTest) and Google Mock (GMock) as my testing framework. These tools are the gold standard in the C++ world for writing unit tests and mocking dependencies.

  • Why GTest and GMock?

    • GTest provides a simple and powerful framework for writing tests with descriptive assertions (see the minimal example after this list).
    • GMock enables mocking of external dependencies, which is crucial for testing interactions with APIs or other services.
    • Both are well-documented and actively maintained, with a vibrant community for support.
  • Links:

    • GoogleTest (which also contains GoogleMock): https://github.com/google/googletest
    • Documentation: https://google.github.io/googletest/
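
To give a quick feel for those assertions, here is a minimal, generic test (not taken from EnglishFormatter, just an illustration):

  #include <gtest/gtest.h>
  #include <string>

  // EXPECT_* assertions are non-fatal; ASSERT_* assertions abort the current test on failure.
  TEST(SanityTests, StringComparison) {
      std::string greeting = "hello";
      ASSERT_FALSE(greeting.empty());   // stop here if the string is unexpectedly empty
      EXPECT_EQ(greeting, "hello");     // on failure, GTest prints both expected and actual values
  }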


2. Setting Up the Framework

Setting up GTest and GMock in my project involved the following steps:

Installing GTest and GMock

  • Linux:
  sudo apt-get install libgtest-dev
  cd /usr/src/gtest
  sudo cmake .
  sudo make
  sudo cp *.a /usr/lib
  • Windows: I downloaded the source code from GitHub and built the libraries using CMake.
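
For reference, building the libraries from source with CMake looks roughly like this (exact options depend on your setup):

  git clone https://github.com/google/googletest.git
  cd googletest
  cmake -S . -B build
  cmake --build build --config Release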

Adding Tests

  1. Created a tests.cpp file to house all test cases (a minimal skeleton is shown below).
  2. Linked GTest and GMock during compilation:
   g++ -std=c++17 -I<gtest-include-path> -I<project-include-path> tests.cpp -o tests -lgmock -lgtest -lpthread
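
Since the command links -lgmock and -lgtest rather than -lgtest_main, tests.cpp supplies its own main. A minimal skeleton could look like this:

   #include <gtest/gtest.h>
   #include <gmock/gmock.h>

   // Test cases and mock classes go here.

   int main(int argc, char** argv) {
       ::testing::InitGoogleMock(&argc, argv);  // initializes GoogleMock and GoogleTest
       return RUN_ALL_TESTS();
   }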

Running Tests

After building, I ran the tests:

./tests

The output showed which tests passed or failed, with detailed information for debugging.
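
With a single passing test, that output looks roughly like this:

  [==========] Running 1 test from 1 test suite.
  [ RUN      ] MockApiClientTests.MockedApiResponse
  [       OK ] MockApiClientTests.MockedApiResponse (0 ms)
  [==========] 1 test from 1 test suite ran. (0 ms total)
  [  PASSED  ] 1 test.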


3. Mocking LLM Responses

One of the challenges in this project was testing the interaction with an external large language model (LLM) API. Since relying on a live API during testing is impractical, I mocked the responses using GMock.

Approach to Mocking

  • I created a MockApiClient class that inherited from api_client.
  • Using MOCK_METHOD, I defined mock behaviors for the make_api_call method.
  • Example:
  class MockApiClient : public api_client {
  public:
      MOCK_METHOD(std::string, make_api_call, (const std::string& prompt), (override));
  };

  TEST(MockApiClientTests, MockedApiResponse) {
      MockApiClient mockClient;
      EXPECT_CALL(mockClient, make_api_call)
          .WillOnce(::testing::Return(R"({"choices": [{"message": {"content": "Mock Response"}}]})"));

      std::string response = mockClient.make_api_call("Test Prompt");
      EXPECT_EQ(response, R"({"choices": [{"message": {"content": "Mock Response"}}]})");
  }

This allowed me to simulate different API responses (e.g., success, errors) without making actual network calls.
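
For example, an error case can be mocked the same way. The error payload below is just an assumed shape, not necessarily what the real API returns:

  TEST(MockApiClientTests, MockedApiErrorResponse) {
      MockApiClient mockClient;
      // Simulate the API returning an error payload with no "choices" field.
      EXPECT_CALL(mockClient, make_api_call)
          .WillOnce(::testing::Return(R"({"error": {"message": "Rate limit exceeded"}})"));

      std::string response = mockClient.make_api_call("Test Prompt");
      // Downstream code should detect the absence of "choices" and handle it gracefully.
      EXPECT_EQ(response.find("choices"), std::string::npos);
  }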


4. Learning Through Test Case Writing

Writing tests was an enlightening experience, filled with both challenges and "aha!" moments. Here’s what I learned:

Lessons Learned

  • Testing Forces Better Code:
    Writing tests made me realize areas where my code was tightly coupled or lacked error handling. Refactoring for testability improved the overall design.

  • Mocking Simplifies Complex Testing:
    Mocking external dependencies like the LLM API allowed me to focus on testing my code rather than external services.

Challenges and "Aha!" Moments

  • Initially, setting up Google Mock felt intimidating due to its many features. However, once I understood the syntax, it became straightforward and incredibly powerful.
  • I realized the importance of edge cases—what happens when the API returns an unexpected JSON format? This led to more robust parsing logic.

5. Bugs and Edge Cases

Bugs Uncovered

  • Empty Environment Variables:
    A missing API_KEY or API_URL caused runtime errors. Adding unit tests for these scenarios led to meaningful error messages instead of crashes (see the sketch after this list).

  • Invalid API Responses:
    Parsing logic failed on API responses missing the choices field. Tests caught this, prompting me to add error handling.
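
As a sketch of the environment-variable case (load_api_config is a hypothetical stand-in for however EnglishFormatter reads its configuration, and the error type is an assumption):

  TEST(ConfigTests, MissingApiKeyGivesClearError) {
      unsetenv("API_KEY");  // POSIX call from <cstdlib>; simulate the variable being absent
      // Hypothetical: the real project may use a different function or error type.
      EXPECT_THROW(load_api_config(), std::runtime_error);
  }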

Edge Cases

  • Responses with empty or malformed JSON.
  • Unexpected data types in the JSON response (e.g., choices being a string instead of an array); a test sketch for this case follows.
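
A test for that last case could look something like this (extract_message_content is a hypothetical name for the project's JSON-parsing helper):

  TEST(ResponseParsingTests, ChoicesIsAStringNotAnArray) {
      // "choices" has the wrong type; parsing should fail loudly instead of crashing.
      std::string raw = R"({"choices": "not-an-array"})";
      EXPECT_THROW(extract_message_content(raw), std::runtime_error);
  }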

6. Reflections and Future Plans

This lab reinforced the value of testing in software development. Before this project, I had limited experience with formal testing frameworks. Now, I feel confident in my ability to write meaningful test cases and mock external dependencies.

Future Testing Plans

  • Continuous Integration: I plan to integrate automated testing into a CI/CD pipeline using GitHub Actions.
  • Code Coverage: Analyzing code coverage will ensure all critical paths are tested.
  • More Mocks: Expanding mocks to simulate other parts of the system.

Conclusion

Automated testing is not just about catching bugs—it's a mindset shift that promotes better design and maintainability. Tools like GTest and GMock make testing in C++ approachable and effective. This lab was a fantastic opportunity to sharpen my testing skills, and I’m excited to apply these practices to future projects.

If you're working on a project and haven't tried automated testing yet, I encourage you to start. Your future self (and your users) will thank you!


Your Thoughts?

Have you used Google Test or similar frameworks in your projects? What challenges have you faced in testing? I’d love to hear about your experiences!
