Taming the Testing With AWS

As a software developer in test, one of the most challenging tasks I faced was ensuring the seamless integration of multiple microservices in a cloud-based environment. The project involved a complex web application built on a microservices architecture and deployed on AWS, with the microservices communicating through RESTful APIs. Testing these APIs manually was not only time-consuming but also error-prone, leading to significant delays in the release cycle. To tackle this challenge, I had to automate the testing process effectively, combining the power of AWS, API testing tools, and Selenium for end-to-end testing.

Smoke Testing APIs with AWS Canaries: A Smooth Approach

In a recent project, I faced the challenge of ensuring that a set of APIs powering a microservices-based application was always functional and reliable. Given the complexity and the critical role these APIs played, a failure could have far-reaching consequences. To catch potential issues early, I decided to implement smoke testing using AWS Canaries.

The Challenge

The primary challenge was maintaining confidence that the APIs were operational and performing as expected after each deployment. With frequent updates and deployments, relying solely on manual testing or scheduled test suites wasn’t feasible. We needed a way to continuously monitor the APIs, ensuring that any critical issues were detected as soon as they occurred.

The Solution

AWS Canaries, part of Amazon CloudWatch Synthetics, emerged as the ideal solution for this problem. Canaries are scripts that run on a schedule, simulating the actions of a user or service. They are perfect for smoke tests: quick, high-level checks of the APIs' basic functionality.

Here’s how I implemented it:

  • Creating the Canaries:

I started by writing Canary scripts in Python for CloudWatch Synthetics. These scripts made HTTP requests to the API endpoints and verified that the responses were as expected. They were designed to check the most critical API functions, such as retrieving data, performing searches, and handling user authentication.
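
For illustration, here is a minimal sketch of what such a canary script can look like on the Synthetics Python runtime; the endpoint URLs and checks are hypothetical stand-ins, not the project's real APIs.

```python
import urllib.request

# Logger provided by the CloudWatch Synthetics Python runtime.
from aws_synthetics.common import synthetics_logger as logger


def verify_endpoint(url, expected_status=200):
    """Call an endpoint and raise (failing the canary) on an unexpected response."""
    with urllib.request.urlopen(url, timeout=10) as response:
        if response.status != expected_status:
            raise Exception(
                f"{url} returned {response.status}, expected {expected_status}"
            )
        logger.info(f"{url} responded with {response.status}")


def main():
    # Hypothetical endpoints covering the most critical API functions.
    verify_endpoint("https://api.example.com/items")
    verify_endpoint("https://api.example.com/search?q=smoke")


def handler(event, context):
    # Entry point invoked by CloudWatch Synthetics on each scheduled run.
    return main()
```

Any unhandled exception marks the run as failed, which is exactly what the alerting described next keys off of.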

  • Setting Up Alerts:

After deploying the Canaries, I configured CloudWatch Alarms to trigger notifications if any of the Canary tests failed. This setup ensured that the team was immediately alerted to potential issues, allowing for rapid investigation and resolution.
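
Synthetics publishes a SuccessPercent metric for each canary, so the alarm can be expressed against that metric. Here is a sketch using boto3; the canary name and SNS topic ARN are illustrative.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when the canary's success rate drops below 100% in a 5-minute window.
cloudwatch.put_metric_alarm(
    AlarmName="api-smoke-canary-failed",
    Namespace="CloudWatchSynthetics",
    MetricName="SuccessPercent",
    Dimensions=[{"Name": "CanaryName", "Value": "api-smoke-canary"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=100.0,
    ComparisonOperator="LessThanThreshold",
    TreatMissingData="breaching",  # A canary that stops reporting is also a failure.
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:canary-alerts"],
)
```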

  • Scheduling and Continuous Monitoring:

I scheduled the Canaries to run at regular intervals, such as every 5 or 15 minutes, depending on the criticality of the API. This continuous monitoring provided real-time insights into the health of the APIs and allowed us to detect issues before they could impact users.
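
The schedule is just a rate expression on the canary itself, so adjusting it is a one-line change. A sketch with boto3 (canary name illustrative):

```python
import boto3

synthetics = boto3.client("synthetics")

# Tighten the schedule for a business-critical API.
synthetics.update_canary(
    Name="api-smoke-canary",
    Schedule={"Expression": "rate(5 minutes)"},
)
```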

Additional Links

The following DEV post provides a step-by-step tutorial on setting up Canaries for smoke testing:
Smoke Testing using AWS Canaries


Automating Testing with AWS, Selenium, and Java: Overcoming the Reporting Challenge

In one of my projects, I was tasked with testing a complex web application deployed on AWS. The application had multiple components, each interacting with different APIs and databases, making it essential to ensure that every piece functioned correctly before release. Given the scale and complexity, manual testing was quickly becoming unmanageable, and automation was the only way forward. However, the real challenge lay not just in automating the tests but also in generating comprehensive and clear test reports that the development team could easily understand and act upon.

The Challenge

The project was built using microservices, each hosted on AWS, with a mix of RESTful APIs and a dynamic user interface. Our team had chosen Selenium for automating the UI tests and Java for writing the test scripts, but as the application grew, so did the number of test cases. Running these tests manually was no longer feasible, and the lack of clear, detailed reporting made it difficult to track test results, identify issues, and communicate them effectively to the development team.

The existing test reports were basic, often limited to console outputs or text files that were hard to decipher without deep technical knowledge. As a result, developers spent too much time trying to understand test failures, which slowed down the bug-fixing process and hampered the overall efficiency of the project.

The Solution

To overcome these challenges, I decided to revamp our entire test automation framework, focusing not just on running tests but also on generating meaningful, detailed reports.

  • Building the Test Automation Framework:

I started by setting up a robust test automation framework using Selenium WebDriver with Java. TestNG was chosen as the testing framework due to its ability to run tests in parallel, generate HTML reports, and integrate easily with other tools. The framework was designed to be modular, with reusable components that could handle the different parts of the application.
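
As a rough sketch of that modular design (class names are illustrative, assuming Selenium 4 and TestNG 7), each test class extended a common base that owned the WebDriver lifecycle:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;

public class BaseTest {
    // One driver per test method keeps parallel TestNG runs isolated.
    protected WebDriver driver;

    @BeforeMethod
    public void setUp() {
        driver = new ChromeDriver();
        driver.manage().window().maximize();
    }

    @AfterMethod(alwaysRun = true)
    public void tearDown() {
        if (driver != null) {
            driver.quit();
        }
    }
}
```

Parallelism is typically declared in testng.xml (e.g. parallel="methods"), which is why the driver is created per method rather than shared.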

  • Integrating with AWS:

Given that our application was deployed on AWS, I integrated the test automation with AWS services for better scalability and management. We used AWS CodeBuild to run the tests in a CI/CD pipeline, ensuring that tests were automatically triggered on every code commit. This integration allowed us to run tests in parallel across different environments, significantly reducing the time required for test execution.
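
Assuming a Maven project, the CodeBuild side of this can be as small as a buildspec that runs the suite and exports the report; the paths below are illustrative:

```yaml
version: 0.2

phases:
  install:
    runtime-versions:
      java: corretto11
  build:
    commands:
      - mvn clean test   # Runs the TestNG suite configured in the POM

artifacts:
  files:
    - target/extent-report.html   # Illustrative report output path
```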

  • Enhancing Test Reporting and Storing Reports in S3:

The real game-changer was the implementation of an enhanced reporting system. I integrated the ExtentReports library with our TestNG framework. ExtentReports provided detailed HTML reports with visual representations of the test results, including screenshots for failed tests, detailed logs, and an intuitive interface that made it easy for developers to navigate through the results.
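
The integration hangs off TestNG's listener mechanism. A trimmed-down version of such a listener (assuming ExtentReports 5 and TestNG 7, with file paths illustrative) looks like this:

```java
import com.aventstack.extentreports.ExtentReports;
import com.aventstack.extentreports.ExtentTest;
import com.aventstack.extentreports.Status;
import com.aventstack.extentreports.reporter.ExtentSparkReporter;
import org.testng.ITestContext;
import org.testng.ITestListener;
import org.testng.ITestResult;

public class ExtentReportListener implements ITestListener {
    private static final ExtentReports extent = new ExtentReports();
    private static final ThreadLocal<ExtentTest> test = new ThreadLocal<>();

    static {
        extent.attachReporter(new ExtentSparkReporter("target/extent-report.html"));
    }

    @Override
    public void onTestStart(ITestResult result) {
        test.set(extent.createTest(result.getMethod().getMethodName()));
    }

    @Override
    public void onTestSuccess(ITestResult result) {
        test.get().log(Status.PASS, "Test passed");
    }

    @Override
    public void onTestFailure(ITestResult result) {
        // A screenshot of the failure would also be attached here.
        test.get().log(Status.FAIL, result.getThrowable());
    }

    @Override
    public void onFinish(ITestContext context) {
        extent.flush();  // Writes the HTML report to disk.
    }
}
```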

Additionally, I set up Allure Reports, another reporting tool that offered even more detailed insights. Allure Reports were particularly useful for their interactive dashboards and the ability to track test case histories over time, which helped in identifying patterns in recurring issues.

The reports were stored in S3 with unique identifiers (based on timestamp and test suite names) to ensure easy retrieval and organization.

The S3 bucket was configured with appropriate permissions to ensure that only authorized team members could access the reports. Additionally, S3 versioning was enabled to maintain a history of all test reports, which was useful for tracking the evolution of test results over time.
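
The upload itself is a small SDK call. Here is a sketch of the key scheme and upload using the AWS SDK for Java v2; the bucket name is hypothetical:

```java
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

import java.nio.file.Path;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class ReportUploader {
    private static final String BUCKET = "my-team-test-reports";  // Hypothetical bucket

    public static String upload(Path report, String suiteName) {
        // Suite name + timestamp gives each report a unique, sortable key.
        String timestamp = LocalDateTime.now()
                .format(DateTimeFormatter.ofPattern("yyyy-MM-dd_HH-mm-ss"));
        String key = suiteName + "/" + timestamp + "/" + report.getFileName();

        try (S3Client s3 = S3Client.create()) {
            s3.putObject(
                    PutObjectRequest.builder()
                            .bucket(BUCKET)
                            .key(key)
                            .contentType("text/html")
                            .build(),
                    RequestBody.fromFile(report));
        }
        return key;
    }
}
```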

  • Sending Notifications with SNS:

Once the test reports were successfully uploaded to S3, the next step was to notify the team. For this, I used AWS SNS to send email notifications.

I set up an SNS topic and subscribed all relevant team members' email addresses to this topic. Then, using the AWS SDK, I wrote a script that triggered an SNS notification as soon as the reports were uploaded to S3.

The notification included a brief summary of the test results (e.g., the number of tests passed, failed, and skipped) and a direct link to the S3 bucket where the full reports could be accessed. This allowed the team to quickly review the results and take action if needed.
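
A sketch of that notification step, again with the AWS SDK for Java v2 (the topic ARN is hypothetical):

```java
import software.amazon.awssdk.services.sns.SnsClient;
import software.amazon.awssdk.services.sns.model.PublishRequest;

public class TestResultNotifier {
    private static final String TOPIC_ARN =
            "arn:aws:sns:us-east-1:123456789012:test-report-notifications";  // Hypothetical

    public static void notifyTeam(int passed, int failed, int skipped, String reportUrl) {
        String message = String.format(
                "Test run complete.%nPassed: %d, Failed: %d, Skipped: %d%nFull report: %s",
                passed, failed, skipped, reportUrl);

        // Every email address subscribed to the topic receives this summary.
        try (SnsClient sns = SnsClient.create()) {
            sns.publish(PublishRequest.builder()
                    .topicArn(TOPIC_ARN)
                    .subject("Automated Test Results")
                    .message(message)
                    .build());
        }
    }
}
```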

  • Integrating with CI/CD Pipeline:

With the new reporting system in place, I also integrated these reports into our CI/CD pipeline. AWS CodePipeline was configured to trigger the test suite, store the reports in S3, and send out the SNS notifications upon completion. This automation ensured that the entire process was hands-free and reliable, providing continuous feedback to the team. Developers could now quickly see which tests had failed, why they failed, and even view screenshots of the failure, all within minutes of the test run.

The Outcome

Implementing this solution significantly improved our workflow. The test reports were now stored in a centralized location, easily accessible to all team members at any time. The automated email notifications ensured that everyone was immediately aware of the test results, enabling faster response times to issues.
