Stefano Magni

The concept of "Monitoring Tests"

Small E2E tests that check little (but crucial) technical details.

I'm working on a big UI Testing Best Practices project on GitHub; I'm sharing this post to spread the word about it and to get direct feedback.


A few months ago I worked on the business.conio.com site, which is based on Gatsby. Apart from sharing some plugins I wrote, I wanted to improve the website's performance as much as we could. Luckily, Gatsby stands out on its own when it comes to performance but, you know, it's never enough.

I pushed Conio's DevOps engineer (hi Alessandro 👋) to leverage the AWS S3 capabilities as much as he could to serve Brotli-compressed and forever-cached static assets to the website users. The result was a super-performant product, but the road was not smooth: due to the bucket configuration, the Brotli compression sometimes broke.

The error was subtle because the site relies on a bunch of JavaScript features. The website seemed to work, but if the compression was not set correctly, the contact form would not work. The only symptom of the error was a message printed in the console.

This kind of error is easy to identify with an E2E test that checks whether the form works or not (from the user's perspective, obviously), but I could not rely on a test like that because the test suite was quite slow. After all, every E2E test is slow.
What's more, the DevOps engineer needed frequent feedback: he changed the S3 configuration many times, and receiving feedback in seconds instead of minutes would have been nice.

Since the check was pretty simple and I hate testing things manually, I wrote a small test to keep the Brotli compression in check. I used Cypress and wrote a test like the following one:

// extract the main JS file from the source code of the page. I removed the regex matching part
const getMainJsUrl = pageSource => "/app-<example-hash>.js"

context("The Brotli-compressed assets should be served with the correct content encoding", () => {
  const test = url => {
    cy.request(url)
      .its("body")
      .then(getMainJsUrl) // retrieves the app.js URL from the page source
      .then(appUrl =>
        cy.request({ url: url + appUrl, headers: { "Accept-Encoding": "br" } })
          .its("headers.content-encoding")
          .should("equal", "br")
      )
  }

  it("staging", () => test(urls.staging))
  it("production", () => test(urls.production))
})
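
For reference, here is a minimal sketch of the two pieces not shown above: the urls object used by every test and a regex-based getMainJsUrl. Both are assumptions of mine (the hosts are placeholders and the exact regex depends on the markup Gatsby generates), not the original implementations.

// the urls object used by the tests: a map of the environments' base URLs
// (placeholder hosts, adapt them to your own environments)
const urls = {
  staging: "https://staging.example.com",
  production: "https://www.example.com",
}

// a possible regex-based getMainJsUrl: it looks for the hashed app bundle
// referenced by the page source (adapt the pattern to your build output)
const getMainJsUrl = pageSource => {
  const match = pageSource.match(/src="(\/app-[^"]+\.js)"/)

  if (!match) {
    throw new Error("Main app JS file not found in the page source")
  }

  return match[1]
}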

Once it was written, I could provide a dedicated script that launches only this test (excluding all the standard E2E tests). Et voilà: I could keep the Brotli compression monitored with a super-fast test!

What about cache management? We faced some troubles with it too, so I added some dedicated tests. The idea is that the HTML must always be revalidated (so that a new deploy is picked up immediately) while the hashed static assets can be cached forever:

const shouldNotBeCached = (xhr) => cy.wrap(xhr)
  .its("headers.cache-control")
  .should("equal", "public,max-age=0,must-revalidate")

const shouldBeCached = (xhr) => cy.wrap(xhr)
  .its("headers.cache-control")
  .should("equal", "public,max-age=31536000,immutable")

context('Site monitoring', () => {
  context('The HTML should not be cached', () => {
    const test = url =>
      cy.request(url)
        .then(shouldNotBeCached)

    it("staging", () => test(urls.staging))
    it("production", () => test(urls.production))
  })

  context('The static assets should be cached', () => {
    const test = url =>
      cy.request(url)
        .its("body")
        .then(getMainJsUrl)
        .then(appUrl => url+appUrl)
        .then(cy.request)
        .then(shouldBeCached)

    it('staging', () => test(urls.staging))
    it('production', () => test(urls.production))
  })
})
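
As a side note (a sketch of mine, not part of the original tests): the two cache-control helpers could also be registered as custom Cypress child commands in cypress/support/commands.js, so every monitoring spec can chain them directly off cy.request.

// cypress/support/commands.js
// child commands receive the previous subject, here the response yielded by cy.request
Cypress.Commands.add("shouldNotBeCached", { prevSubject: true }, response =>
  cy.wrap(response)
    .its("headers.cache-control")
    .should("equal", "public,max-age=0,must-revalidate")
)

Cypress.Commands.add("shouldBeCached", { prevSubject: true }, response =>
  cy.wrap(response)
    .its("headers.cache-control")
    .should("equal", "public,max-age=31536000,immutable")
)

// usage in a spec
// cy.request(url).shouldNotBeCached()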

I love these little tests because, in a few seconds, they keep something crucial for the user experience in check. I can sleep well: we are protected from these problems for good.

What else could Monitoring Tests check?

It's way too easy to make a big mess of a Gatsby configuration (with a lot of conditions and customizations for the different environments). The first, crucial things to keep monitored are the easiest ones: the robots.txt and sitemap.xml files.

The robots.txt file must disallow crawling of the staging site and allow crawling of the production one:

context('The robots.txt file should disallow the crawling of the staging site and allow the production one', () => {
  const test = (url, content) =>
    cy.request(`${url}/robots.txt`)
      .its("body")
      .should("contain", content)

  it('staging', () => test(urls.staging, "Disallow: /"))
  it('production', () => test(urls.production, "Allow: /"))
})

while the sitemap.xml file, like the HTML pages, must not be cached:

context('The sitemap.xml should not be cached', () => {
  const test = url =>
    cy.request(`${url}/sitemap.xml`)
      .then(shouldNotBeCached)

  it('staging', () => test(urls.staging))
  it('production', () => test(urls.production))
})

I wrote one more Monitoring Test because of an error introduced by a broken build process: sometimes all the pages, except for the home page, contained the same content as the 404 page. The test is the following:

context('An internal page should not contain the same content of the 404 page', () => {
  const pageNotFoundContent = "Page not found"
  const test = url => {
    // note: cy.request fails on non-2xx responses by default; if the server answers
    // the missing page with a 404 status code, add failOnStatusCode: false to the request options
    cy.request(`${url}/not-found-page`)
      .its("body")
      .should("contain", pageNotFoundContent)
    cy.request(`${url}/about`)
      .its("body")
      .should("not.contain", pageNotFoundContent)
  }

  it('staging', () => test(urls.staging))
  it('production', () => test(urls.production))
})

Running them

I wrote the tests with Cypress, and running only these tests is super easy if you name the file xxx.monitoring.test.js:

cypress run --spec "cypress/integration/**/*.monitoring.*"

Why keep them separated from the standard E2E tests?

Well, because:

  • monitoring tests are not written from the user's perspective, E2E tests are. But with an E2E test I would get a failing “the contact form should work” test while, with a monitoring test, I get a failing “the Brotli compression should work” test (far more useful when debugging). I always prefer user-oriented tests but, when something fails frequently, I want to keep it checked
  • monitoring tests are inexpensive, E2E tests are not: E2E tests are super slow, they can congest your pipeline queue and, depending on how you implement them, they can skew your analytics. That's why I usually do not run them against the production environment but only against the staging one. Monitoring tests, instead, run against both environments without drawbacks (one possible way to wire up the separation is sketched below)
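
A minimal sketch of that separation, assuming a pre-Cypress-10 project layout (which the cypress/integration path above suggests): the plugins file can switch the spec patterns based on an environment variable. Cypress 10+ exposes the same idea through specPattern and excludeSpecPattern.

// cypress/plugins/index.js
module.exports = (on, config) => {
  if (config.env.MONITORING) {
    // `cypress run --env MONITORING=true` runs only the monitoring specs
    config.testFiles = "**/*.monitoring.*"
  } else {
    // the standard E2E runs skip them
    config.ignoreTestFiles = "**/*.monitoring.*"
  }

  return config
}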

You can find all the tests in this gist of mine.
