hyper

Posted on • Originally published at hyper63.Medium
Intro to Integration Testing

Testing is a large subject, and there are many opinions on how to do it well! Time for me to add mine: like many, I believe integration testing is important, especially for Web APIs. By testing your API from the outside, making a web request and either persisting to the database or mocking the persistence layer, you can create high-quality, reliable tests. I refer to these tests as integration tests. They exercise your server-side application's business rules by starting at the API endpoint, navigating through your logic to external services, and coming back to the response.

To showcase this testing experience, let's walk through an example implementation: the endpoint that handles posting a movie review for a movie review application.

Setup

mkdir movie-review-api
cd movie-review-api
yarn init -y
yarn add express node-fetch zod@next
yarn add -D tape fetch-mock @twilson63/test-server dotenv

Or you can clone this repo — https://github.com/hyper63/integration-testing, which gets you started with the basic dependencies for this tutorial.

Testing Library

In this tutorial, we will be using the tape test library; it is small, robust, and just works. tape provides a single function, test, that takes a string and a unary function as arguments. That function receives an assertion helper object with basic assertion methods like ok, equal, and deepEqual, plus an end function. With any test, you go through the following steps:

  • setup
  • execution
  • assertion
  • teardown

With tape you can nest test functions if you like, or keep the steps self-contained. I tend to keep everything self-contained so each test is independent and isolated.

Testing Helper Libraries

You will notice we are using some helper libraries that support our integration test workflow.

test-server allows us to spin up an express server and make an HTTP request to it on each test cycle.

import { default as test } from 'tape'
import testServer from '@twilson63/test-server'
import app from '../server'
import fetch from 'node-fetch'

test(async t => {
  // start server
  const server = testServer(app)
  // run test
  const result = await fetch(server.url).then(r => r.json())
  // do assertions
  t.ok(result.ok)
  // close server and end test
  server.close(() => t.end())
})

fetch-mock allows us to replace the fetch function with mock response handlers, so we don't have to stand up a real service in our test and staging environments.

import fetchMock from 'fetch-mock'

// create a sandboxed fetch instance and register the mock route on it
globalThis.fetch = fetchMock
  .sandbox()
  .get('https://play.hyper63.com/data/movie-reviews/1', {
    status: 200,
    body: { id: '1', title: 'Ghostbusters Review' }
  })

Build our Integration Test

Let's write our tests before we write any code. This is called TDD, or test-driven development, and it is a practice worth exploring: it can give you insight into how your code will be used by other developers, and it may help you refine design decisions based on the way you test the experience.

  • Happy Path Testing

The happy path is the test that validates that the implementation takes the correct input and returns the correct output. You may have several happy path tests if your implementation logic contains a lot of branching logic. The point of the happy path test is to validate that your implementation does what you set out to do.

Input

POST /api/movie-reviews HTTP/1.1
Content-Type: application/json

{
  "title": "My Title",
  "body": "My review content",
  "rating": 4 
}

Output

HTTP/1.1 201 Created
Content-Type: application/json

{
  "ok": true,
  "id": "1"
}
  • Sad Path Testing

Sad path testing validates that your implementation properly handles bad input, or a service that responds with a non-successful response. These tests could be endless, so be pragmatic and capture the most common occurrences. A good practice: when a bug is reported against this implementation, create a sad path test that reproduces it, then fix the bug. This improves coverage and builds confidence that you are not introducing regressions over time. It also helps keep your technical debt to a minimum, because as you refactor you will have more and more edge cases accounted for.

Happy Path Test

Create a new file: api/movie-reviews/index-test.js

import { default as test } from 'tape'
import postReviews from './index.js'
import testServer from '@twilson63/test-server'
import express from 'express'
import fetch from 'node-fetch'

globalThis.fetch = fetch

const app = express()

test('create movie review', async t => {
  app.post('/api/movie-reviews', express.json(), postReviews)
  const server = testServer(app)
  const result = await fetch(server.url + '/api/movie-reviews', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ title: 'My First Review', body: '...', rating: 3 })
  }).then(res => res.json())

  t.ok(result.ok)

  server.close(() => t.end())
})

Write our implementation code

Let's test-drive our way to success.

Open api/movie-reviews/index.js in your code editor:

export default function (req, res) {

  res.json({})
}

add implementation details:

  • validate data
import { z } from 'zod'

const Review = z.object({
  id: z.string().optional(),
  title: z.string(),
  body: z.string(),
  rating: z.number().max(5)
})

export default async function (req, res) {
  const { success, data, error } = Review.safeParse(req.body)
  if (!success) { return res.status(500).json(error.issues) }
  ...
}
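To see what the handler destructures, here is a hand-rolled stand-in for zod's safeParse; safeParseReview is hypothetical, invented only to illustrate the `{ success, data, error }` result shape:

```javascript
// Hypothetical stand-in for Review.safeParse, returning zod's result shape
function safeParseReview(input) {
  const issues = []
  if (typeof input.title !== 'string') issues.push({ path: ['title'], message: 'Required' })
  if (typeof input.body !== 'string') issues.push({ path: ['body'], message: 'Required' })
  if (typeof input.rating !== 'number' || input.rating > 5) {
    issues.push({ path: ['rating'], message: 'Expected a number no greater than 5' })
  }
  return issues.length > 0
    ? { success: false, error: { issues } }
    : { success: true, data: input }
}

console.log(safeParseReview({ title: 'My Title', body: '...', rating: 4 }).success) // true
console.log(safeParseReview({ tiitle: 'Foobar', body: '...' }).success) // false
```

The second call fails because the misspelled tiitle key leaves title missing, which is exactly the bad document our sad path test sends.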
  • submit to service
// `url` is the base URL of the data service (e.g. read from process.env)
const result = await fetch(url + '/data/movie-reviews', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(data)
  }).then(res => {
    if (res.status !== 201) {
      return ({ ok: false, status: res.status, msg: 'error with service' })
    }
    return res.json()
  }).catch(err => ({ ok: false, status: 500, msg: err.message }))
  • mock data
globalThis.fetch = fetchMock
  .sandbox()
  .post('https://play.hyper63.com/data/movie-reviews', {
    status: 201,
    body: { ok: true }
  })
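Assembled, the fragments above might look like the sketch below. To keep it standalone, express, zod, and fetch are replaced with hand-written stubs; the makePostReview factory, the injected names, and the fakeRes helper are all invented for this sketch:

```javascript
// Factory that injects the validator and fetch so the handler can be
// exercised without express, zod, or a network
function makePostReview({ validate, fetchImpl, url }) {
  return async function postReview(req, res) {
    const { success, data, error } = validate(req.body)
    if (!success) return res.status(500).json(error.issues)

    const result = await fetchImpl(url + '/data/movie-reviews', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(data)
    }).then(r => r.status !== 201
      ? { ok: false, status: r.status, msg: 'error with service' }
      : r.json()
    ).catch(err => ({ ok: false, status: 500, msg: err.message }))

    return res.json(result)
  }
}

// Minimal response stub capturing what the handler writes
const fakeRes = () => {
  const r = { code: 200, body: null }
  r.status = c => { r.code = c; return r }
  r.json = b => { r.body = b; return r }
  return r
}

const postReview = makePostReview({
  validate: body => ({ success: true, data: body }),
  fetchImpl: async () => ({ status: 201, json: async () => ({ ok: true, id: '1' }) }),
  url: 'https://play.hyper63.com'
})

const res = fakeRes()
await postReview({ body: { title: 'My First Review', body: '...', rating: 3 } }, res)
console.log(res.body) // { ok: true, id: '1' }
```

Injecting fetch like this is the same idea fetch-mock gives us at the global level: the handler's logic is tested without ever touching the real service.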

Test Sad Path

test('create movie review with bad doc', async t => {
  const app = express()
  app.post('/api/movie-reviews', express.json(), postReviews)
  const server = testServer(app)

  const result = await fetch(server.url + '/api/movie-reviews', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ tiitle: 'Foobar', body: '...' })
  }).then(res => res.json())

  server.close(() => t.end())

  t.notOk(result.ok)
})

Exercises

  • Create Sad Path requesting a GET instead of POST
  • Create Sad Path with invalid ‘Content-Type’
  • Create Sad Path with no data
  • Create Sad Path where service is not available
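As a head start on the last exercise, one way to simulate an unavailable service is to inject a fetch that rejects; the submitReview wrapper and failingFetch stub below are invented for this sketch, and in the real test you could have fetch-mock throw instead:

```javascript
// A fetch stub that rejects, simulating a network-level failure
const failingFetch = () => Promise.reject(new Error('service unavailable'))

// The submit-to-service step from the handler, with fetch injected
async function submitReview(data, fetchImpl) {
  return fetchImpl('https://play.hyper63.com/data/movie-reviews', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(data)
  }).then(res => res.status !== 201
    ? { ok: false, status: res.status, msg: 'error with service' }
    : res.json()
  ).catch(err => ({ ok: false, status: 500, msg: err.message }))
}

const result = await submitReview({ title: 'My Title', body: '...', rating: 4 }, failingFetch)
console.log(result) // { ok: false, status: 500, msg: 'service unavailable' }
```

The catch branch turns the rejection into a response the caller can assert on, which is exactly what the sad path test should verify.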

Create a GitHub Action

GitHub Actions are a great way to always test your code.

mkdir -p .github/workflows
touch .github/workflows/test.yml

open .github/workflows/test.yml

name: integration test
on: 
  push
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [14.x]
    steps:
      - uses: actions/checkout@v2
      - name: Use NodeJS ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - run: yarn
      - run: yarn test
        env: 
          CI: true

Summary

In this tutorial, we walked through the process of building a test for an API endpoint. By creating integration tests for our API endpoints, we have made our code more reliable and have given our future selves and other team members the ability to refactor the implementation over time by leaning on our tests. Tests that describe and validate the intent of the functionality have a much longer lifetime than tests that validate specific implementations.
