Alberto Fernandez Medina

Testing an API Against its Documentation

This article covers how to run tests generated from the API docs against an Express Node.js API documented with API Blueprint, using the Dredd testing tool.

Note: This is the 5th post in a series about Building APIs With Express. It builds on my last post, Documenting your API with API Blueprint, and continues developing over the code generated there.

So last time I documented the Another TODO API using API Blueprint, and now I'm going to take advantage of that to run tests against the API and ensure the documentation stays in sync with the actual API code. For this task I'm going to use Dredd.


Dredd Logo

Dredd is a tool for testing APIs using their own documentation.

Installing Dredd Locally

To install Dredd for this use case you need Node.js and npm installed.

Then on the terminal:

npm i -g dredd

Now Dredd can be used as a CLI tool.

Configuring Dredd

The Dredd folks have made setup amazingly easy: the only thing needed to start working with Dredd is to run the following command:

dredd init

? Location of the API description document docs/main.apib
? Command to start API backend server e.g. (bundle exec rails server) npm start
? URL of tested API endpoint
? Programming language of hooks nodejs
? Do you want to use Apiary test inspector? Yes
? Please enter Apiary API key or leave empty for anonymous reporter
? Dredd is best served with Continuous Integration. Create CircleCI config for Dredd? No

Configuration saved to dredd.yml

Run test now, with:

  $ dredd

Some notes about what has been done here: Dredd has created a dredd.yml file at the root of the project with a bunch of properties based on the replies it received.


dry-run: null  
hookfiles: null  
language: nodejs  
sandbox: false  
server: npm start # Command to start the API server  
server-wait: 3  
init: false  
custom:  
  apiaryApiKey: ''
names: false  
only: []  
reporter: apiary  
output: []  
header: []  
sorted: false  
user: null  
inline-errors: false  
details: false  
method: []  
color: true  
level: info  
timestamp: false  
silent: false  
path: []  
hooks-worker-timeout: 5000  
hooks-worker-connect-timeout: 1500  
hooks-worker-connect-retry: 500  
hooks-worker-after-connect-wait: 100  
hooks-worker-term-timeout: 5000  
hooks-worker-term-retry: 500  
hooks-worker-handler-port: 61321  
config: ./dredd.yml # Source of Dredd config file  
blueprint: docs/main.apib # The API Blueprint file to get API definitions  
endpoint: '' # The base URL where the test will run

I've commented the lines I found most important for this step, but all the info can be found in the Dredd Configuration File Documentation.

Running Tests with Dredd

Now that the project has a config file and Dredd knows how to run the server, this is the next command to execute (I think you already know):

dredd

When executing the tests there will appear a report about what Dredd has found:

info: Configuration './dredd.yml' found, ignoring other arguments.  
warn: Apiary API Key or API Project Subdomain were not provided. Configure Dredd to be able to save test reports alongside your Apiary API project:  
info: Starting backend server process with command: npm start  
info: Waiting 3 seconds for backend server process to start


info: Beginning Dredd testing...  
GET /v1/tasks 200 13.427 ms - 1450  
fail: GET (200) /tasks duration: 58ms


info: Displaying failed tests...  
fail: GET (200) /tasks duration: 58ms  
fail: headers: Header 'content-type' has value 'application/json; charset=utf-8' instead of 'application/json'

method: GET  
uri: /tasks  
    User-Agent: Dredd/4.4.0 (Windows_NT 10.0.15063; x64)
    Content-Length: 0


    Content-Type: application/json

    "__v": 0,
    "updatedAt": "2017-01-05T17:53:37.066Z",
    "createdAt": "2017-01-05T17:53:37.066Z",
    "_id": "586e88217106b038d820a54e",
    "isDone": false,
    "description": "test"
statusCode: 200

statusCode: 200  
    x-powered-by: Express
    content-type: application/json; charset=utf-8
    content-length: 1450
    etag: W/"5aa-Oh/N4fD/Is1M3QO9MzB/QQaYxDU"
    date: Fri, 01 Sep 2017 15:36:43 GMT
    connection: close

[{"_id":"59a2fe039c2adf0e90acca12","updatedAt":"2017-08-27T17:14:43.564Z","createdAt":"2017-08-27T17:14:43.564Z","__v":0,"isDone":false,"description":"Buy milk"},{"_id":"59a2fe0f852c331148011df3","updatedAt":"2017-0


complete: 0 passing, 6 failing, 0 errors, 0 skipped, 6 total  
complete: Tests took 815ms  
DELETE /v1/tasks/586e88337106b038d820a54f 404 1.128 ms - 539  
complete: See results in Apiary at:  
info: Backend server process exited

Also, at the end, if the Dredd config file has the reporter set to apiary, there will be a link to a report page similar to this one:

Apiary Dredd Test

Note: The provided link is a temporary page and will be removed after a while.

This panel has a lot of info about how the tests went. Another TODO API has some errors in its docs; one of them is the Content-Type definition. Let's fix that and run the tests again.
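Assuming the response sections in main.apib declared a plain application/json Content-Type (the failure at the start of the report says the actual header is `application/json; charset=utf-8`), the fix is to update the response declarations to match what Express really sends:

```apib
+ Response 200 (application/json; charset=utf-8)
```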

After the changes and running dredd this is the new report:

Apiary Dredd Test

This time some of the endpoints have been validated, but not all. The endpoints that require a task ID are returning 404 responses and causing the tests to fail. This is because the task IDs specified in the API docs are only examples and don't really exist in the DB. This is where Dredd hooks come in handy.

Dredd Hooks

Hooks allow executing some code before or after each test case. This time I'm going to use a hook to grab the ID of the task created by the "Create a New Task" transaction, and reuse that created task in the tests that need a taskId to work.


// Import the hooks library to work with it (injected by Dredd)
const hooks = require('hooks')  
// Create some shorthand functions for the hooks
const after = hooks.after  
const before = hooks.before

// Instantiate an object to store responses between transactions
let responseStash = {}

// Because the action is going to be the same in all the hooks, let's create a function
const replaceUrlForCreatedTaskId = function (transaction) {  
  // Get the taskId from the stored response body
  let taskId = JSON.parse(responseStash['Tasks > Tasks Collection > Create a New Task'].body)._id
  // Get the predefined request url
  let url = transaction.fullPath

  // Replace the wrong taskId with the correct one
  transaction.fullPath = url.replace('586e88337106b038d820a54f', taskId)
}

// Set a hook to be executed after creating a task to store the response
after('Tasks > Tasks Collection > Create a New Task', function (transaction) {  
  // Store the response inside the stash, keyed by transaction name
  responseStash[transaction.name] = transaction.real
})

// Set hooks before the requests are made to replace the URLs
before('Tasks > Task > View a Task', replaceUrlForCreatedTaskId)  
before('Tasks > Task > Edit a whole Task', replaceUrlForCreatedTaskId)  
before('Tasks > Task > Edit a Task partially', replaceUrlForCreatedTaskId)  
before('Tasks > Task > Delete a Task', replaceUrlForCreatedTaskId)
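In isolation, the URL rewrite that the before hooks perform boils down to a parse-and-replace. Here is a standalone sketch of that idea, using a hypothetical stored response and task IDs:

```javascript
// Hypothetical response body, as the stash would hold after "Create a New Task"
const storedResponse = {
  body: '{"_id":"59a9a413bfa907076857eae2","description":"test","isDone":false}'
}

// The example ID hard-coded in the API Blueprint docs
const exampleUrl = '/v1/tasks/586e88337106b038d820a54f'

// Extract the real ID from the stored response
const taskId = JSON.parse(storedResponse.body)._id

// Rewrite the URL so it points at a task that actually exists in the DB
const realUrl = exampleUrl.replace('586e88337106b038d820a54f', taskId)

console.log(realUrl) // -> /v1/tasks/59a9a413bfa907076857eae2
```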

After setting the hooks the dredd.yml file needs to be modified.


dry-run: null  
hookfiles: ./docs/hooks.js # Here we tell Dredd where the hook files are  
language: nodejs  
sandbox: false  

Now running the tests again:

info: Displaying failed tests...  
fail: PATCH (200) /tasks/586e88337106b038d820a54f duration: 11ms  
fail: body: Can't validate. Expected body Content-Type is application/json; charset=utf-8 but body is not a parseable JSON: Parse error on line 1:  
+ Attributes (Task)
Expecting 'STRING', 'NUMBER', 'NULL', 'TRUE', 'FALSE', '{', '[', got 'undefined'

It is complaining about line 118 of the main.apib file:

+ Response 200 (application/json; charset=utf-8)

        + Attributes (Task)

A data structure is being used for the response field, but it's indented by 8 spaces, which API Blueprint interprets as a code block. Reducing the indentation to 4 spaces and running the tests again:
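The corrected section, with `+ Attributes (Task)` indented by 4 spaces so API Blueprint parses it as an attributes section rather than a code block:

```apib
+ Response 200 (application/json; charset=utf-8)

    + Attributes (Task)
```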

info: Beginning Dredd testing...  
info: Found Hookfiles: 0=E:\develop\another-todo-api\docs\hooks.js  
GET /v1/tasks 200 14.604 ms - 5636  
pass: GET (200) /tasks duration: 69ms  
POST /v1/tasks 201 26.640 ms - 160  
pass: POST (201) /tasks duration: 48ms  
GET /v1/tasks/59a9a413bfa907076857eae2 200 4.018 ms - 160  
pass: GET (200) /tasks/586e88337106b038d820a54f duration: 110ms  
PUT /v1/tasks/59a9a413bfa907076857eae2 200 7.289 ms - 159  
pass: PUT (200) /tasks/586e88337106b038d820a54f duration: 21ms  
pass: PATCH (200) /tasks/586e88337106b038d820a54f duration: 15ms  
PATCH /v1/tasks/59a9a413bfa907076857eae2 200 2.659 ms - 164  
pass: DELETE (204) /tasks/586e88337106b038d820a54f duration: 30ms  
complete: 6 passing, 0 failing, 0 errors, 0 skipped, 6 total  
complete: Tests took 579ms  
DELETE /v1/tasks/59a9a413bfa907076857eae2 204 3.519 ms - -  
complete: See results in Apiary at:  
info: Backend server process exited

Smooth like butter

Clean Dredd Test Report

NPM Test Script

Until now I've been using Dredd from my global installation, but it's a better idea to include it as a dev dependency and create an npm test script.

npm i -D dredd


  "scripts": {
    "lint": "eslint **/*.js",
    "start": "set DEBUG=another-todo:* && node bin/www",
    "test": "dredd"
  }


Dredd is a good tool for keeping your API docs up to date and practicing DDD (Documentation Driven Development).

Anyway, you can check the generated code on GitHub.

Happy coding <3!

Top comments (9)

Juan Julián Merelo Guervós

You've got to love the name of the tool...

Lasse Schultebraucks

Wow, very cool. I usually read the tests of an API and write tests with standard unit testing libraries if I want to learn about it. Never heard about DDD; it seems like Dredd can be extremely useful in that regard. Thank you!

Alberto Fernandez Medina

Thanks for your comment!

I think that DDD could be a good way to design APIs before they are developed and ensure they work as the design says. That is one of the problems in my current job: backend designs the API docs for new functionality, and all the teams discuss them and request changes and additions, but sometimes the development doesn't fulfil the design, or the design doesn't get updated. This would solve those problems.

Sorry for the long comment.

Honza Javorek

Dredd is more of an RDD tool; it won't completely replace all your unit tests. But otherwise it exists to solve exactly the problem you described. It makes sure the docs users are presented with are always correct, and that the API is always correct according to the design.

Alberto Fernandez Medina

Nice article!

I don't think RDD would be a good practice in this case. As the author says:

By restricting your design documentation to a single file that is intended to be read as an introduction to your software

It may be overwhelming to have a README with the whole API specification. What if the user only wants to read an introduction to the API or some small examples, or maybe the contribution section or other info that README files should include?

It's true that the more documents you have, the harder it is to keep them updated, but in my opinion it's better to have a README file that refers to your API docs and offers a friendly introduction and examples, maybe a FAQ and some project info.

Honza Javorek

The API description document (API Blueprint, Swagger, OpenAPI) usually describes the API in the form of some expected happy scenarios. Usually it doesn't contain examples for all the corner cases. In this sense, it is something like a README for a software project: it helps users understand how to use the API, but it doesn't replace the reference book. For that reason, if you test your API against your API description with Dredd, you get a very, very useful thing - an assurance that what users read in the docs is exactly how the API works. But you don't get all the negative scenarios and corner cases, so you still need to add some unit/integration tests. That's why I think this is more of an RDD tool than DDD, but anyway, this is pretty much bike-shedding :D

I gave a talk on this topic at PyCon SK; maybe I'm clearer in expressing my thoughts there -

I completely agree an actual README should not be equal to an API description.

And by the way, thanks for a great article. We should definitely link it from the Dredd docs! It's better than our own tutorials. It's amazing to see when people find Dredd so useful they even write articles about it.

Alberto Fernandez Medina

Thank you very much!

Haha, I didn't know you were one of the main contributors to Dredd! Thanks a lot for working on tools like this!

I saw part of the presentation; I didn't know tools like doctests existed! This is amazing.

Thanks for your comments and contributions.

Dan Minshew • Edited

It looks like that first link is broken.

Correct link -

I really enjoyed this article!

Alberto Fernandez Medina

Thanks! Fixed it.

Glad you like it.