Dailos Rafael Díaz Lara
Mocking our development and testing infrastructures with Docker

🇪🇸 Spanish version

🎯 Context

When we are creating a new application or feature, we usually need to send requests to independent resources such as databases or mocked services, but it's obvious that running these kinds of actions against deployed servers has a cost.

It's in these kinds of situations that the isolation provided by Docker containers becomes really useful.

In this post we are going to see how to use Docker to spin up the minimum infrastructure that allows us to run development and/or testing tasks... locally.

The main goal of this text is to show how to use a single docker-compose.yml file for both environments, using different .env files to customize the specific container for each one, development and testing.

In addition, we will focus on how to start up a new container for testing purposes, execute the tests and then shut the container down.

💻 System configuration

Since we are going to be talking about Docker, obviously we need to have it already installed on our system. If you don't have it yet, you can follow the official documentation instructions for your specific operating system.

Another element that we are going to need is docker-compose. Once again, if you haven't installed it yet, you can follow the official documentation.

Finally, since this example is aimed at application development based on JavaScript/TypeScript, we need to have NodeJS installed (official documentation).

🏗 Project initialization

🔥 If you have already started your NodeJS project, you can skip this section 🔥

We are going to initialize our NodeJS project by opening a CLI in the folder where we want to work and typing the following command:

npm init -y

This action will create a single package.json file in the root of our project, with the following content:
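
This is the standard npm default; the name field is derived from the working folder, so your values may differ slightly:

{
  "name": "my-project",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}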

Now we can install Jest by running the following command in our CLI, in order to include this library in the project:

npm i -D jest

The next step is to create the most basic folder structure for the project.

/
|-- /docker # <= New subfolder.
|-- /node_modules
|-- /src # <= New subfolder.
|-- package-lock.json
|-- package.json

🐳 Setting up the Docker configuration

We are going to have two main environments (development and test), and the main idea is to manage both environments' containers with a single docker-compose.yml file.

📄 docker-compose.yml file definition

In order to reach that goal, inside the /docker folder we are going to create our single docker-compose.yml file, which will contain the following code:
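
A minimal sketch of what that file could look like, assuming a single MongoDB service; the service name, image tag, port and volume names are illustrative, not taken from the original post:

version: '3.6'

services:
  api_database:
    image: mongo:4.4
    container_name: posts-api-dev-database                       # <= coupling smell
    ports:
      - '32023:27017'                                             # <= coupling smell
    volumes:
      - api_database_volume:/data/db
      - ./dev/configureDatabase:/docker-entrypoint-initdb.d:rw   # <= coupling smell

volumes:
  api_database_volume:
    name: posts-api-dev-volume                                    # <= coupling smell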

As we can see, there are several lines marked as coupling smell. It means that, with the current configuration, we can only run a single Docker container, aimed mainly at development tasks, so we are strongly coupled to this environment.

Wouldn't it be nice if we could replace those hardcoded configurations with references, and have those references defined in some kind of configuration file?

.env files for Docker containers

Yes!!! We can use .env files in the same way we do for our applications, but for configuring Docker containers.

First of all, we need to edit the docker-compose.yml file we just created, using curly-brace templates to define the constant names that will be replaced with the values defined in our .env files. This way, the docker-compose.yml content will be defined as follows:
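
Continuing the previous sketch (service name, image tag and internal port remain illustrative assumptions):

version: '3.6'

services:
  api_database:
    image: mongo:4.4
    container_name: ${CONTAINER_NAME}
    ports:
      - '${EXTERNAL_PORT}:27017'
    volumes:
      - api_database_volume:/data/db
      - ${CONFIGURATION_PATH}:/docker-entrypoint-initdb.d:rw

volumes:
  api_database_volume:
    name: ${VOLUME_NAME}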

As we can see, we have replaced the hardcoded values with ${CONSTANT_NAME} references. The name typed between curly braces will be the name of the value defined in our .env files. This way, when we run the docker-compose command, using some special CLI options that we will see later, the .env file content is substituted into our docker-compose.yml file before the Docker container is created.

Now it's time to define our environments, so we edit the /docker folder content this way:

/
|-- /docker
|   |-- /dev
|   |   |-- .docker.dev.env
|   |-- /test
|   |   |-- .docker.test.env
|   |-- docker-compose.yml
|-- /node_modules
|-- /src
|-- package-lock.json
|-- package.json

For each environment, we have created its own subfolder: dev and test.

Inside each environment subfolder we have created a specific .env file: .docker.dev.env and .docker.test.env.

🙋❓ Would it be possible to just name the environment files .env?

Yes, it would, and there wouldn't be any issue with it, but... such a descriptive file name is a friendly help for us as developers. Since the same project is very likely to contain multiple configuration files, it's useful to be able to differentiate between them when we have several open at the same time in the code editor. That is the reason why the .env files have such descriptive names.

Now it's time to define the content of our environment files this way:
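
A possible .docker.dev.env; the project, container and volume names and the port are illustrative values:

COMPOSE_PROJECT_NAME="posts_api_dev"
CONTAINER_NAME="posts-api-dev-database"
EXTERNAL_PORT=32023
VOLUME_NAME="posts-api-dev-volume"
CONFIGURATION_PATH="./dev/configureDatabase"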

and...
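
A matching .docker.test.env could look like this (again illustrative values; note the different project name, container name, port and volume):

COMPOSE_PROJECT_NAME="posts_api_test"
CONTAINER_NAME="posts-api-test-database"
EXTERNAL_PORT=32024
VOLUME_NAME="posts-api-test-volume"
CONFIGURATION_PATH="./test/configureDatabase"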

There are four properties you must pay attention to in order to differentiate the two files:

  • CONTAINER_NAME
  • EXTERNAL_PORT
  • VOLUME_NAME
  • CONFIGURATION_PATH

The CONTAINER_NAME property defines the name that we will see after the container is created, when we run the command docker ps -a to list all the containers in our system.

EXTERNAL_PORT is a really sensitive property because it defines the port published by the container, through which our application will connect to it. It's really important to be careful with this parameter because sometimes we will want to run the testing suite while the application is up in development mode, and if we define the same port for both containers, the system will throw an error because the selected port is already in use.

The VOLUME_NAME property will define the data storage name in our system.

Finally, in case we have defined any kind of data to prepopulate the database before using it, the CONFIGURATION_PATH property allows us to define where that dataset is located.

🙋‍♀️❓ Hey but, what about the COMPOSE_PROJECT_NAME property?

That's a great question.

Our main goal is to create a specific container per environment, based on the same docker-compose.yml file.

Right now, if we run our docker-compose file for development, for instance, we will create the container with that environment definition and the docker-compose.yml file will be bound to that container.

This way, if we try to run the same file but with the testing configuration, the final result will be an update of the previous development container, without the defined testing configuration. Why? Because the compose file is bound to the first started container.

In order to reach our goal successfully, we use the COMPOSE_PROJECT_NAME property in every .env file and we set a different value depending on the environment.

This way, every time we run the compose file, since the project name is different for each .env file, the modifications will only affect the containers bound to that project name.

🙋❓ That's fine, but we are using COMPOSE_PROJECT_NAME only in our .env files and not in the docker-compose.yml one. How can it affect the final result?

It's possible because that property is read directly by the docker-compose command and doesn't need to be included in the docker-compose.yml file.

In this link you have the full official documentation about COMPOSE_PROJECT_NAME.

🤹‍♂️ Populating the database

🔥 Caveat: The process explained next is aimed at populating a MongoDB database. If you want to use a different engine, you will have to adapt the process and the docker-compose.yml configuration to it. 🔥

The most basic concept we must know, if we don't already, is that when a MongoDB-based container starts for the first time, all the files with the extension .sh or .js located in the container folder /docker-entrypoint-initdb.d are executed.

This situation provides us a way to initialize our database.

If you want to dig deeper into it, you can find all the information in this link of the MongoDB Docker image documentation.

🧪 Testing environment configuration

In order to see how we can do that, we are going to start with the testing environment, so first of all, we have to create the following file structure inside the /docker/test folder of our project:

/
|-- /docker
|   |-- /dev
|   |   |-- .docker.dev.env
|   |-- /test
|   |   |-- /configureDatabase # <= New subfolder and file.
|   |   |   |-- initDatabase.js
|   |   |-- .docker.test.env
|   |-- docker-compose.yml
|-- /node_modules
|-- /src
|-- package-lock.json
|-- package.json

The content of the initDatabase.js file will be the following:
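
A minimal sketch of such a script, assuming a hypothetical posts-api-test database and user (names and credentials are illustrative); the mongo shell executes it on the first container start:

const apiDatabases = [
  {
    dbName: 'posts-api-test',
    dbUsers: [
      {
        username: 'test-api-user',
        password: 'test-api-password',
        roles: [{ role: 'readWrite', db: 'posts-api-test' }]
      }
    ]
  }
]

// Creates every defined user in the given database, with its roles.
function createDatabaseUsers (db, dbName, dbUsers) {
  dbUsers.forEach(function (dbUser) {
    print('[TRACE] Creating user "' + dbUser.username + '" in the "' + dbName + '" database...')
    db.createUser({ user: dbUser.username, pwd: dbUser.password, roles: dbUser.roles })
  })
}

try {
  apiDatabases.forEach(function (database) {
    // Switch to the target database before creating its users.
    db = db.getSiblingDB(database.dbName)
    createDatabaseUsers(db, database.dbName, database.dbUsers)
  })
} catch (error) {
  print('[ERROR] ' + error)
}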

This script is divided into three different parts.

The apiDatabases constant contains all the database definitions that we want to create for this container.

Every database definition contains its name (dbName), an array of users (dbUsers) who will be allowed to operate with the database (including their access privilege definitions), and the dataset with which we will populate the database.

The createDatabaseUsers function handles the information contained in every apiDatabases block, processes the user data and creates the users in the specified database.

Finally, the try/catch block contains the magic, because in this block we iterate over the apiDatabases constant, switch between databases and process the information.

Once we have checked this code, if we remember our docker-compose.yml file content, in the volumes section we defined the following line:

- ${CONFIGURATION_PATH}:/docker-entrypoint-initdb.d:rw

In addition, for the testing environment, in the .docker.test.env file we set this configuration:

CONFIGURATION_PATH="./test/configureDatabase"

With this action, docker-compose mounts the content of the path defined by CONFIGURATION_PATH into the container's /docker-entrypoint-initdb.d folder before the container runs for the first time, so our database configuration script is executed on container startup.

🙋‍♀️❓ For this configuration you are not setting any initial data. Why?

Because it will be the testing database, so the intention is to persist and remove data ad hoc based on the tests that are running at a specific moment. For that reason, it makes no sense to populate this database with mocked information when we are going to create/edit/delete it dynamically.

🛠 Development environment configuration

This configuration is pretty similar to the testing one.

First of all, we have to modify the /docker/dev subfolder content in our project, in order to get this result:

/
|-- /docker
|   |-- /dev
|   |   |-- /configureDatabase # <= New subfolder and files.
|   |   |   |-- initDatabase.js
|   |   |   |-- postsDataToBePersisted.js
|   |   |   |-- usersDataToBePersisted.js
|   |   |-- .docker.dev.env
|   |-- /test
|   |   |-- /configureDatabase
|   |   |   |-- initDatabase.js
|   |   |-- .docker.test.env
|   |-- docker-compose.yml
|-- /node_modules
|-- /src
|-- package-lock.json
|-- package.json

The postsDataToBePersisted.js and usersDataToBePersisted.js files only contain static data defined in independent constants. That information will be stored in the defined database, in the specified collections.

The structure for the content included in these files is like this:
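
For instance, usersDataToBePersisted.js could expose a single constant with the documents to insert (all field names and values are hypothetical):

const usersToBePersisted = [
  {
    username: 'user1@mail.com',
    password: '123456',
    name: 'Rose',
    surname: 'Smith',
    enabled: true
  },
  {
    username: 'user2@mail.com',
    password: '123456',
    name: 'John',
    surname: 'Doe',
    enabled: true
  }
]

postsDataToBePersisted.js would follow the same pattern, exposing a postsToBePersisted constant.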

On the other hand, the content of the initDatabase.js file is pretty similar to the testing environment definition, but a little more complex because we have to manage collections and data. So the final result is this one:
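
A sketch under the same hypothetical names used so far (database, user, collections and fields are illustrative):

// "Import" the mocked data; these paths refer to the container file system.
load('/docker-entrypoint-initdb.d/usersDataToBePersisted.js')
load('/docker-entrypoint-initdb.d/postsDataToBePersisted.js')

const apiDatabases = [
  {
    dbName: 'posts-api-dev',
    dbUsers: [
      {
        username: 'dev-api-user',
        password: 'dev-api-password',
        roles: [{ role: 'readWrite', db: 'posts-api-dev' }]
      }
    ],
    dbData: [
      { collection: 'users', data: usersToBePersisted },
      { collection: 'posts', data: postsToBePersisted }
    ]
  }
]

// Object lookup with the insertion action for every known collection.
const collections = {
  users: function (db, data) { db.users.insertMany(data) },
  posts: function (db, data) { db.posts.insertMany(data) }
}

// Same as in the testing environment script.
function createDatabaseUsers (db, dbName, dbUsers) {
  dbUsers.forEach(function (dbUser) {
    print('[TRACE] Creating user "' + dbUser.username + '" in the "' + dbName + '" database...')
    db.createUser({ user: dbUser.username, pwd: dbUser.password, roles: dbUser.roles })
  })
}

// Goes through the collections defined in dbData and inserts their datasets.
function populateDatabase (db, dbData) {
  dbData.forEach(function (collectionData) {
    print('[TRACE] Populating the "' + collectionData.collection + '" collection...')
    collections[collectionData.collection](db, collectionData.data)
  })
}

try {
  apiDatabases.forEach(function (database) {
    // Switch to the target database, create its users and insert its data.
    db = db.getSiblingDB(database.dbName)
    createDatabaseUsers(db, database.dbName, database.dbUsers)
    populateDatabase(db, database.dbData)
  })
} catch (error) {
  print('[ERROR] ' + error)
}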

In this script there are several parts that we need to analyze.

The header block is composed of two load() function calls, which are used to import the mocked data constant declarations that we made in the other JavaScript files.

🔥 Pay attention: the full data location path refers to the inner Docker container file structure, not to our system. 🔥

ℹ️ If you want to learn more about how MongoDB executes JavaScript files in its console, take a look at the official documentation.

After "importing" the usersToBePersisted and postsToBePersisted constants definitions via load() function, they are globally available into the context of our initialization script.

The next block to be analyzed is the apiDatabases constant definition where, besides the dbName and dbUsers fields that we covered in the testing configuration, in this case the dbData array is a little bit more complex.

Every object declared in the dbData array defines the collection name as well as the dataset that must be persisted in that collection.

Now we find the collections constant definition. It's a set of mapped functions (or object lookup) which contains the actions to execute for every collection defined in the apiDatabases.dbData block.

As we can see, in these functions we are directly invoking native MongoDB instructions.

The next function is createDatabaseUsers, which has no differences from the one defined for the testing environment.

Just before ending the script file we can find the populateDatabase function.

In this function we go through the database collections, inserting the assigned data, and this is where we invoke the collections mapped-functions object.

Finally, we have the try/catch block where we run the same actions that we did for the testing environment, but we have included the populateDatabase function call.

This is how we can configure the initialization script for our development environment database.

🧩 Docker Compose commands

Once we have defined the compose file as well as the datasets that will initialize our databases, we have to define the commands which will run our containers.
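
Assuming a docker-compose version that supports the --env-file flag (1.25 or later), the commands, run from the project root, could look like this:

# Development infrastructure.
docker-compose -f docker/docker-compose.yml --env-file docker/dev/.docker.dev.env up --detach
docker-compose -f docker/docker-compose.yml --env-file docker/dev/.docker.dev.env down

# Testing infrastructure.
docker-compose -f docker/docker-compose.yml --env-file docker/test/.docker.test.env up --detach
docker-compose -f docker/docker-compose.yml --env-file docker/test/.docker.test.env down

Note that the -f and --env-file paths are relative to where we run the commands, while the bind-mount path set in CONFIGURATION_PATH is resolved relative to the folder that contains the docker-compose.yml file.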

🔥 Pay attention: the paths used are relative to our project root. 🔥

🌟 Setting the final NodeJS commands

The final step is to define the needed scripts into our package.json file.

In order to provide better modularization of the scripts, it's strongly recommended to divide them into atomic ones and then create new ones which group the more specific ones.
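
A possible scripts section, assuming the commands shown above and Jest as the test runner; start:dev is a hypothetical script that starts our application in development mode:

"scripts": {
  "start:dev": "node src/index.js",
  "build:dev": "npm run dev_infra:up && npm run start:dev",
  "dev_infra:up": "docker-compose -f docker/docker-compose.yml --env-file docker/dev/.docker.dev.env up --detach",
  "dev_infra:down": "docker-compose -f docker/docker-compose.yml --env-file docker/dev/.docker.dev.env down",
  "test": "npm run test_infra:up && npm run test:run && npm run test_infra:down",
  "test:run": "jest --verbose",
  "test_infra:up": "docker-compose -f docker/docker-compose.yml --env-file docker/test/.docker.test.env up --detach",
  "test_infra:down": "docker-compose -f docker/docker-compose.yml --env-file docker/test/.docker.test.env down"
}

Note that if a test fails, the && chain in the test script stops and the testing container keeps running, which is exactly the situation covered in the FAQ below.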

For instance, in this code we have defined the dev_infra:up, dev_infra:down, test:run, test_infra:up and test_infra:down scripts, which are atomic because they define a single action to do, and are in charge of starting up and shutting down the containers for each environment, as well as running the testing suite.

In contrast, we have the build:dev and test scripts, which are composite because they include several atomic actions.

🤔 FAQ

What happens if the testing suite suddenly stops because a test fails?

Don't worry about that. It's true that the testing infrastructure will keep running, but we have two options:

  1. Keep it running, so the next time we run the testing suite, the docker-compose command will update the current container.
  2. Run the shutdown script for the testing container manually.

What happens if, instead of a database, we need to run a more complex service such as an API?

We just need to configure the needed containers/services in the docker-compose.yml file, paying special attention to the .env configuration for each environment.

It doesn't matter what we wrap and/or include in our container(s). The important point here is that we are going to be able to start them up and shut them down whenever our project needs it.

👋 Final words

With this configuration, we can add infrastructure management to our NodeJS-based project.

This kind of configuration gives us a level of decoupling that will increase our independence during the development period, because we are going to treat the elements external to our code as a black box we interact with.

Another interesting point of this strategy is that every time we start up the container via docker-compose, it's totally renewed, so we can be sure that our testing suites will run on a completely clean system.

In addition, we will keep our system clean, since we don't need to install any auxiliary applications on it: all of them will be included in the different containers that compose our mocked infrastructure.

Just a caveat: try to keep the content of the containers up to date in order to work with conditions as close to the production environment as possible.

I hope this tip is useful for you. If you have any questions, feel free to contact me. Here are my Twitter, LinkedIn and GitHub profiles.

🙏 Credits and thanks

  • Jonatan Ramos for providing the hint about COMPOSE_PROJECT_NAME to create a single docker-compose.yml file shared between different environments.

Top comments (2)

Schon Brenner • Edited

Using the c4model.com approach, we applied this technique to test our software systems under test. We used Docker and a mock server to stub out and orchestrate container, system and domain dependencies, depending on what kind of integration or isolated testing we wanted to do. We authored Cucumber tests which orchestrated all of this through our build system. This allowed my development teams to pick the layer of testing that best fit the feature they were adding. It also allowed us to start our development with the end in mind by stubbing out expected system dependencies and working our way inward through container implementations and behavioral tests until we reached our code and unit tests. This allowed us to push our exception-based behaviors down to more isolated code/containers under test, keeping the tests which integrate many containers/systems to confirm happy-path behavior to a minimum.

Dailos Rafael Díaz Lara

I didn't know the C4 Model and it looks really interesting. I'll take a look at it. Thanks a lot 😀