Why We Use Containers In DevOps

At some point we've all said the words, "But it works on my machine." It usually happens during testing or when you're trying to get a new project set up. Sometimes it happens when you pull down changes from an updated branch.

Every machine has different underlying states depending on the operating system, other installed programs, and permissions. Getting a project to run locally could take hours or even days because of weird system issues.

The worst part is that this can also happen in production. If the server is configured differently from your local machine, your changes might not work as you expect and could cause problems for users. Containers give you a way around all of these common issues.

What is a container

A container is a unit of software that packages code and its dependencies so that the application runs consistently in any computing environment. You can drop that unit onto any operating system and reliably run the application without worrying about underlying system issues creeping in later.

Containers have existed in Linux for years, but they've become much more popular recently. Most of the time when people talk about containers, they're referring to Docker containers, which are built from images that include all of the dependencies needed to run an application.

When you think of containers, virtual machines might also come to mind. They're similar, but the big difference is that containers virtualize the operating system instead of the hardware. That's what makes them so lightweight and easy to run consistently across operating systems.
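To make that concrete, the basic image-to-container flow with the Docker CLI looks roughly like this (the image name here is just a placeholder):

# build an image from the Dockerfile in the current directory
docker build -t my-app:1.0 .

# start a container from that image; the same image behaves the same way on any host running Docker
docker run --rm my-app:1.0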

What containers have to do with DevOps

The same odd behavior that shows up when you move code from one computer to another also shows up when code moves between the environments in a DevOps process. You don't want to deal with system differences between staging and production; that's more work than it should be.

Once you have an artifact built, you should be able to use it in any environment, from local to production. That's the reason we use containers in DevOps. They're also invaluable when you're working with microservices: Docker containers paired with an orchestrator like Kubernetes make it much easier to handle larger systems with more moving pieces.
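As a rough sketch of what that looks like in practice (the registry address and tag here are made up for illustration), you build the image once, push it to a registry, and every environment pulls the exact same artifact:

# build once and tag the image for your registry
docker build -t registry.example.com/my-app:1.0 .

# push it so every environment can pull the identical build
docker push registry.example.com/my-app:1.0

# staging and production both pull the same artifact
docker pull registry.example.com/my-app:1.0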

Working with containers

Making a container in Docker starts with a Docker image, which you define in a Dockerfile. As an example, say you want to deploy a React app in a Docker container. You know you need a specific version of Node and you know where all of your files are, so you write those details into the Dockerfile for Docker to use.

A Dockerfile looks a lot like a list of bash commands, but it isn't quite the same thing. Think of it as the step-by-step instructions Docker follows to build your image correctly. Here's an example Dockerfile that builds a React app.

# pull the official base image
FROM node:13.12.0-alpine

# set the working directory
WORKDIR /app

# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH

# install app dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm install --silent
RUN npm install react-scripts@3.4.1 -g --silent

# add app
COPY . ./

# start app
CMD ["npm", "start"]

These are all instructions that you're probably used to running locally or executing in some other environment. Now you're doing the same thing, just inside an environment "bubble". The container will use the version of Node and the directories specified in the image, so you won't have to check and change those things every time you need to run the code.
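If you want to try that Dockerfile yourself, a local build and run would look something like this (the tag is arbitrary, and port 3000 assumes the react-scripts default):

# build the image from the Dockerfile above
docker build -t react-demo .

# run it and map the dev server's port to the host
docker run -p 3000:3000 react-demo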

Including containers in your CI/CD pipeline

Here's an example of a container being built and run with Conducto as the CI/CD tool. The pipeline builds and runs the Docker image we created above.

import conducto as co

def cicd() -> co.Serial:
    # base image for the pipeline steps, with the project files copied in
    image = co.Image("node:current-alpine", copy_dir=".")
    make_container_node = co.Exec("docker build --tag randomdemo:1.0 .")
    run_container_node = co.Exec("docker run --publish 8000:8080 --detach --name rd randomdemo:1.0")

    # run the build and run steps in order
    pipeline = co.Serial(image=image, same_container=co.SameContainer.NEW)
    pipeline["Build Docker image"] = make_container_node
    pipeline["Run Docker image"] = run_container_node

    return pipeline

if __name__ == "__main__":
    co.main(default=cicd)
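Assuming you save that script as pipeline.py, launching it locally typically looks like the following; the --local flag is Conducto's local launcher (double-check the Conducto docs if that's changed), and the docker ps call just confirms the container from the "Run Docker image" step is up:

# launch the pipeline on your machine
python pipeline.py --local

# after the "Run Docker image" step, the container named "rd" should be listed
docker ps --filter name=rd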

Other considerations

Remember that if you're still working with a monolith, it's going to take some time to get CI/CD ready. One small bug can take down the entire application, and unit tests aren't the easiest things to write in that situation. So don't get too down if the upfront investment seems like a lot, because it can be.

The payoff is that later on, everything will run so seamlessly that you forget DevOps is in place until something breaks. That's way better than dealing with angry support calls, stressing about deployments, and decreasing customer trust.


Make sure you follow me on Twitter because I post about stuff like this and other tech topics all the time!

If you’re wondering which tool you should check out first, try Conducto for your CI/CD pipeline. It’s pretty easy to get up and running and it’s even easier to debug while it’s running live.
