Ed LeGault for Leading EDJE

What have we learned now that everything is in containers?

Now that containerization has been around for quite a few years, we can take a step back and look at the lessons learned from turning everything into containers. You will be hard-pressed to find an environment that doesn't either package the resulting build artifact as an image or at least use build agents that run steps inside containers. Let's take a look at how this once-new, now-standard world is shaping up and what we can learn from it.

What has gone well?

  • Building something once and running it everywhere has produced consistent test results, because if it worked in one environment it should work in another. Right? Right? Well, that is the goal, and it points to a key tenet of CI/CD and containerization: don't rebuild the image per environment. Re-tag, don't rebuild. As long as you are doing that, running the same image in production that you tested in lower environments is a good thing, and "it worked on my machine" stops being an excuse (see the first sketch after this list).
  • You can change out the entire internal workings of the application and it will still run in a consistent manner. This is cool. You can move your application from a Spring app running in a Tomcat-based image to Spring Boot, and it will still start, stop and run in a consistent manner as a container. As long as your application adheres to its API contract, ports and monitoring requirements, the tools that run and monitor it don't really care what it is made of on the inside.
  • Robust runtime orchestration tools can provide scaling, monitoring, healing and automated deployment because they are built to handle containers. The most famous example is Kubernetes. You can scale your application and perform rolling upgrades just by using the tooling provided, as long as it is dealing with a containerized application of course (see the second sketch after this list). This is a plus, but I have a feeling Kubernetes is going to come up again in another category.
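
To make "re-tag, don't rebuild" concrete, here is a minimal sketch of promoting a tested image by re-tagging it. The registry host, image name and tags are hypothetical:

```bash
# Promote the exact image QA tested by re-tagging it; nothing is rebuilt.
# registry.example.com, myapp and the tags are placeholder names.
docker pull registry.example.com/myapp:1.4.2-rc1
docker tag registry.example.com/myapp:1.4.2-rc1 registry.example.com/myapp:1.4.2
docker push registry.example.com/myapp:1.4.2
```

The production deployment then pulls `myapp:1.4.2`, which is bit-for-bit the image that passed testing.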
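And a sketch of the scaling and rolling-upgrade tooling mentioned above, assuming a Deployment named `myapp` with a container also named `myapp` (both hypothetical):

```bash
# Scale out, roll out a new image version, and roll back, all with built-in tooling.
kubectl scale deployment/myapp --replicas=5
kubectl set image deployment/myapp myapp=registry.example.com/myapp:1.4.2
kubectl rollout status deployment/myapp    # watch the rolling update complete
kubectl rollout undo deployment/myapp      # roll back if the new version misbehaves
```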

What has not gone well?

  • The learning curve for application developers, who now need to know more about the infrastructure and packaging of their application. Producing an image as a deployment artifact moves the responsibility of understanding the runtime infrastructure to the development team, who may have had no previous knowledge of what their application was running on or how. This could be seen as a good thing once the knowledge gap has been overcome. However, there are many organizations where the developers "just write code and are paid to pump out features". Although this line of thinking is a DevOps anti-pattern, it is a necessary evil in the IT world we live in.
  • Applications need to take in configuration values as environment variables. Applications are often built and packaged with dev, QA and prod configuration files bundled into the image. This needs to be avoided in favor of environment variables and, in most cases, requires application code changes to accommodate it (a sketch follows this list).
  • The learning curve for orchestration templating options such as docker-compose or, if you are using Kubernetes, tools like helm or kustomize. Orchestrating the application usually means defining resources in YAML files, with some values that differ per environment, which in turn means templating those YAML resources. Using these tools to their full potential requires learning new concepts and new software (see the helm sketch after this list).
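
Here is a minimal sketch of injecting configuration at run time instead of baking per-environment files into the image. The variable names and values are assumptions for illustration:

```bash
# The same image runs in any environment; only the injected values change.
docker run -d \
  -e SPRING_PROFILES_ACTIVE=prod \
  -e DB_URL=jdbc:postgresql://prod-db:5432/app \
  registry.example.com/myapp:1.4.2
```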
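And a sketch of the helm side, where one chart is deployed with per-environment values files. The chart path and file names are hypothetical:

```bash
# One chart, one set of templates; only the values files differ per environment.
helm upgrade --install myapp ./charts/myapp -f values-qa.yaml    # QA
helm upgrade --install myapp ./charts/myapp -f values-prod.yaml  # production
```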

What are the lessons learned?

  • Using containers and containerization concepts does not mean Docker. There are alternatives such as podman (https://podman.io/), whose CLI is largely compatible with Docker's (see the sketch after this list).
  • The complexity of orchestration options is leading some developers to become subject matter experts in that particular software. Again, the best example is Kubernetes (I told you it was coming up again). Fully utilizing Kubernetes is an art form in and of itself, and in most cases requires a person, or a team of people, to handle the complexity of properly scaling, rolling out or rolling back the application being deployed. This can be seen as a return to the "throw it over the wall" problem between dev and ops, with the difference being that the wall has moved. Instead, it is important for developers to understand that their application runs as a container in Kubernetes and what that means. In most cases that just means having a liveness and a readiness endpoint available (see the probe sketch below) and knowing whether a change they made will impact the current memory settings. They don't have to be Kubernetes experts, but they should be aware of how their changes affect the orchestration that runs the application.
  • Cool tools can mask something that is wrong with the application. Automatic scaling, healing and restarting the application when it goes down are all awesome features. However, someone needs to go back and look at why the application crashes every day around the same time, or why it gets restarted every two days. Just because the tools recover from the problem doesn't mean there isn't a problem (the last sketch below shows where to start digging).
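
On the podman point, its CLI is close enough to Docker's that many commands are drop-in replacements. A quick sketch with a hypothetical image name:

```bash
# Most day-to-day docker commands work unchanged under podman.
podman build -t myapp:dev .
podman run --rm -p 8080:8080 myapp:dev
alias docker=podman   # a common shortcut when migrating existing scripts
```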
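On the liveness/readiness point, here is a sketch of the endpoints developers typically need to expose. The paths assume Spring Boot Actuator with Kubernetes probes enabled, so treat them as one possible convention rather than a requirement:

```bash
# Health endpoints an orchestrator can probe; paths assume Spring Boot Actuator.
curl http://localhost:8080/actuator/health/liveness
curl http://localhost:8080/actuator/health/readiness
```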
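Finally, a sketch of where to start when the orchestrator keeps quietly healing the same failure. The pod name is a placeholder:

```bash
# Investigate restarts instead of letting self-healing hide the root cause.
kubectl get pods                      # the RESTARTS column shows the symptom
kubectl describe pod myapp-abc123     # events: OOMKilled? failed liveness probe?
kubectl logs myapp-abc123 --previous  # logs from the container's previous run
```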

Conclusion

Overall, containerization has proven itself to be useful and valuable, and I think the pros far outweigh the cons. Containers are not always the right tool for the job, though; sometimes solutions such as serverless are the answer, depending on the requirements of the problem you are trying to solve. It also makes me wonder if I will be retired by the time someone gets hired to convert the applications I have containerized into the next thing, whatever that is.

