This article was originally published on HackerNoon.
Every company is becoming a software company these days, and a great deal of effort is going into making software development happen at record speed.
In today's cloud market, new DevOps tools and methodologies emerge every day. Teams have so many options to choose from that competition has peaked, which in turn puts pressure on software firms to constantly deliver products and services even better than their competitors'.
As the cloud approach rapidly gains popularity, many firms are embracing cloud practices and concepts like containerization, which puts DevOps tools like Docker in high demand. In this article, we will look at some facts about Docker that are useful for developers and architects.
Long ago, before the introduction of Docker and containers, big firms would buy many servers to make sure their services and business didn’t go down. This usually meant buying more servers than needed, which was extremely expensive. But they had to, because as more and more users hit their servers, they needed to scale without any downtime or outages.
Then VMware and IBM (there is still debate over who introduced it first) brought us virtualization, which allowed us to run multiple operating systems on the same host. This was a game-changer, but it was still expensive: every virtual machine carried its own kernel and full OS.
Fast forward to modern-day containerization, and we have a company called Docker that solves a lot of these problems.
Docker makes it easy for developers to develop and deploy apps inside neatly packaged virtual containerized environments. This means apps run the same no matter where they are and what machine they are running on.
Docker containers can be deployed to just about any machine without any compatibility issues, so your software stays system agnostic, making software simpler to use, less work to develop, and easy to maintain and deploy. Simply put, the days of ‘It is working on my machine’ are long gone.
A developer will usually start by accessing the Docker Hub, an online cloud repository of Docker containers, and pull one containing a pre-configured environment for their specific programming language, such as Ruby or NodeJS with all of the files and frameworks needed to get started. Docker is one such tool that genuinely lives up to its promise of Build, Ship, and Run.
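That pull-then-run workflow can be sketched with the Docker CLI. This is an illustrative command sequence, not from the original article; the image name `node:18-alpine` is just an example tag from Docker Hub.

```shell
# Fetch a pre-configured Node.js environment from Docker Hub
docker pull node:18-alpine

# Run a throwaway container from that image and verify the runtime inside it
docker run --rm node:18-alpine node --version
```

The same two commands work unchanged on any machine with Docker installed, which is exactly the ‘Build, Ship, and Run’ promise in action.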
Worldwide and across the industry, many companies and institutions use Docker to speed up their development activities. PayPal, for example, has converted all of its more than 700 applications into container-based applications. They run 150,000 containers, and this has helped boost their developer productivity by 50%.
MetLife, another great example, made huge savings on infrastructure because it could manage more applications with fewer operating systems. That freed up a lot of hardware. After moving to Docker, MetLife saw a 70% reduction in VM costs, 67% fewer CPUs, 10x average CPU utilisation, and a 66% overall cost reduction. That's the power of Docker for you.
- No hypervisor
Docker is a form of virtualization, but unlike with virtual machines, the resources are shared directly with the host. This lets you run many Docker containers where you might only be able to run a few virtual machines.
A virtual machine has to reserve a fixed amount of resources (disk space, memory, processing power), emulate hardware, and boot an entire operating system. The VM then communicates with the host computer via a translator application running on the host operating system called a ‘Hypervisor.’
On the other hand, Docker communicates natively with the system kernel, bypassing the middleman on Linux machines, and even Windows 10, Windows Server 2016, and above.
This means you can run any Linux distribution in a container, and it will run natively, because containers share the host’s kernel. Not only that, Docker uses less disk space too.
In virtualization, the infrastructure is the bare-metal server itself; the host could also be your laptop or desktop. On top of that, we have the host operating system: something like Windows Server, or on your personal laptop, macOS or a Linux distribution.
In virtualization, we also have something known as a Hypervisor. The virtual machines we run are basically isolated operating system environments packaged inside a file, and the Hypervisor is what knows how to read that file. This file is a virtual machine image, and common Hypervisors like VMware and VirtualBox know how to interpret these images.
On top of that, we have the actual guest OSs. Each guest OS has its own kernel, and this is where things start getting expensive from a resource-allocation perspective.
On top of the OS is where we install our binaries and libraries, and finally we copy over all the files that make up the application we want to deploy to the server.
Now let’s contrast this with containerization. Here we still have the infrastructure and the OS, but no Hypervisor. Instead, a process called the Docker daemon runs directly on the operating system and manages things like running containers, images, and the command-line utilities that come with Docker.
The applications that we run within these images basically run directly on the host machine. What happens is we create images that are like copies of the application that we want to distribute, and a running instance of an image is what’s known as a container.
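The image-versus-container distinction shows up directly in the CLI. A rough sketch, not from the original article; the `nginx` image and container names are just examples:

```shell
# Images are the read-only templates stored on disk
docker images

# Containers are running instances of those images;
# several containers can run from the same image
docker run -d --name web1 nginx:1.25
docker run -d --name web2 nginx:1.25

# List the running instances
docker ps
```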
Containerization basically kills the ‘It works on my machine but not theirs’ drama.
Image: An executable package that has everything needed to run an application, including configuration files, environment variables, the runtime, and libraries.
Dockerfile: This contains all the instructions for building the Docker image. It is basically a simple text file with instructions to build an image. You can also refer to this as the automation of Docker image creation.
Build: Creating an image snapshot from the Dockerfile.
Tag: Version of an image. Every image will have a tag name.
Container: A lightweight software package/unit created from a specific image version.
DockerHub: Image repository where we can find different types of images.
Docker Daemon: The Docker daemon runs on the host system. Users cannot communicate with the daemon directly, only through Docker clients.
Docker Engine: The system that allows you to create and run Docker containers.
Docker Client: The primary user interface to Docker (the docker binary). It receives docker commands from the user and handles communication to and from the Docker daemon.
Docker registry: Docker registry is a solution that stores your Docker images. This service is responsible for hosting and distributing images. The default registry is the Docker Hub.
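The terms above fit together in a single workflow. A minimal, hypothetical Dockerfile for a NodeJS app might look like this (file names and versions are illustrative assumptions, not from the original article):

```dockerfile
# Dockerfile: the build instructions for an image
FROM node:18-alpine        # base image pulled from a registry (Docker Hub)
WORKDIR /app
COPY package.json .
RUN npm install            # bake dependencies into the image
COPY . .
CMD ["node", "server.js"]  # what a container from this image will run
```

Building with `docker build -t myapp:1.0 .` produces an image with the tag `1.0`; `docker run myapp:1.0` starts a container from that image; and `docker push` sends the image to a registry.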
Docker, as a tool, fits perfectly well in the DevOps ecosystem. It is built for modern software firms keeping pace with rapid changes in technology. You cannot ignore Docker in your DevOps toolchain; it has become a de facto tool and almost irreplaceable.
What makes Docker so good for DevOps enablement are the advantages it brings to the software development process: containerizing applications supports ease of development and fast release cycles.
Docker solves many Dev and Ops problems, above all the classic ‘It works on my machine,’ and enables both teams to collaborate effectively and work efficiently.
According to the RightScale 2019 State of the Cloud Report, Docker is already winning the container game with amazing year-over-year adoption growth.
With Docker, you can make immutable dev, staging, and production environments. You will have a high level of control over all changes because they are made using immutable Docker images and containers. You can always roll back to the previous version at any given moment if you want to.
Development, staging, and production environments become more alike. With Docker, it is guaranteed that if a feature works in the development environment, it will work in staging and production, too.
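One common way to keep those environments alike is a Compose file checked into the repository. A minimal sketch (service names, images, and ports are assumptions for illustration):

```yaml
# docker-compose.yml: the same definition runs in dev, staging, and production
services:
  web:
    build: .            # built from the project's own Dockerfile
    ports:
      - "8080:8080"
  db:
    image: postgres:15  # pinned version, identical in every environment
```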
Datadog took a sampling of its customer base, representing more than 10,000 companies and 700 million containers. Its report shows that at the beginning of April 2018, 23.4 percent of Datadog customers had adopted Docker, up from 20.3 percent one year earlier. Since 2015, the share of customers running Docker has grown at a rate of about 3 to 5 points per year.
Before adopting Docker, you should know some best practices to reap the benefits of this tool to the fullest extent. Here are some Docker best practices to keep in mind:
- Build images to do just one thing (also see Security Best Practices for Docker Images)
- Use tags to reference specific versions of your image
- Prefer minimalist base images
- Don’t run as the root user whenever possible
- Enable Docker Content Trust
- Use Docker Bench for Security
- Leverage Docker Enterprise features for additional protection
- Write your Dockerfile carefully; build slim images, not fat ones
- Persist data outside of the container
- Use Docker Compose as Infrastructure as Code, and keep track of versions using tags
- Use role-based access control
- Don’t bake user credentials, keys, or other sensitive data into the image; pass them in as deployment-time variables instead
- Make use of Docker’s build cache; move "COPY . ." toward the end of the Dockerfile where possible
- Use a .dockerignore file
- Don’t install debugging tools in production images, to keep image size down
- Always set resource limits on containers
- Use swarm mode for small applications
- Don’t blindly trust downloads from Docker Hub! Verify them! See more at ‘DockerHub Breach Can Have a Long Reach’
- Build Docker images with tuned kernel parameters
- Use Alpine-based images where possible
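Several of these practices can be combined in one Dockerfile. The following is a hedged sketch, not a definitive implementation; the app layout, user name, and file names are assumptions:

```dockerfile
# Minimalist base image (Alpine) keeps the image slim
FROM node:18-alpine

WORKDIR /app

# Copy dependency manifests first so this layer stays cached
# until the dependencies actually change
COPY package.json package-lock.json ./
RUN npm ci --omit=dev          # no dev/debugging tools in the image

# COPY . . goes last to get the most out of the build cache;
# pair it with a .dockerignore file to exclude secrets and junk
COPY . .

# Don't run as root
RUN addgroup -S app && adduser -S app -G app
USER app

CMD ["node", "server.js"]
```

At run time, pair an image like this with resource limits, for example `docker run --memory=256m --cpus=0.5 myapp:1.0`.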
Docker is a fantastic piece of technology with a high level of adoption, making it a default tool when it comes to embracing DevOps practices. Docker has initiated the digital transformation at various firms.
Millions of users rely on Docker, downloading 100M container images a day or more (as per their blog), and over 450 companies, including some of the largest enterprises in the world, have turned to Docker Enterprise Edition.
But recently, Docker sold its enterprise business to Mirantis, and this might have a huge impact on the developer community. The problem was that Docker didn’t make enough money from its strategy of betting on Docker Swarm, while Kubernetes took over the orchestration world and almost killed Swarm. That led Docker to join forces with a company that works alongside Kubernetes and OpenStack and helps companies deploy with speed.
While great companies like these make their shifts, it is the developer who is affected the most. Companies get acquired, and they may or may not be the same the following year. It is time to step up and make a move.