Wait, Docker? Yup, I’ve heard of it but what is that exactly?
To set the stage with some background: Docker was created by Solomon Hykes. Containers (more on this later), on the other hand, have been around for a long time, with early implementations dating back to the early 2000s. However, modern container technology was popularized by Docker, which made containers much easier to use and far more accessible to developers.
Alright, here is a quick scenario: I have a project that works fine on my machine, but when it runs on another person’s computer, it starts glitching or stops working altogether. How could that even be possible, when it runs fine on mine? Solving exactly these kinds of issues is what we use DOCKER for.
The formal definition (the one Google will give you) says:
Docker is an open-source platform that enables developers to build, deploy, run, update and manage containers — standardized, executable components that combine application source code with the operating system (OS) libraries and dependencies required to run that code in any environment.
So, let’s make it a little simpler :)
Docker is a tool that lets people put their computer programs into a special container so that they can run on any computer, even if the computer is different from the one the program was originally made for. This makes it easier for people to share their programs and for the programs to work on different computers.
Let’s break this into points now, because too many big paragraphs create confusion:
Docker is compatible with any programming language (It is almost like a sheet of paper on which you can write anything)
Docker has sealed, air-tight containers (WHAT? What are we talking about?) The containers mentioned here are the heart of Docker. (They are like self-contained packages that make it easy to move software around and run it on different computers.)
Containers wrap up the entirety of our code and are portable! (Portable HOW?) You can take this container and load it into somebody else’s computer and the code will run exactly how it did on your computer.
Docker also has the concept of “container networking”, which means that even when containers run on different hosts, they can communicate as if they were on the same network.
Now, a formal definition of a container:
A container is a lightweight and standalone executable package that contains everything needed to run the application, including the application’s code and all of its dependencies.
Let’s rewind a little bit. So:
Docker is a piece of software
It allows you to create containers
Containers are super powerful and tightly packed
(OK, but what can the containers include?)
Docker containers include:
CODE (Your application code)
DEPENDENCIES (like the libraries, frameworks, etc. needed for the app to run)
CONFIGURATION FILES (are used to define how an application or system behaves in different environments, such as development, testing, and production.)
PROCESSES (a container runs one or more processes that make up the application; these run in their own isolated environment so that they don’t interfere with processes running in other containers or on the host (how cool!))
NETWORKING (allowing a container to communicate with other containers or the host system)
OPERATING SYSTEM (some chunks of it) — Docker containers typically ship with a lightweight, stripped-down set of operating system files inside the container image (hmm), containing only the components necessary to run your application. And much more!
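The list above maps neatly onto a Dockerfile, the recipe Docker uses to build a container image. Here is a minimal, hypothetical sketch for a Node.js app (the file names, port, and base image are assumptions for illustration, not a prescription):

```dockerfile
# OPERATING SYSTEM: a lightweight, stripped-down base image
FROM node:18-alpine

# CONFIGURATION: settings that can differ per environment
ENV NODE_ENV=production

WORKDIR /app

# DEPENDENCIES: install the libraries/frameworks the app needs
COPY package*.json ./
RUN npm install --omit=dev

# CODE: copy the application source into the image
COPY . .

# NETWORKING: document the port the container listens on
EXPOSE 3000

# PROCESSES: the command the container runs when it starts
CMD ["node", "server.js"]
```

Every line adds a layer to the image, and the result is a package containing everything from the list above.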
Docker can run and manage containerized applications(?) on a server. Therefore, Docker can also act as a service deployed on a server, so you can take your container and place it wherever you like!
(?) So quickly then, what’s a “containerized application”? — It is a software application packaged with all its dependencies, libraries, and configuration files in a container.
So, let’s wrap it up!
Docker is a tool for creating and running software applications. With it, you can create self-contained packages called containers that hold all the code, libraries, and dependencies needed to run the application. Docker provides software that you install on your computer to create and manage these containers. Containers created with Docker are standardized and portable, meaning they can easily be moved and run across different machines and environments.
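To make that concrete, here is roughly what creating and running a container looks like from the command line (this assumes Docker is installed and a Dockerfile exists in the current directory; the name `my-app` is just an example):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t my-app .

# Run a container from that image, mapping container port 3000 to the host
docker run -d -p 3000:3000 --name my-app-container my-app

# See the running container
docker ps
```

The same two commands work on any machine with Docker installed, which is exactly the portability story from above.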
Now that we gathered some insights, let’s glance through the Docker architecture:
Docker Daemon: This is the background process that manages images, containers, and networks.
Docker CLI: This is the command-line interface tool that allows users to interact with the Docker daemon.
Docker Images: These are the read-only templates used to create containers.
Docker Registry: This is a storage and distribution system that allows users to share and distribute their Docker images with others.
Docker Container: This is a runnable instance of a Docker image.
Together, these components enable users to create, manage, and run Docker containers consistently and efficiently.
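These components all show up in everyday commands. For instance, pulling and running a public image touches each of them (again assuming Docker is installed; `nginx` is just a common example image):

```shell
# The Docker CLI talks to the Docker daemon, which pulls the image
# from a registry (Docker Hub by default)
docker pull nginx:alpine

# Images: list the read-only templates now stored locally
docker images

# Container: start a runnable instance of the image
docker run -d --name web nginx:alpine

# Pushing would upload your own image to a registry
# (requires a login and a repository of your own):
# docker push <your-username>/<your-image>:latest
```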
Now let’s discuss “KUBERNETES”, or “K8s”
Some background: Kubernetes was developed by Joe Beda, Brendan Burns, and Craig McLuckie at Google.
And now, a short formal definition,
> K8s is used to orchestrate containerised cloud-native microservices apps.
Let’s break the definition down to a simple understanding,
Orchestrate: refers to the process of managing the app (manage as in: deploying the app, scaling it up or down based on demand, self-healing or resilience (automatically replacing or restarting any containers that fail or become unresponsive), and rolling out updates and rollbacks too.)
Containerize: (pack the app up so it can run anywhere, including the cloud) — package the application and its dependencies into a container image, which can be easily deployed and run on any infrastructure that supports containers.
Containerized apps: (apps that run in the container) — are applications that have been packaged into container images and can be run in containers.
Cloud Native: (Essentially something that understands the cloud’s terms and conditions) — means designing and building software applications that are optimized for the cloud, with features like auto-scaling, self-healing, rolling updates, and rollbacks. Cloud-native apps are specifically designed to work well in cloud environments and take advantage of cloud services and tools.
Microservices: an application that is built from a lot of smaller independent specialised parts that work together.
For example, when building with microservices, specialized services like authentication, video conversion, video streaming, certificate generation, and discussion sections are bundled together. To manage all of these services, a special tool is needed, and that’s where Kubernetes comes in.
> Kubernetes is a tool that helps manage the specialized services that, bundled together, make up a microservices application.
Kubernetes is designed to be cloud-independent, which means it can work with any cloud infrastructure, whether it’s AWS, Google Cloud, or your own cloud. You can move your applications and their orchestration across any cloud infrastructure without being locked into a specific provider, as Kubernetes allows seamless migration of both.
In the past, we used to package all our application components into one monolithic app and deploy it to servers. But that has changed with microservices coming into existence.
Today, applications are not built by a single team, but rather by multiple teams working together. For example, each team focuses on a specific aspect of the application, such as video playback, certificate generation, certificate dispatch, or authentication flow. These teams communicate with each other but work independently, and together they form a collection of microservices!
We need to containerize our app by designing and packaging it in a way that includes not only the code but also the dependencies and some parts of the OS so that everything needed to run it moves along with the app.
This also means that Docker by itself is only one piece of the picture. Under the hood, container runtimes such as containerd do the actual work of running containers, and additional tools and technologies may be needed to create a complete container environment for running applications in production.
Alright, pause! But what are some of the benefits of using Kubernetes?
Users experience no downtime during deployments or updates
Load balancing and scaling capabilities for high performance
Built-in disaster recovery features for backup and restoration.
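As a sketch of what these benefits look like in practice, here are a few standard kubectl commands (the deployment name `my-app` and the image tag are assumptions for illustration, and a running cluster is required):

```shell
# Scale a deployment up or down based on demand
kubectl scale deployment my-app --replicas=5

# Roll out a new version; pods are replaced gradually, so users
# experience no downtime
kubectl set image deployment/my-app my-app=my-app:v2

# Watch the rolling update progress
kubectl rollout status deployment/my-app

# Something went wrong? Roll back to the previous version
kubectl rollout undo deployment/my-app
```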
Okay, now let’s quickly glance over Kubernetes architecture,
- Kubernetes architecture consists of a master node and worker nodes.
- The master node controls the state of the cluster and coordinates tasks, while worker nodes run the containers and communicate with the master. The worker nodes are where the actual work happens.
- Each worker node runs a Kubernetes process called the kubelet, which lets it communicate with the master node and execute the tasks assigned to it.
- The worker nodes have different numbers of Docker containers where the applications are deployed.
- Kubernetes shields(abstracts) developers from worrying about the specifics of underlying infrastructure and instead provides a simple, declarative way(declarative API) to describe how an application should look and behave.
- Pods are the smallest unit of deployment and consist of one or more tightly coupled containers. The containers inside a Pod work together and share things like networking and storage volumes so that they can do their job well. (For example, one container might serve a website, while another container might save information about the website.)
- Higher-level abstractions such as deployments, replica sets, and services are built on top of pods to provide scalable and fault-tolerant application deployments.
- The master node runs the processes to manage and run the cluster (a Kubernetes cluster is a set of nodes (physical or virtual machines) that run containerized applications), including the API server.
- The API server exposes a RESTful API that can be used to create, read, update, and delete Kubernetes objects such as pods, services, and deployments.
- Kubernetes lets developers create their own custom objects and rules (custom resources and controllers) for managing those objects. This means we can create our own special tools and use them to automate different parts of Kubernetes that were not possible before.
- The distributed key-value store called etcd stores the configuration and state of the cluster.
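Putting several of these ideas together, here is a minimal, hypothetical Deployment manifest: a declarative description that the API server accepts and stores in etcd, and that the cluster then works to make true (the names and image are assumptions for illustration):

```yaml
apiVersion: apps/v1
kind: Deployment            # higher-level abstraction built on top of pods
metadata:
  name: my-app
spec:
  replicas: 3               # desired number of pod copies (via a replica set)
  selector:
    matchLabels:
      app: my-app
  template:                 # pod template: the smallest unit of deployment
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:v1          # container image to run
          ports:
            - containerPort: 3000
```

You describe *what* you want (three replicas of this image), and Kubernetes figures out *how* to keep that true, which is exactly the declarative approach described above.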
In conclusion, Docker enables consistent and isolated app packaging and execution in lightweight, portable, and scalable containers. Kubernetes automates the deployment, scaling, and management of containerized apps, transforming modern app development and management in the cloud-native era.
I extend my sincere gratitude for taking the time to read this document. I hope that the information provided has contributed to your understanding of Docker and Kubernetes. Once again, thank you for your attention, and I wish you all the best in your future endeavours.
Until next time!