DevOps sits at the intersection of Development and Operations: the application release process. When working on an application, a developer's main goal is to deliver the result to the end user, whether the team follows an Agile or a Waterfall approach. The user must be able to use the app no matter what.
DevOps means making this delivery process continuous and faster, with minimal errors, and preventing bugs from reaching users. In other words, DevOps ensures that high-quality, well-tested improvements are delivered to users.
This article accompanies a webinar I gave; you can watch the full version.
The first and most important roadblock is miscommunication and lack of collaboration between developers and operations. Keep in mind that releasing an application includes the following:
- Coding the app, then deploying and running it;
- Developers are responsible for the coding, while operations are responsible for running the app.
Developers usually face the problem of "I wrote an application, but I cannot deploy and run it", while operations deal with "I am running the app, but I don't know how it works". Developers code without considering where and how the app will be deployed, and operations try to deploy it without understanding what they are deploying or why.
This usually results in miscommunication between the two teams. After finishing the code, developers often document the deployment too poorly, or some of the app's features cause too many issues. As a result, the operations team struggles to deploy; they might even send the release back with improvement requests. This kind of miscommunication can stretch the release period to days, weeks, or even months.
So, there is no clearly defined automated process between the end of coding and the start of deployment. And even if there is one, it is based on complex checklists, with a constant need to get decisions approved by both teams.
Traditionally, one of the teams is responsible for the development, while the other one has to deal with operations. These two have very different agendas, which makes it hard for them to cooperate properly. Developers want to push new features faster; the operations team, on the other hand, needs to make sure that those changes will not cause any issues since operations maintain the stability of production. They focus on making the app available, stable, safe, and so on.
This means that the app might take longer to be released, especially since the operations team doesn't fully understand the code.
For example, a developer creates a new feature that consumes too many resources in the production environment. Servers get overloaded and crash, and now the operations team needs to fix it. Since it is operations that has to put out the fire, developers are often less careful than operations about how each change affects the product's stability, even though every employee's main goal should be delivering high-quality applications to end users fast. In practice, developers want to implement and release features as quickly as possible, while operations focus on maintaining the system's stability. This is why operations often resist new changes, creating a conflict of interest. This kind of setup naturally makes it difficult for the two teams to collaborate.
Every company has to keep security a top priority when working on a new feature. The operations and security teams have to carefully evaluate any change to make sure nothing can affect system stability. In a traditional setup, this manual evaluation takes days, weeks, or even months.
As mentioned before, DevOps is all about removing any roadblocks that slow down the process, including security issues. That is why the DevSecOps term was created — to highlight and remind teams about the importance of security.
Many projects hire a separate team or create new roles for testing applications on different levels: specific features, the complete app, different environments, and performance. Often these tests have to be done manually, since teams cannot always rely on their automated tests.
And only after manual testing is over can changes be released. Even though this may be done not by the development or operations teams but by a separate tester role, it is an important part of the release process, and it can slow the release down considerably.
Many tasks, such as testing, security checks, and deployments, used to be done manually during the release process. For example, operations would do most of their tasks by hand: either directly executing commands on the servers to install tools, change configuration, and apply patches, or by writing a script or small program for each task.
But both cases involve manual work: deploying applications, preparing the deployment environment, and configuring servers, user access, and permissions. This makes the process slower and more error-prone. Manual work also complicates knowledge sharing: the people who do these tasks have to document them, and everyone else has to read that documentation. The process is neither transparent nor easy to trace.
If infrastructure is configured manually and something happens to it, quickly recovering and replicating the exact same infrastructure state becomes impossible: you would have to remember exactly what was done to the servers. Again, the release process slows down due to new roadblocks.
When it comes to security and tests, a DevOps engineer is qualified to bridge the tasks of both the development and operations teams. DevOps can remove all of the mentioned roadblocks that slow down the release process, whatever the problem might be.
Instead of manual, inefficient processes, DevOps provides fully automated, streamlined ones that make app releases easy and efficient.
DevOps is a combination of practices and tools that makes releasing software fast while keeping quality high.
DevOps allows developers and operations to work together more often. Different companies implement DevOps differently, so there is no standard way to become a DevOps engineer.
However, since the start of DevOps adoption, the process has taken a more specific form, with common patterns across many companies, including an actual DevOps engineer role. The set of technologies used to implement DevOps principles is what every DevOps engineer needs to learn. It includes the well-known CI (Continuous Integration) and CD (Continuous Delivery) processes.
Let's see what makes up a CI/CD pipeline, what tools and concepts you need to learn to become a DevOps engineer, their tasks and responsibilities, and the line between DevOps and the development and operations teams.
It all starts with the application, coded by developers using a specific technology stack: programming languages and build tools. They work in a code repository; one of the most popular is Git. As a DevOps engineer, you will not be programming the app, but you need to understand how developers work: which Git workflows they prefer, how the app is configured to talk to other services such as databases, basic automated testing concepts, and so on.
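A typical feature-branch Git workflow can be sketched end to end against a throwaway local repository. All paths, branch names, and commit messages below are illustrative, not from the article:

```shell
# Minimal feature-branch workflow in a disposable repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main 2>/dev/null || { git init -q; git checkout -q -b main; }
git config user.email "dev@example.com"
git config user.name "Dev"
echo "v1" > app.txt
git add app.txt
git commit -q -m "initial commit"

# A developer branches off main for a new feature...
git checkout -q -b feature/login
echo "login feature" >> app.txt
git commit -qam "add login feature"

# ...and the change is merged back into main (in practice, via a pull request).
git checkout -q main
git merge -q --no-ff -m "merge feature/login" feature/login
git log --oneline
```

On a real project, the merge into main would then trigger the CI/CD pipeline instead of being done locally.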
The application has to be deployed on a server, on-premises or in the cloud, for users to access it, so someone has to create that infrastructure. As a DevOps engineer, you will be responsible for preparing the infrastructure to run the application. Since most servers and applications run on Linux, you also need basic knowledge of Linux (shell commands and the Linux file system), the CLI, server administration, and how to SSH into a server.
It would be best if you also learned the basics of networking and security, such as configuring firewalls to secure applications and opening ports to make them accessible from outside.
Other necessary skills include:
- Learning how IP addresses and ports work.
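As a small sketch of the port side of this, Bash can test whether a TCP port accepts connections via its `/dev/tcp` pseudo-device, with no extra tools installed. The host and port below are illustrative, and the firewall command is shown only as a comment because it requires root:

```shell
# Check whether a TCP port on a host is open, using Bash's /dev/tcp.
port_open() {
  # Try to open a connection; succeed silently or fail silently.
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    echo open
  else
    echo closed
  fi
}

port_open 127.0.0.1 1   # port 1 is almost never listening, so this prints "closed"

# To actually allow external traffic to a port, you would adjust the firewall
# (root required), e.g. on Ubuntu with ufw:  sudo ufw allow 443/tcp
```

This is only a quick diagnostic trick; dedicated tools like `ss`, `nmap`, or the cloud provider's security-group rules are what you would use on real infrastructure.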
However, DevOps engineers don't have to know advanced networking and security concepts and be able to administer the server from start to finish. System admins and network and security engineers usually specialize in these areas.
Your job as a DevOps engineer is to understand these concepts well enough to prepare a server to run your application, not to manage the server and the whole infrastructure.
Nowadays, applications often run as so-called containers. This means you have to understand virtualization and container concepts in general and be able to manage containerized applications on a server. The best-known container technology today is Docker.
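Containerizing an app typically starts with writing a Dockerfile. Here is a minimal sketch, assuming a Node.js app with a `server.js` entry point listening on port 3000; all file names and the port are assumptions for illustration:

```dockerfile
# Illustrative Dockerfile for a small Node.js app (names are assumptions).
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

You would then build and run it with something like `docker build -t myapp:1.0 .` followed by `docker run -p 3000:3000 myapp:1.0`.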
On the one hand, developers create new features and fix bugs. On the other, we have infrastructure and servers configured to run applications. The question is how to transfer new features and bug fixes from the development team to the servers and make them available to end users quickly and efficiently. This is the DevOps engineer's main goal.
The next step is saving this artifact somewhere, for example in an artifact repository: DockerHub for Docker images, or ECR (Amazon Elastic Container Registry). DevOps engineers must understand how to create and manage artifact repositories, and how to build one pipeline that performs all of these steps in sequence. GitHub Actions, GitLab CI, or Jenkins can help you with that.
The pipeline connects to the Git repository to get the actual code. This is part of the Continuous Integration (CI) process, in which code changes in the Git repository are continuously tested. You only want to deploy new features and bug fixes to the server after they are tested, built, and packaged.
There could be more steps, for example sending Slack notifications to the team about the pipeline state or handling failed deployments. This flow represents the core of the CI/CD pipeline.
The CI/CD pipeline is at the core of all DevOps tasks and responsibilities. As a DevOps engineer, you should be able to configure it completely.
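The stages described above can be sketched as a GitHub Actions workflow. This is one possible shape under assumed conditions (a Node.js app, a DockerHub account, and a `DOCKERHUB_TOKEN` secret); the repository, image, and secret names are all illustrative:

```yaml
# .github/workflows/ci-cd.yml -- illustrative pipeline sketch, not a drop-in config.
name: ci-cd
on:
  push:
    branches: [main]
jobs:
  build-test-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4        # get the code from the Git repository
      - run: npm ci && npm test          # run the automated tests (CI)
      - run: docker build -t myorg/myapp:${{ github.sha }} .   # build the artifact
      - run: |                           # push the artifact to an image repository
          echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u myorg --password-stdin
          docker push myorg/myapp:${{ github.sha }}
      # A deployment step (CD), e.g. kubectl apply or a cloud-specific action,
      # plus a Slack notification on success/failure, would follow here.
```

The same sequence of stages can be expressed in GitLab CI or a Jenkinsfile; the concepts carry over.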
Nowadays, many companies use virtual infrastructure in the cloud (also known as Infrastructure as a Service, offered by AWS, Google Cloud, and others) instead of building their own physical infrastructure.
Doing this job would be impossible without learning the core concepts of at least one cloud platform. These platforms manage a lot for you: for example, through a cloud provider's admin UI you can create networks, firewalls, and every other part of your infrastructure as managed services.
For example, say your application runs on AWS. You will need to learn AWS and its services (note that AWS is pretty complex, but you don't have to know every single service).
Our applications will run as containers. To manage a few of them, Docker should be enough, but if you have many containers and microservices, you will need a more powerful container orchestration tool. The most popular one is Kubernetes.
Kubernetes is a powerful but very complex tool, so it requires a lot of effort to set up and manage multiple clusters for different teams.
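To give a feel for what "managing containers with Kubernetes" means in practice, here is a minimal Deployment manifest. The app name, image, and port are illustrative assumptions:

```yaml
# deployment.yaml -- minimal illustrative Kubernetes Deployment (names are assumptions).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                    # Kubernetes keeps three copies running at all times
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myorg/myapp:1.0
          ports:
            - containerPort: 3000
```

Applying it with `kubectl apply -f deployment.yaml` tells the cluster the desired state, and Kubernetes continuously works to maintain it, restarting crashed containers automatically.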
Imagine thousands of containers on hundreds of servers. How do you track the performance of an individual application or spot infrastructure problems?
DevOps engineers set up monitoring for applications, the underlying Kubernetes clusters, and the servers themselves. Monitoring tools such as Prometheus usually help with that.
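As a rough sketch of how such monitoring is wired up, a Prometheus configuration lists the targets to pull metrics from. The target names and ports below are assumptions (an app exposing a `/metrics` endpoint, and `node_exporter` on a server):

```yaml
# prometheus.yml -- minimal illustrative scrape configuration (targets are assumptions).
global:
  scrape_interval: 15s             # how often Prometheus pulls metrics
scrape_configs:
  - job_name: myapp
    static_configs:
      - targets: ["myapp:3000"]    # the app must expose a /metrics endpoint
  - job_name: node
    static_configs:
      - targets: ["server1:9100"]  # node_exporter provides server-level metrics
```

On top of this, teams usually add a dashboard tool such as Grafana and alerting rules so problems surface before users notice them.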
Every project requires testing and development environments to properly prepare the application for deployment. Creating and maintaining even one such infrastructure takes a lot of time and is very error-prone, and we don't want to do it manually.
As mentioned before, every DevOps engineer aims to automate as many processes as possible. So how do we automate creating infrastructure, configuring, and deploying? This can be done by two types of Infrastructure as Code tools:
- Infrastructure provisioning tools (Terraform, Pulumi)
- Configuration management tools (Ansible, Chef)
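As a small taste of the provisioning side, here is a Terraform sketch that declares a single server on AWS. The region, AMI ID, and names are placeholders, not values from the article:

```hcl
# main.tf -- illustrative Terraform sketch (region, AMI, and names are assumptions).
provider "aws" {
  region = "eu-central-1"
}

resource "aws_instance" "app_server" {
  ami           = "ami-0123456789abcdef0"   # placeholder AMI ID
  instance_type = "t3.micro"
  tags = {
    Name = "app-server"
  }
}
```

Because the infrastructure is described as code, `terraform apply` can recreate the exact same state on demand, which solves the manual-configuration recovery problem described earlier.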
As a DevOps engineer, you must know at least one tool of each type.
Since you will be working closely with developers and system administrators to automate some of their tasks, you will be writing scripts, and maybe small applications, for things like backups, system monitoring, cron jobs, or network management. To do that, you need to know a scripting language. This could be an operating-system-specific one, such as Bash for Linux/macOS or PowerShell for Windows, or a more powerful and flexible language such as Go or Python, which works regardless of the operating system on your servers or local machine.
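A backup script is a typical example of such automation. Here is a minimal sketch in Bash; the directories are created on the fly as stand-ins for real paths, and in a real setup the script would run from cron:

```shell
# Minimal backup script sketch: archive a directory under a timestamped name.
set -e
src=$(mktemp -d)                 # stand-in for the data directory to back up
backups=$(mktemp -d)             # stand-in for the backup destination
echo "important data" > "$src/data.txt"

# Create a compressed, timestamped archive of the source directory.
archive="$backups/backup-$(date +%Y%m%d-%H%M%S).tar.gz"
tar -czf "$archive" -C "$src" .
echo "created $archive"
```

A cron entry such as `0 2 * * * /usr/local/bin/backup.sh` (path illustrative) would run it nightly; rotation and off-site copies would be the obvious next additions.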
Go is easy to learn, easy to read, and flexible. It has libraries for most databases, as well as for cloud platforms such as AWS and Google Cloud.
You might be thinking right now: "How many of these tools do I need to learn? Do I need to learn multiple tools in each category? Which ones do I choose?"
You should learn the most popular and widely used tool in each category: once you understand one tool's concepts, picking up the alternatives becomes much easier.
It is important to learn how these technologies work together, because that is what DevOps engineers do, and it would be best to start using them on actual projects right away.