Mohammad Quanit

Deployment approaches in Microservices

Deploying a monolithic application usually means running one or more instances of a single, often large, application. Deploying a monolith is not always straightforward, but it is much simpler than deploying microservices.

Microservice applications can consist of tens or hundreds of interconnected services written in a variety of programming languages and frameworks. Each microservice is a mini-application with its own resources, scaling, and deployment, and you typically need to run several instances of a single microservice to scale it.

For example, let's say you have an e-commerce application consisting of microservices such as Catalog, Cart, Search, and Payment. You need to deploy each of these services separately, and each service may need to run on more than one instance to achieve the scalability that particular service requires.

Deployment of microservices written in Golang requires careful planning and consideration of various deployment strategies. These strategies help ensure that your microservices are reliable, scalable, and can be efficiently managed in a production environment. Here are some deployment strategies and practices for microservices with Golang:

Containerization

Containerization is a technique where you build, test, and deploy your application in isolation, without interfering with other services. Tools like Docker, LXD, and Podman are used for containerizing and deploying microservices. Each microservice is packaged as a lightweight container along with its dependencies, making it consistent and portable across different environments.

Docker is one of the most popular and most widely used container engines, and the one almost all organizations prefer. It is highly configurable and developer-friendly, which makes it an automatic choice for building containers. We are going to cover Docker, and containerization in general, in detail in the next module.
Below is an example Dockerfile for Golang:

# syntax=docker/dockerfile:1

FROM golang:1.21

# Set destination for COPY
WORKDIR /app

# Download Go modules
COPY go.mod go.sum ./
RUN go mod download

# Copy the source code. Note the slash at the end, as explained in
# https://docs.docker.com/engine/reference/builder/#copy
COPY *.go ./

# Build
RUN CGO_ENABLED=0 GOOS=linux go build -o /web-server

# Optional:
# To bind to a TCP port, runtime parameters must be supplied to the docker command.
# But we can document in the Dockerfile what ports
# the application is going to listen on by default.
# https://docs.docker.com/engine/reference/builder/#expose
EXPOSE 8080

# Run
CMD ["/web-server"]
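For context, here is a minimal sketch of the kind of Go web server this Dockerfile could package; the handler and its response message are placeholders rather than part of the original article:

// main.go: a minimal HTTP service listening on the port the Dockerfile EXPOSEs.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "web-server: ok")
	})
	log.Println("listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}

With these files in place, docker build -t web-server . followed by docker run -p 8080:8080 web-server builds the image and runs the container locally.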

Orchestration

When containerizing with Docker or any other tool, there are many microservices to containerize and deploy. The questions remain: where do we deploy them, and how do we manage tens or hundreds of containers as we scale up? Orchestration is the technique of managing however many containers you have in an automated way. Platforms like Kubernetes and Docker Swarm are widely used for managing containerized microservices.

Kubernetes, or K8s for short, is the most popular container orchestration tool; it was created by Google and is the one most organizations prefer. Docker Swarm, introduced by Docker Inc. itself, is also used for managing containers. It lacks some of the features Kubernetes has, but both tools provide capabilities like scaling, load balancing, service discovery, and rolling updates.
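To give a feel for what orchestration looks like in practice, here is a minimal sketch of a Kubernetes Deployment manifest for one microservice; the catalog name, image, and registry are hypothetical:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog
spec:
  replicas: 3                 # keep three instances of the service running
  selector:
    matchLabels:
      app: catalog
  template:
    metadata:
      labels:
        app: catalog
    spec:
      containers:
        - name: catalog
          image: registry.example.com/catalog:1.0.0   # hypothetical image
          ports:
            - containerPort: 8080

Applying this with kubectl apply -f deployment.yaml asks Kubernetes to keep three replicas alive, rescheduling or restarting them as needed.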

Blue-Green Deployment

The Blue-Green deployment technique is, in general, a pattern that applies to both monolith and microservices architectures; it involves running two identical environments of your infrastructure. The primary goal is to minimize downtime when switching from the "blue" (current) to the "green" (new) environment. When a new version of a microservice is ready, traffic is switched from blue to green, allowing for easy rollbacks if issues arise. The two environments need to be kept separate but should look as similar as possible; they can be made up of different machines, or of virtual machines running on the same or different hardware.

Blue-Green deployment improves availability by keeping microservices reachable during development and deployment. There is no downtime because an identical version of your microservices runs alongside the stable one that serves incoming traffic, so if the stable version crashes or becomes unstable, the other environment can take over the traffic. Another benefit is that if the new version isn't working correctly, you can quickly roll back to the previous one (the blue environment). Microservices are constantly monitored, and if any issue arises, traffic is reverted to the blue state. A closely related technique is Red-Black deployment, a newer term used by Netflix, Istio, and other frameworks/platforms that support container orchestration; it is subtly but powerfully different from Blue-Green deployment.

The difference is that in Blue-Green deployments both versions may receive incoming requests at the same time, whereas in Red-Black deployments only one of the versions receives traffic at any point in time.
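On Kubernetes, one common way to implement the blue-to-green switch is a Service whose selector points at either environment. This is a sketch assuming two otherwise identical Deployments labelled version: blue and version: green; the cart service name is hypothetical:

apiVersion: v1
kind: Service
metadata:
  name: cart
spec:
  selector:
    app: cart
    version: blue   # change to "green" to cut traffic over; change back to roll back
  ports:
    - port: 80
      targetPort: 8080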

Canary Deployment

Canary deployment is one of the most popular strategies for rolling out application infrastructure. Like Blue-Green deployment, this technique can be used with both monolith and microservice architectures. The idea is to transition gradually from blue to green instead of doing it all at once.

In a canary deployment, engineers roll out new features or changes gradually, in stages, with the goal of exposing them to a specific subset of users first. This means releasing the new version of a service to a small percentage of the load and checking that it works as expected. A canary deployment releases only a single microservice at a time, and microservices with higher criticality and risk can be made available before others.

To ensure a microservice is thoroughly tested with real users before full launch, engineers can use canary deployment. This method compares different service versions, reduces downtime, and improves availability. Detecting issues early prevents critical microservices from being compromised and keeps the entire system safe.
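One simple (if coarse) way to sketch a canary on Kubernetes is to run two Deployments behind the same Service and control the split with replica counts; the search names and images are hypothetical, and real setups often use a service mesh such as Istio for precise traffic percentages:

# Both Deployments carry the label app: search, so a Service selecting
# app: search spreads traffic across them; with 9 stable replicas and
# 1 canary replica, roughly 10% of requests hit the canary.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: search-stable
spec:
  replicas: 9
  selector:
    matchLabels:
      app: search
      track: stable
  template:
    metadata:
      labels:
        app: search
        track: stable
    spec:
      containers:
        - name: search
          image: registry.example.com/search:1.0.0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: search-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: search
      track: canary
  template:
    metadata:
      labels:
        app: search
        track: canary
    spec:
      containers:
        - name: search
          image: registry.example.com/search:1.1.0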

Rolling Deployment

Rolling deployment updates microservices one at a time while keeping the others running. In this strategy, the new version of an application gradually replaces the old one, with the deployment happening over a period of time. Implementing rolling deployments can significantly increase availability and reduce the risk of service disruptions: multiple instances are up almost all the time, so users can access your application without interruption. As the newer version takes over completely, the old version is retired, allowing for a seamless transition with minimal downtime.

Rolling deployments update the system incrementally, which reduces downtime and improves the reliability of the application by reducing the risk of widespread failures. SREs and DevOps engineers gradually update the servers and continuously monitor them, so if any issue arises it can be detected early and resolved before the whole system is affected.

Rolling deployments also simplify fixing issues that occur during deployment: because the system is updated incrementally, only the updated servers need to be rolled back rather than the entire system. This gives developers and administrators more control and flexibility to protect the system's integrity.
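On Kubernetes, rolling behavior is configured per Deployment; here is a sketch of the relevant fragment, with illustrative values:

spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # create at most one extra pod during the update
      maxUnavailable: 0   # never drop below the desired replica count

If a problem shows up mid-rollout, kubectl rollout undo deployment/<name> reverts the updated pods to the previous version.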

Serverless Deployment

Serverless, despite its name, means application resources are deployed and hosted on servers that engineers never have to manage; everything related to infrastructure is taken care of for you. Platforms provided by cloud companies, such as AWS Lambda, Google Cloud Functions, and Azure Functions, are serverless platforms that supply all the resources you need to run a microservice on a pay-as-you-go model. Serverless microservices are built from cloud functions that perform highly specific roles within an application. These functions automatically scale based on demand, and you pay only for what it takes to run them.

Serverless microservices contain serverless functions: small blocks of code that run in response to an incoming request to that microservice. We discussed that microservices are themselves small independent services that can be scaled and managed independently of one another, so how does serverless fit in?

Just as we can run microservices in a container platform like Docker, separately from each other, we can write a function for each microservice that runs on some cloud vendor without having to manage any of the overhead. A single microservice can have multiple functions deployed at the same time. Cloud providers handle the infrastructure, allowing developers to focus on code, which enables a more efficient workflow and a streamlined development process.
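To make this concrete, here is a minimal sketch of a Go function for AWS Lambda using the official github.com/aws/aws-lambda-go package; the cart behavior and response body are hypothetical:

package main

import (
	"context"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// handler is one small, highly specific function; a Cart microservice
// might deploy several of these side by side.
func handler(ctx context.Context, req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	return events.APIGatewayProxyResponse{
		StatusCode: 200,
		Body:       `{"status":"item added to cart"}`,
	}, nil
}

func main() {
	lambda.Start(handler)
}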

Automation & Security Considerations

Automation has become a crucial aspect of software delivery, especially when building cloud-native applications. Deployment automation means automating the workflow from development through testing to the deployment of every single microservice, making the process reliable and effective across the SDLC (Software Development Lifecycle). The goal of automating deployment is to eliminate the challenges of manual deployments and to improve both the quality and the pace of releasing microservices.

DevOps engineers usually manage all sorts of deployments in the infrastructure and are responsible for fixing any issues that come up. There are plenty of tools that help them set up automated CI/CD (Continuous Integration/Continuous Deployment) pipelines: tools like Jenkins, Travis CI, or GitLab CI/CD can automate testing, building, and deploying microservices. Deployment automation also gives engineers quick feedback, because releases are less error-prone and can happen at a higher frequency.
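As an illustration, a minimal GitLab CI/CD pipeline for one Go microservice might look like the sketch below; the registry, image name, and Deployment name are hypothetical:

stages:
  - test
  - build
  - deploy

test:
  stage: test
  image: golang:1.21
  script:
    - go test ./...

build:
  stage: build
  script:
    - docker build -t registry.example.com/catalog:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/catalog:$CI_COMMIT_SHORT_SHA

deploy:
  stage: deploy
  script:
    - kubectl set image deployment/catalog catalog=registry.example.com/catalog:$CI_COMMIT_SHORT_SHA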

When designing and operating microservices, security is another thing to consider at every level of the infrastructure. We discussed security testing in the last module; it is important to implement security best practices at every stage of deployment, including secure communication, access control, and vulnerability scanning. For secure communication, always use HTTPS between services.
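In Go, serving HTTPS is a one-line change from plain HTTP. Here is a minimal sketch; the certificate and key paths are placeholders you would point at real certificates:

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})
	// cert.pem and key.pem are placeholder file names; in production these
	// are issued by a certificate authority or injected by the platform.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}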

As the engineer responsible for deployment, you should know about phishing and credential stuffing, but it's also important to watch for attacks that come from within your own network. To keep the network safe, use HTTPS throughout your microservices architecture. When you rely on third-party dependencies, create automated workflows that scan your codebase and detect issues in those dependencies.

Snyk is one of the most popular tools for this kind of security work; it helps you find vulnerabilities not just in your codebase but in your infrastructure as well.

Top comments (5)

Rak

Nice! For your serverless deployments you could consider automating the creation of your containers and IaC using Nitric; this could let you focus more on your microservices logic.

Dan

Appreciate the in-depth write-up! We're actually working on something to help specifically on the orchestration piece especially during dev: github.com/kurtosis-tech/kurtosis

Debosmit Ray

Note: checking out Argo CD would be quite worthwhile, especially if the microservices are getting deployed to K8s.

Mohammad Quanit

Sure, I will look into it.

Debojyoti Chatterjee

Don't you think this article is a mix of ways to deploy your microservices and deployment/release strategies?
