.NET microservice with gRPC, containerised using Azure Container Registry and deployed on Azure Kubernetes Service

c-arnab ・ 14 min read

In this post, you will learn about gRPC and protocol buffers and build a high-performance service in .NET 5 using C#. You will then test the service locally with grpcui, a tool similar to Postman but for gRPC; containerise the application and publish the image to Azure Container Registry using an ACR task; test the containerised application by running the image on an Azure Container Instance and building a gRPC client in .NET 5 using C#; and finally deploy the service to Azure Kubernetes Service.

Microservices are booming.
From lower costs to better performance to less downtime to the ability to scale, microservices provide countless benefits relative to monolithic designs. Rather than a text-based format like the JSON typically used with Representational State Transfer (REST), inter-service communication is ideally served by a binary protocol that makes serialisation and deserialisation cheap and is optimised for the purpose. gRPC, developed at Google and later open-sourced, is a high-performance RPC framework and one of the most viable options for communication between internal microservices.

Develop gRPC Service

Foundational concepts of gRPC

For a long, long time, developers have been building applications that speak with other applications - Java applications speaking with Java applications, .NET applications speaking with .NET applications, and so on.
Then SOAP (the Simple Object Access Protocol) was born, so that applications written in one language could speak with applications written in another.
Along came WCF from Microsoft. Though WCF also had a NetTcp binding, a fast binary encoding for communication between .NET clients and servers, the primary mechanism of communication in SOAP was XML. XML (lots and lots of it) made communication slow, and there was a need for something easier for machines to parse, as well as a better mode of communication within and across data centres.
Google created its own RPC framework and binary format, protobuf. Later, after HTTP/2 was created, Google built a new RPC framework on top of it, called it gRPC, and made it public - an insanely powerful, fast, highly optimised protocol without the overhead of a SOAP envelope.
Unlike SOAP, where the different languages generated XML to pass between data centres, gRPC relies on the Protocol Buffers language to define service contracts independently of the programming language used to develop the applications, and uses protocol buffers (protobuf) as the binary data interchange format for inter-service communication.
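To make the size difference concrete, here is a toy sketch (plain Python, no protobuf library) of how protobuf lays a single string field out on the wire - a varint-encoded tag, a varint length, then the raw UTF-8 bytes - compared with the same value as JSON. The field number and value mirror the productId field we define later in the article; the encoding rules come from the protobuf wire-format specification.

```python
import json

def encode_varint(n):
    # Protobuf varint: 7 bits per byte, least-significant group first;
    # the high bit of each byte flags whether another byte follows.
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

def encode_string_field(field_number, value):
    # Wire type 2 (length-delimited): varint tag, varint length, UTF-8 bytes.
    tag = (field_number << 3) | 2
    data = value.encode("utf-8")
    return encode_varint(tag) + encode_varint(len(data)) + data

wire = encode_string_field(1, "pid001")
print(wire.hex())                                 # 0a06706964303031 (8 bytes)
print(len(json.dumps({"productId": "pid001"})))   # 23
```

Eight bytes on the wire versus twenty-three as JSON, and the field name is never repeated per message - multiplied across millions of inter-service calls, this is where gRPC's performance edge comes from.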

Further Reading on basics

gRPC - A Modern Framework for Microservices Communication
grpc.io

Create a gRPC Service using VS Code

We will be building a simple service which Checks for availability of products in inventory.
Open VS Code, Open a new terminal and run the command
dotnet new grpc -o ProductAvailabilityService

A folder named ProductAvailabilityService will be created and a templated application generated inside it.
Move inside the folder ProductAvailabilityService
cd ProductAvailabilityService

Run the command
dotnet add ProductAvailabilityService.csproj package Grpc.AspNetCore.Server.Reflection
This package enables gRPC server reflection, which will help us test our service.

gRPC natively supports defining a service contract in which you specify the methods that can be invoked remotely, along with the data structures of their parameters and return types. Using the tools provided for all major programming languages, a server-side skeleton and client-side code (a stub) can be generated from the same Protocol Buffers file (.proto) that defines the service contract.

Delete the template-generated greet.proto file in the ProductAvailabilityService/Protos folder.
Then, add a new file named product-availability-service.proto in the same folder.
Here we define the APIs and the messages in a language-agnostic way.
ProductAvailabilityCheck is the service definition, which contains a single method, CheckProductAvailabilityRequest.
The method takes a ProductAvailabilityRequest and sends back a ProductAvailabilityReply. The request and response types, along with the implementation base class, will be generated automatically by the Grpc.Tools NuGet package, so none of this plumbing needs to be written by hand.

// Protos/product-availability-service.proto

syntax = "proto3";

option csharp_namespace = "ProductAvailabilityService";

package ProductAvailability;

service ProductAvailabilityCheck {
  rpc CheckProductAvailabilityRequest (ProductAvailabilityRequest) returns (ProductAvailabilityReply);
}

message ProductAvailabilityRequest {
  string productId = 1;  
}

message ProductAvailabilityReply {
  bool isAvailable = 1;
}

The Grpc.Tools NuGet package learns which .proto files to generate types from via the .csproj file.
So, open the .csproj file and update the Protobuf item as shown below.

<ItemGroup>
    <Protobuf Include="Protos\product-availability-service.proto" GrpcServices="Server" />
</ItemGroup>

Delete the template-generated GreeterService.cs file in the Services folder.
Create a new file ProductAvailabilityCheckService.cs in the Services folder.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Grpc.Core;
using Microsoft.Extensions.Logging;

namespace ProductAvailabilityService
{
    public class ProductAvailabilityCheckService: ProductAvailabilityCheck.ProductAvailabilityCheckBase
    {
        private readonly ILogger<ProductAvailabilityCheckService> _logger;
        private static readonly Dictionary<string, Int32> productAvailabilityInfo = new Dictionary<string, Int32>() 
        {
            {"pid001", 1},
            {"pid002", 0},
            {"pid003", 5},
            {"pid004", 1},
            {"pid005", 0},
            {"pid006", 2}   
        };
        public ProductAvailabilityCheckService(ILogger<ProductAvailabilityCheckService> logger)
        {
            _logger = logger;
        }

        public override Task<ProductAvailabilityReply> CheckProductAvailabilityRequest(ProductAvailabilityRequest request, ServerCallContext context)
        {
            return Task.FromResult(new ProductAvailabilityReply
            {
                IsAvailable = IsProductAvailable(request.ProductId)
            });
        }

        private bool IsProductAvailable(string productId) {
            bool isAvailable = false;

            if (productAvailabilityInfo.TryGetValue(productId, out Int32 quantity))
            {
                isAvailable = quantity > 0;
            }

            return isAvailable;
        }
    }
}

Here a dictionary, productAvailabilityInfo, acts as a dummy inventory data store for products with ids pid001 to pid006 and their available quantities.
The method CheckProductAvailabilityRequest defined in the .proto file is implemented here. The implementation takes the request object along with a ServerCallContext (useful for authentication/authorisation) and returns the response object.

Open Startup.cs and update GreeterService to ProductAvailabilityCheckService.
Also, add services.AddGrpcReflection() and endpoints.MapGrpcReflectionService().

 public void ConfigureServices(IServiceCollection services)
        {
            services.AddGrpc();
            services.AddGrpcReflection();
        }
 app.UseEndpoints(endpoints =>
            {
                endpoints.MapGrpcService<ProductAvailabilityCheckService>();
                endpoints.MapGrpcReflectionService();
            });

Finally, to set up an HTTP/2 endpoint without TLS, open Program.cs and add the code below inside the call to ConfigureWebHostDefaults, so the service runs over plain HTTP on port 5000.

 .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.ConfigureKestrel(options =>
                    {
                        // Setup a HTTP/2 endpoint without TLS.
                        options.ListenLocalhost(5000, o => o.Protocols = 
                            HttpProtocols.Http2);

                    });
                    webBuilder.UseStartup<Startup>();
                });

Also add the using statement
using Microsoft.AspNetCore.Server.Kestrel.Core;

Run the service by typing in terminal
dotnet run
The service will be up and running on the Kestrel web server.


Test Service Locally

To test our service we use grpcui, which is like Postman but for gRPC APIs instead of REST.

To install the tool, first install Go (Golang site).

Then run the two commands below in PowerShell (run as Administrator).
go get github.com/fullstorydev/grpcui/...
go install github.com/fullstorydev/grpcui/cmd/grpcui

Next, run the tool with the server host address and server port number

grpcui -plaintext localhost:5000

gRPCui will expose a local URL to test the gRPC service.

Open the URL shown and enter a product id.
The product availability will be shown as the response.

Test service on Azure

Next, we run the gRPC service in a container on Azure. Though we could build the container image on our workstation by installing the Azure CLI and Docker Engine, we will instead build it on Azure using Cloud Shell and an Azure Container Registry task, and push the image to Azure Container Registry.

Update Program.cs to the code below, as we no longer want the service to listen on localhost but on any IP, on port 80.

.ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.ConfigureKestrel(options =>
                    {
                        // Setup a HTTP/2 endpoint without TLS.
                       // options.ListenLocalhost(5000, o => o.Protocols = 
                       //     HttpProtocols.Http2);
                        options.ListenAnyIP(80, o => o.Protocols = 
                            HttpProtocols.Http2);
                    });
                    webBuilder.UseStartup<Startup>();
                });

Dockerfile Basics

A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build users can create an automated build that executes several command-line instructions in succession.

Create Dockerfile

Create a new file in the service project, name it Dockerfile and add the contents below.

FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80

FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY "ProductAvailabilityService.csproj" .
RUN dotnet restore "ProductAvailabilityService.csproj"
COPY . .
RUN dotnet build "ProductAvailabilityService.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "ProductAvailabilityService.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "ProductAvailabilityService.dll"]

Let's understand what is happening in the Dockerfile.
First, the packages and dependencies the application requires are restored. Next, the application is built and then published to the /app/publish directory. For these stages, the .NET SDK image available from the Microsoft Container Registry (MCR) is used as the base image.
Then the .NET runtime image is used as the base and the binaries generated in the previous stage are copied to the /app directory. Please note that the assembly name in ENTRYPOINT is case-sensitive.

Add source code to github

Create a .gitignore file, either manually or using the .gitignore Generator extension from the Visual Studio Marketplace.

Create an empty github repository at https://github.com/new

Run the following commands in terminal.

Initialise the local directory as a Git repository.
git init

Add all the files in the local directory to staging, ready for commit.
git add .

Set your account’s default identity
git config --global user.email “your-emailid@your-email-domain.com”
git config --global user.name "your-username"

Commit the staged files
git commit -m "first commit"
git branch -M main

To push your local repository changes, add your GitHub repository as a remote repository
git remote add origin My-GitHub-repository-URL
(I replace My-GitHub-repository-URL with https://github.com/c-arnab/grpc-article.git, where you can find my entire source code)

Push the local repository to the remote repository we added earlier, called "origin", in its branch named main.
git push -u origin main

You can now see all your files in the GitHub repository.

Build Container Image

Login to Azure Portal at https://portal.azure.com/

Select Azure Shell and then bash option
It's always advisable to create a new resource group when working on a new project, or in this case the code in an article.

Run the following command to create a resource group.
az group create --name grpc-container-demo --location southindia --subscription 59axxx4d-xxxx-4352-xxxx-21dd55xxxca0
Please ensure you use your own subscription id after --subscription.

Azure Container Registry (ACR) is an Azure-based private registry for container images.
Run the following command to create an Azure Container Registry. You will need to provide a unique name. The --admin-enabled flag will be useful when running the container image in a container instance.
az acr create --name MyGrpcContainerRegistry --resource-group grpc-container-demo --sku Standard --admin-enabled true

Change directory to clouddrive
cd clouddrive

Clone the code with Git
git clone https://github.com/c-arnab/grpc-article.git

Change directory to grpc-article
cd grpc-article

An ACR task, a feature of ACR, is a container image build service in the cloud.
Build the container image using the az acr build command and store it in the registry.
az acr build --image product-availability-service:latest --registry MyGrpcContainerRegistry .

Run the following command to list the images now in the Azure Container Registry
az acr repository list --name MyGrpcContainerRegistry --output table

Run image in Azure Container Instance

Azure Container Instances is a service that enables a developer to deploy containers on the Microsoft Azure public cloud without having to provision or manage any underlying infrastructure.

Once the image is in ACR, it's easy to test the application in the container image.

Search for container registries in the Azure portal and select the service.
Select the registry created earlier and then select Access Keys under Settings.
The --admin-enabled flag used while creating the registry has ensured that the Admin user is enabled.
Select Repositories under Services and then select product-availability-service under Repositories.
Click the three dots on the right side of the latest tag and select Run instance.
In the ensuing screen, create the Container Instance by providing a name and selecting a location near you. Then click OK.
Once the deployment completes, you should receive a notification; clicking it takes you to the container group, where you can view the Container Instance, confirm that it is running and, finally, get the IP address, which will be required when we create a client for the service next.

Create a gRPC client

For this article we create a console app as a client, which calls the CheckProductAvailabilityRequest method on the service.

Close all applications/folders in VS Code and open a new terminal.

Run command to create a console app
dotnet new console -o ProductAvailabilityClient

Open the folder ProductAvailabilityClient

In the terminal, run the following commands to add the necessary gRPC libraries

dotnet add ProductAvailabilityClient.csproj package Grpc.Net.Client
dotnet add ProductAvailabilityClient.csproj package Google.Protobuf
dotnet add ProductAvailabilityClient.csproj package Grpc.Tools

Create a folder named Protos and copy the .proto file used in the service application into this folder.
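One step that is easy to miss: like the server, the client relies on the Grpc.Tools package reading the .csproj to know which .proto file to generate code from. Make sure ProductAvailabilityClient.csproj contains an ItemGroup like the one below - this time with GrpcServices="Client", so a client stub is generated instead of a server base class. Without it, the ProductAvailabilityCheckClient type used below will not exist.

```xml
<ItemGroup>
    <Protobuf Include="Protos\product-availability-service.proto" GrpcServices="Client" />
</ItemGroup>
```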

Replace the program.cs code with the following code

using System;
using System.Threading.Tasks;
using ProductAvailabilityService;
using Grpc.Net.Client;

namespace ProductAvailabilityClient
{
    class Program
    {
        static async Task Main(string[] args)
        {
            var channel = GrpcChannel.ForAddress("http://52.226.2.157:80");
            var client =  new ProductAvailabilityCheck.ProductAvailabilityCheckClient(channel);
            var productRequest = new ProductAvailabilityRequest { ProductId = "pid001"};
            var reply = await client.CheckProductAvailabilityRequestAsync(productRequest);

            Console.WriteLine($"{productRequest.ProductId} is {(reply.IsAvailable ? "available" : "not available")}!");
            Console.WriteLine("Press any key to exit...");
            Console.ReadKey();
        }
    }
}

Replace the IP address with the IP address of your container instance. Note the ProductId; you can change this value to any of the other product ids.

Run the console client by typing in terminal
dotnet run

The console should show whether the product is available. Change the product id and rerun the application to confirm that it works.

Deploy on Azure Kubernetes Service

Foundational concepts of Kubernetes

Containers are not new. The fundamental idea goes back to 1979 and Unix's chroot, which allowed processes to be isolated. Then came Linux containers in 2008. Docker made its appearance in 2013 and put the power of containers into the hands of developers.

And then Google came up with Kubernetes to run these containers at scale, with intelligent deployment, auto-repair, horizontal scaling, service discovery and load balancing, automated deployments and rollbacks, and key/secret management.

An application running on Kubernetes is known as a workload and runs inside a set of pods. A workload can consist of a single component deployed in a single pod, or of several components that work together, deployed in one pod or across multiple pods.
You can deploy container images in Kubernetes either directly in a pod (a naked pod) or, rather than managing each pod yourself, through workload resources such as Deployments and ReplicaSets that manage a set of pods for you.
It is advisable not to use naked pods (that is, pods not bound to a ReplicaSet or Deployment), as naked pods will not be rescheduled in the event of a node failure.
A Service is an abstract way to expose an application running on a set of pods as a network service.
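The "will not be rescheduled" point is worth dwelling on: ReplicaSets and Deployments work by running a reconciliation loop that continuously compares desired state with observed state and acts on the difference, while a naked pod has no controller watching it. A minimal sketch of that idea (illustrative only - real controllers watch the API server and handle far more):

```python
def reconcile(desired_replicas, running_pods):
    # One pass of a ReplicaSet-style control loop: compute the actions
    # needed to drive the observed state toward the desired state.
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        return [("create", diff)]
    if diff < 0:
        return [("delete", -diff)]
    return []

# A node failure kills one of two replicas; the next pass schedules a replacement.
print(reconcile(2, ["grpc-demo-pod-a"]))   # [('create', 1)]
# A naked pod has no controller running this loop, so nothing recreates it.
```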

Azure Kubernetes Service (AKS) is a managed Kubernetes service in which the master nodes are managed by Azure and end users manage the worker nodes. You can use AKS to deploy, scale and manage Docker containers and container-based applications across a cluster of container hosts.

Create Service Principal

While creating the container registry we enabled admin access with the --admin-enabled flag to make it easy to run images from a container instance, but admin access is not a best practice from a security perspective. While deploying to AKS we instead use a service principal, which enables applications and services (here, AKS) to authenticate to the container registry.

Get the full Container Registry ID
ACR_ID=$(az acr show --name MyGrpcContainerRegistry --query id --output tsv)

Create the service principal with just the rights needed to pull images from the registry
SP_PASSWD=$(az ad sp create-for-rbac --name http://acr-service-principal --scopes $ACR_ID --role acrpull --query password --output tsv)
SP_APP_ID=$(az ad sp show --id http://acr-service-principal --query appId --output tsv)

Please copy the service principal id and password to a safe place, as we will need them later.
echo "ID: $SP_APP_ID"
echo "Password: $SP_PASSWD"

Create Cluster

Create an AKS cluster using the az aks create command.
az aks create --resource-group grpc-container-demo --name mygrpcdemocluster --node-count 2 --generate-ssh-keys

If you receive an error, check whether VM sizes are available to your subscription in the default region.
az vm list-skus -l southindia --query "[?resourceType=='virtualMachines' && !restrictions]" -o table

If the command above does not return any data, try other locations, for example centralindia in place of southindia.
Once you get data, run the az aks create command again with the --location flag
az aks create --resource-group grpc-container-demo --name mygrpcdemocluster --node-count 2 --generate-ssh-keys --location centralindia

Connect to Cluster

kubectl, the Kubernetes command-line client, is used to manage a Kubernetes cluster; it comes pre-installed in Azure Cloud Shell.

Use the az aks get-credentials command to download credentials and configure kubectl to connect to the Kubernetes cluster
az aks get-credentials --resource-group grpc-container-demo --name mygrpcdemocluster

View namespaces
kubectl get namespaces

Create a namespace to work in
kubectl create namespace mygrpcdemospace

Set the created namespace as the current namespace
kubectl config set-context --current --namespace=mygrpcdemospace

Confirm you are in the correct namespace
kubectl config get-contexts

Deploy application

Kubernetes uses an image pull secret to store the information needed to authenticate to a registry. To create the pull secret for an Azure container registry, run the following command with a secret name (my-acr-secret), the service principal id and password you created earlier, and your registry URL.

kubectl create secret docker-registry my-acr-secret \
    --namespace mygrpcdemospace \
    --docker-server=mygrpccontainerregistry.azurecr.io \
    --docker-username=<service-principal-appid> \
    --docker-password=<service-principal-password>

A ReplicaSet ensures that a specified number of pod replicas are running at any given time. A Deployment, however, is a higher-level concept that manages ReplicaSets and provides declarative updates to pods, along with many other useful features. Therefore, we will use a Deployment instead of using a ReplicaSet directly to deploy our container.

Run the following command to create a manifest YAML and submit it to our Kubernetes cluster, creating a new Deployment object named "grpc-demo-deployment". Note the image and imagePullSecrets values and change them to your own.

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-demo-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: grpc-demo
  template:
    metadata:
      labels:
        app: grpc-demo
        env: dev
    spec:
      containers:
      - name: grpc-demo
        image: mygrpccontainerregistry.azurecr.io/product-availability-service:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: my-acr-secret
EOF

Check if the Deployment was created
kubectl get deployments
View the current ReplicaSets deployed
kubectl get rs

Check for the Pods brought up
kubectl get pods --show-labels
Run the following command to create a new Service object named "grpc-service", which targets TCP port 80 on any pod carrying the app=grpc-demo label.
The selector determines the set of pods targeted by the service.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: grpc-service
spec:
  selector:
    app: grpc-demo
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
EOF
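Under the hood, the Service finds its pods with an equality-based label selector: a pod is targeted when its labels contain every key/value pair in the selector, and extra labels (like the env: dev in our Deployment template) are simply ignored. A small sketch of the matching rule:

```python
def selects(selector, pod_labels):
    # Equality-based selector: every selector pair must be present in the pod's labels
    return all(pod_labels.get(k) == v for k, v in selector.items())

selector = {"app": "grpc-demo"}
pods = [
    {"app": "grpc-demo", "env": "dev"},   # targeted; the extra env label is ignored
    {"app": "something-else"},            # not targeted
]
print([selects(selector, p) for p in pods])   # [True, False]
```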

Run the following command and wait until you see an external IP assigned. Copy this IP address, as we will need it to test in the next step.

watch kubectl get services

Test the service deployed on AKS

Open the client application developed earlier in VS Code and replace the IP address with the external IP address noted in the previous step.

Run the console client by typing in terminal
dotnet run

The console should show whether the product is available. Change the product id and rerun the application to confirm that it works.

Finally..

Delete the resource group and all the resources in it.
az group delete --name grpc-container-demo

What's next

Microsoft Azure also has another service, named Azure Red Hat OpenShift (ARO).
Depending on who has implemented Kubernetes, and on its version, you will have different tools for the container runtime, log management, metrics and monitoring, and so on. That led enterprise customers to ask for a consistent distribution of Kubernetes which runs the same way everywhere, from internal datacentres to any cloud. OpenShift is that enterprise deployment layer for Kubernetes in hybrid cloud mode.
Azure Red Hat OpenShift provides a flexible, self-service deployment of fully managed OpenShift clusters, jointly operated and supported by Microsoft and Red Hat, and offers an SLA of 99.95% availability.
Unlike AKS, where Microsoft manages the master nodes and the customer only pays for and manages the worker nodes, an ARO cluster is dedicated to a given customer and all resources, including the cluster master nodes, run in the customer's subscription.
Our demo code is part of an inventory microservice, itself part of a supply chain application. Such applications, depending on customer preference, are deployed either in the customer's internal datacentre or on a public cloud.
For such applications it is better to select a deployment platform which can run in any environment (bare metal, virtual machines, public clouds); OpenShift is ideally suited for this, and Azure Red Hat OpenShift is probably the best option for it among the public clouds.
So, in the next article we will look at deploying a series of microservices which are part of a supply chain application on ARO, with continuous integration, delivery and deployment.
