So you want your app deployed to your Kubernetes cluster without caring about any manual steps?
I've got you covered: it's super simple to create a continuous deployment pipeline with Google Cloud.
For the sake of understanding I chose a Node.js Express application, but it also works with React, PHP, or any other application layer.
Let's get started:
First we need to give Container Builder the rights to access our Kubernetes API. Remember, this does not give access to one certain cluster; it just allows the cloudbuild service account to access our Kubernetes clusters. So jump to the IAM settings page and look for the cloudbuild service account. If it does not exist, you might have to enable the Cloud Build API first.
We need to add the rights to access the Kubernetes API of our clusters, so click on the pen and look for the following.
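If you prefer the command line over the IAM UI, the same grant can be sketched with gcloud. The project id and number below are placeholders; the role is Kubernetes Engine Developer (`roles/container.developer`):

```shell
# Placeholder values — replace with your own project id and number.
PROJECT_ID=my-project
PROJECT_NUMBER=123456789

# The Cloud Build service account is always named
# <PROJECT_NUMBER>@cloudbuild.gserviceaccount.com.
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com" \
  --role="roles/container.developer"
```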
I won't go into detail on how to set up an Express application and introduce testing to it.
I created a repository with the sample application that we can use.
## NodeJS Continuous Deployment with Container Builder and Kubernetes Engine
To find all the details on how to use this repository, please refer to the corresponding blog post on dev.to.
To give you an overview: we have a basic Express app with two backend routes, one to retrieve all users and one to retrieve a user by id.
We also have a test folder containing tests for the two routes, written with the help of chai and mocha.
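To give an idea of the shape of such an app, here is a minimal sketch of the lookup logic behind the two routes. The user data and function names are illustrative, not taken from the repository; keeping the logic in plain functions is what makes it easy to unit-test with mocha and chai:

```javascript
// Illustrative in-memory user data (not from the sample repository).
const users = [
  { id: 1, name: 'Alice' },
  { id: 2, name: 'Bob' },
];

// Logic behind GET /users: return every user.
function getUsers() {
  return users;
}

// Logic behind GET /users/:id: find one user, or null if unknown.
function getUserById(id) {
  return users.find((u) => u.id === Number(id)) || null;
}

module.exports = { getUsers, getUserById };
```

In the real app these functions would be wired to Express route handlers that serialize the result as JSON.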
If you download the repository, you can run the following to see if the tests are working:

```shell
npm install
npm test
```
Before the app can run, we need the service and the deployment in the Kubernetes cluster, so let's quickly create both. You can also find all of these files in the repository.
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: server-production
  labels:
    app: YOUR-PROJECT-ID
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: server
    spec:
      containers:
        - name: server
          image: gcr.io/PROJECT_ID/REPOSITORY:master
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: "production"
```
The only important part here is that you change the project id and the repository to the path your repository will have.
After this we only need a service to expose our app to the internet. So quickly apply the service.
```yaml
kind: Service
apiVersion: v1
metadata:
  name: server
spec:
  selector:
    app: server
  ports:
    - name: server
      protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer
```
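Assuming the two manifests are saved as `deployment.yaml` and `service.yaml` (the filenames are my choice, not from the repository), applying them could look like this:

```shell
# Create or update the deployment and the service in the current cluster context.
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

# Watch until the LoadBalancer service gets its external IP assigned.
kubectl get service server --watch
```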
Now we get to the most important part of the whole setup: the cloudbuild.yaml. There we will define all of our continuous deployment steps.
The first amazing part: it is possible to put all of the important data into substitution variables defined in the build trigger, so you can use the same cloud build configuration for different setups.
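For example, the variables used later in this post (`_NODE_ENV`, `_DEPLOYMENT`, `_CLUSTER_ZONE`, `_CLUSTER_NAME`) can be defined as substitutions; the values below are purely illustrative and must match your own cluster and deployment:

```yaml
# Illustrative substitutions — set these in the build trigger UI,
# or under the "substitutions" key of cloudbuild.yaml.
substitutions:
  _NODE_ENV: production
  _DEPLOYMENT: server
  _CLUSTER_ZONE: europe-west1-b
  _CLUSTER_NAME: my-cluster
```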
First we install all of the node dependencies and run the tests.
```yaml
- name: 'gcr.io/cloud-builders/npm'
  args: ['install']
- name: 'gcr.io/cloud-builders/npm'
  args: ['run', 'test']
```
After this we build a Docker image containing all of the repository's files, with a properly defined environment, so you can easily do a staging deployment as well, or even branch deployments. Then we push it to the Google container image registry.
```yaml
- name: 'gcr.io/cloud-builders/docker'
  args:
    - build
    - '--build-arg'
    - 'buildtime_variable=$_NODE_ENV'
    - '-t'
    - gcr.io/$PROJECT_ID/$REPO_NAME:$BUILD_ID
    - '.'
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/$REPO_NAME:$BUILD_ID']
```
Also important to see: we tag the image with the unique build id, so that applying the deployment actually changes the image instead of reusing a stale tag.
```yaml
- name: 'gcr.io/cloud-builders/kubectl'
  args:
    - set
    - image
    - deployment
    - $_DEPLOYMENT
    - $_DEPLOYMENT=gcr.io/$PROJECT_ID/$REPO_NAME:$BUILD_ID
  env:
    - 'CLOUDSDK_COMPUTE_ZONE=$_CLUSTER_ZONE'
    - 'CLOUDSDK_CONTAINER_CLUSTER=$_CLUSTER_NAME'
```
And finally we set the image in the Kubernetes cluster. BAM! Commit hook, automated testing, automated deployment if the tests pass, no downtime.
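Put together, the three steps above live under the `steps:` key of a single cloudbuild.yaml, roughly like this:

```yaml
steps:
  # 1. Install dependencies and run the tests.
  - name: 'gcr.io/cloud-builders/npm'
    args: ['install']
  - name: 'gcr.io/cloud-builders/npm'
    args: ['run', 'test']
  # 2. Build and push the image, tagged with the unique build id.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '--build-arg', 'buildtime_variable=$_NODE_ENV',
           '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME:$BUILD_ID', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/$REPO_NAME:$BUILD_ID']
  # 3. Point the running deployment at the freshly pushed image.
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['set', 'image', 'deployment', '$_DEPLOYMENT',
           '$_DEPLOYMENT=gcr.io/$PROJECT_ID/$REPO_NAME:$BUILD_ID']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=$_CLUSTER_ZONE'
      - 'CLOUDSDK_CONTAINER_CLUSTER=$_CLUSTER_NAME'
```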
Now we open the Container Builder trigger settings and choose where our code is located.
In the last trigger step we can now add the custom substitution variables. This is the first point where we actually define the cluster, so everything is aggregated in one place and ready to go.
Now we just need to commit to master and the trigger starts.
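For example (the branch name comes from the trigger setup above; the commit message is arbitrary):

```shell
git add .
git commit -m "kick off the build"
git push origin master
```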
YIHA, now we have continuous deployment without setting up any extra services like Jenkins, Ant, or Chef. Pretty amazing!
I'm thinking of creating a tutorial series, from zero to hero in the cloud. Are you interested? Drop me a comment!