Kaloyan Yosifov

How we build and deploy updated packages with Lerna and Jenkins + K8s deployments

Quick intro

Our setup is pretty straightforward: we use GitHub with Jenkins and Kubernetes for deployments.

Project structure

The picture above shows a standard Lerna monorepo example project we created (similar to ours), to give more insight into how everything looks.

We have three apps:

  • my-first-react-app

  • my-second-react-app

  • my-third-react-app

They showcase how we can build only the packages that have changed.

The other two “shared” packages:

  • shared-components

  • shared-configs

They contain code that is reused in the three main apps listed above. You can see more details in the repository here.

The problem we wanted to solve was that we were building all of our packages (four at the moment) regardless of whether they had changed. This wastes resources and deployment time.

Also worth mentioning is that we use Lerna's standard packages directory structure for our standalone applications, so "app" and "package" are interchangeable in this context.

Lerna to the rescue

Lerna is a really awesome tool when we want a monorepo architecture that simplifies our workflow. As an example, let's say we want to start the dev server in all of our packages so we can check them out in the browser. We have three options here:

  • Open as many terminals as we have packages and run the script in each of them

  • Create a bash script that runs the command in every package

  • Or use lerna run start which does the same as the above 👆but with one single command and less bash scripting 😁

Also worth mentioning is that the run command accepts an argument that refers to a script in the package.json of each app. In our case this is the start script.

The start argument matches the "start" script in each package.json
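For reference, a package.json for one of these apps might look roughly like this; the script commands are illustrative placeholders, the important part is that lerna run start and lerna run build map to these script names:

{
  "name": "@tapro-labs/my-first-react-app",
  "version": "1.0.0",
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build"
  }
}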

Now, instead of start, we can run build to build all of the packages for production. The other cool thing about the run command is that it has a nice argument that runs the command only in packages that have changed since a given commit. In our case we want to set it to the previous commit. The argument we are referring to is --since.

To get the previous commit we use a handy git command: git rev-parse HEAD~1.

And to make it a one-liner: lerna run build --since $(git rev-parse HEAD~1). AWESOME 🍹

With all this in place, let's look at the MAGIC.

The Magic

stage('Build updated packages') {
  // limit Node's heap size so the build fits on our small build server
  env.NODE_OPTIONS = '"--max-old-space-size=768"';
  // build only the packages that changed between the previous commit and HEAD
  sh './node_modules/.bin/lerna run build --since "$(git rev-parse HEAD~1)" --concurrency 1'
}

That's it, nothing more, nothing less! The magic is pretty much one line if we exclude the stage function (which is the build stage in Jenkins) and the environment variable! With this you build only the packages that changed between the previous commit and the current one.

You can ignore the NODE_OPTIONS environment variable if your server has more than 2 GB of memory.

You can also remove the --concurrency 1 option; it was useful for us because we had a really small server.

Bonus

The blog post ends here; the command above is pretty much all you need to build only the packages that have changed. But to add more value, we have bonus content! In this next section we review how we can use what we've learned to build Docker images and deploy with Kubernetes.

The things we need to do are separated into steps that are easy to follow. The first couple of steps are tedious but important preparations before we can start building Docker images and deploying to Kubernetes.
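Before diving in, here is a rough sketch of how the steps below hang together in a scripted Jenkins pipeline; the node label, install command and environment value are illustrative, not our exact Jenkinsfile:

node('linux') {
  // the environment tag used for the Docker images, e.g. "production" or "staging"
  def environment = 'production'
  def packagesChanged = []

  stage('Checkout') { checkout scm }
  stage('Install dependencies') { sh 'yarn install --frozen-lockfile' }

  // Step 1: collect the packages changed since the previous commit
  // Step 2: map package names to deployment and Docker image names
  // Steps 3-5: build, push and deploy only those packages (shown below)
}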

Step 1 — Getting changed packages

// get updated packages using lerna's ls command
packagesChanged = sh(
  script: './node_modules/.bin/lerna ls --since "$(git rev-parse HEAD~1)" | xargs printf \'%s\\n\'',
  returnStdout: true
)
.trim()
.split('\n')
// remove everything before /
// so that we get the folder name only and not the full path
.collect { value -> value.replaceFirst(/@tapro-labs\//, '') }
// skip names that are empty
.findAll { value -> value != '' && value != null && value != ' ' }

Let’s split the code above piece by piece to understand what is happening.

./node_modules/.bin/lerna ls --since "$(git rev-parse HEAD~1)" | xargs printf '%s\n'

An example of the output is:
@tapro-labs/package1
@tapro-labs/package2

The ls command is similar to run, but instead of running a script from each updated package's package.json, it returns a list of the updated packages.

.split('\n')

Since the output is just one big string, we split it into a list containing only the package names.

.collect { value -> value.replaceFirst(/@tapro-labs\//, '') }

We loop through the list and remove the @tapro-labs/ prefix. This is handy so we do not repeat ourselves later when building the Docker images.

.findAll { value -> value != '' && value != null && value != ' ' }

We filter out values that are empty or null.
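To make this concrete, given the example output above, packagesChanged ends up as a plain list of folder names (illustrative):

// scoped prefix stripped, empty entries dropped
assert packagesChanged == ['package1', 'package2']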

Step 2 — Defining our deployments and Docker images

// we use a deploymentName mapping because our deployment names in Kubernetes
// do not always match the package names in our repository
def deployments = [
  "my-first-react-app": [
    deploymentName: "first-react-app",
    dockerImageName: "tapro-labs/first-react-app",
  ],
  "my-second-react-app": [
    deploymentName: "second-react-app",
    dockerImageName: "tapro-labs/second-react-app",
  ],
  "my-third-react-app": [
    deploymentName: "third-react-app",
    dockerImageName: "tapro-labs/third-react-app",
  ]
]

Since our app names do not match the names of our Docker images and deployments in k8s, we create a deployments variable that maps each app in our repo to its deployment and Docker image names.

Of course, if your app names match, you would not need this map variable.
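One thing to watch out for (not shown in the snippets here): shared packages such as shared-components and shared-configs have no entry in this map, so if only a shared package changes, the deployments.get(packageName) lookups later on would return null and fail. A minimal guard, assuming you only want to build and deploy apps that have a mapping:

// keep only the changed packages that actually have a deployment mapping
packagesChanged = packagesChanged.findAll { packageName ->
  deployments.containsKey(packageName)
}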

Step 3 — Building Docker images

def dockerRegistry = env.PRIVATE_DOCKER_REGISTRY
def dockerImagePrefix = env.DOCKER_IMAGE_PREFIX

packagesChanged.each { packageName ->
    stage("Building docker image for ${packageName}") {
      def dockerImageName = deployments.get(packageName).get('dockerImageName')
      // `environment` is the target environment tag, e.g. "production" or "staging", set earlier in the pipeline
      // note the trailing "." – docker build needs a build context (here the workspace root)
      sh "docker build -t ${dockerImagePrefix}/${dockerImageName}:${environment} -f ./packages/${packageName}/Dockerfile ."
    }
}

Here we loop through the changed packages, fetch the Docker image name from our deployments mapping, and build the image for each updated app.
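To illustrate, for my-first-react-app with a made-up registry prefix of registry.example.com, the interpolated shell command would look roughly like this:

docker build -t registry.example.com/tapro-labs/first-react-app:production \
  -f ./packages/my-first-react-app/Dockerfile .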

Step 4 — Pushing the images to a registry

stage("Push docker image") {
    docker.withRegistry(dockerRegistry) {
        packagesChanged.each { packageName ->
            def app = docker.image(dockerImagePrefix + "/" + deployments.get(packageName).get('dockerImageName') + ":" + environment)

// we push the image with two tags
            // one for the production or staging tag
            // the other one is a tag with the current docker build
            app.push(environment) // @tapro-labs/frontend:production
            app.push(env.BUILD_TAG) // @tapro-labs/frontend:job-132
        }
    }
}

Here we just push the Docker images to our registry. You can see that the images are tagged with two different labels:

  • The first tag is the environment, production or staging, making it the new image for that environment.

  • The second tag is the Jenkins build number, so we can see in which build a Docker image was built. This allows us to revert to a previous build on the server if the current one is broken, and it also helps us when deploying to k8s, as shown below.
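For example, if build 132 turns out to be broken, we can point a deployment back at the image from an earlier build; the namespace and registry prefix below are placeholders:

kubectl set image deployment/tapro-labs-first-react-app -n my-namespace \
  first-react-app=registry.example.com/tapro-labs/first-react-app:job-131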

Step 5 — Deploying to Kubernetes

stage('Deploy to cluster') {
    def kubernetesServerUrl = env.KUBERNETES_SERVER_URL
    def deploymentNamespace = env.DEPLOYMENT_NAMESPACE

    withKubeConfig([credentialsId: 'monorepo-example-kubernetes-service-account', serverUrl: kubernetesServerUrl]) {
      packagesChanged.each { packageName ->
        def deployment = deployments.get(packageName)
        def deploymentName = deployment.get("deploymentName")
        def deploymentDockerImageName = deployment.get("dockerImageName")

        sh "kubectl set image deployment/tapro-labs-${deploymentName} -n ${deploymentNamespace} ${deploymentName}=${dockerImagePrefix}/${deploymentDockerImageName}:${env.BUILD_TAG}"
      }
    }
}

After all the docker stuff, we initiate our deployment to Kubernetes!

We construct the deployment name and Docker image variables to pass into the kubectl command, and after all that we start our deployment:

${deploymentName}=${dockerImagePrefix}/${deploymentDockerImageName}:${env.BUILD_TAG}

You can see we do not use the production-tagged image, but the build number version.

We do this because the initial deployment already uses a Docker image with the same name and tag, @tapro-labs/frontend:production, so setting it to the same name again would not make k8s restart the deployment; nothing would happen. Therefore we use the unique tag @tapro-labs/frontend:job-132, and k8s detects the change and upgrades the deployment's pods to the new Docker image.
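If you also want the Jenkins job to wait for the rollout and fail when the new pods never become healthy, a kubectl rollout status call can be added right after the set image step; this is an optional extra, not part of our original pipeline:

// wait for the rollout to finish, failing the build if it does not complete in time
sh "kubectl rollout status deployment/tapro-labs-${deploymentName} -n ${deploymentNamespace} --timeout=120s"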

AND WE ARE DONE!

What we did

  • We now build only the apps that have changed instead of every app in the monorepo

  • We build Docker images for updated apps only

  • We deploy only those apps to Kubernetes

And that concludes our tutorial on building only updated packages and deploying them. With this improvement our builds take less time and we do not push unnecessary Docker images to the registry.

Let us know if this post was helpful and enjoyable by clicking on the like button!

Cheers 🍻
