Brian Michalski

Using GitLab Managed Kubernetes

In our last journey, we connected GitLab to a Kubernetes cluster by exploring the "Add Kubernetes Cluster" button. Now that the cluster is set up, let's put it to use!

Running Applications

GitLab's Auto DevOps is too magical for me: I don't understand Helm well enough to rely on it, and I suspect my use of Bazel's docker rules would cause problems. Instead, we'll manually add a deploy step to update our application.

1. Add a Helm chart

Helm charts are essentially YAML templates for Kubernetes resources. They let you define variables and set or override them when applying the chart... something you might traditionally do using sed or another bash trick.
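To get a feel for it, here's a hypothetical one-line excerpt from a chart template and the override that fills it in (the scaffold we generate below is fancier, but the idea is the same):

# templates/deployment.yaml (hypothetical excerpt)
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

# values.yaml supplies the defaults; --set overrides them at install time:
helm install myapp ./myapp --set image.tag=v1.2.3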

To create a new Helm chart, run a command like:

mkdir devops; cd devops
helm create myapp

This will create a myapp/ directory with all of the pieces of a Helm chart inside. You can then preview the YAML output of this template using a command like:

helm install myapp ./myapp --dry-run --debug

Helm charts can be pretty intimidating. The meat of the template is in the templates/deployment.yaml file. If you want, you can delete all of the fancy templating in this file and replace it with vanilla YAML for a Deployment object.
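If you go that route, a stripped-down templates/deployment.yaml might look something like this (a minimal sketch; the image and port are placeholders matching the examples later in this post):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.gitlab.com/bamnet/project/image:latest
          ports:
            - containerPort: 2222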

For simple apps, tweak things as necessary in your values.yaml. I kept iterating on this, comparing the --dry-run output to a hand-written deployment YAML, until it looked about right.
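That loop is easy to make concrete; helm template renders the chart locally without touching a cluster (the filenames here are placeholders):

# Render the chart to plain YAML and diff it against a hand-written manifest.
helm template myapp ./myapp > rendered.yaml
diff rendered.yaml handwritten-deployment.yaml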

2. Pulling from GitLab's container registry

Auth

If you're using a private container registry, like GitLab's, you'll need to store login information in a Secret in your Kubernetes cluster.

To begin, head to your project settings in GitLab and navigate to the Registry section. Create a Deploy Token and note the username and password.

[Screenshot: Deploy Token form]

Next, follow this tutorial to upload that secret to your cluster. I ended up using a command like:

kubectl create secret docker-registry regcred \
--docker-server=registry.gitlab.com \
--docker-username=gitlab+deploy-token-123456 \
--docker-password=p@ssw0rdh3r3 \
--docker-email=me@gmail.com

NOTE: You probably need to run this command with a --namespace flag so that the secret lands in the same namespace GitLab picks to run your application. If the namespaces don't match, you'll see errors trying to fetch the container.
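For example (the namespace name here is a guess; check what GitLab actually created for your project):

# Find the namespace GitLab created for this project/environment.
kubectl get namespaces

# Re-create the secret inside it.
kubectl create secret docker-registry regcred \
  --namespace=gitlabproject-123456-production \
  --docker-server=registry.gitlab.com \
  --docker-username=gitlab+deploy-token-123456 \
  --docker-password=p@ssw0rdh3r3 \
  --docker-email=me@gmail.com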

Helm Chart Tweak

To use this regcred secret, add it to the imagePullSecrets section of your values.yaml file like this:

image:
  repository: registry.gitlab.com/bamnet/project/image
  pullPolicy: IfNotPresent
  tag: ""

imagePullSecrets:
  - name: regcred

3. .gitlab-ci.yml updates

To apply this Helm chart as part of your CI/CD pipeline, add a job to your .gitlab-ci.yml file like the following:

deploy_myapp:
  stage: deploy
  image:
    name: alpine/helm:latest
    entrypoint: [""]
  script:
    - helm upgrade
      --install
      --wait
      --set image.tag=${CI_COMMIT_SHA}
      myapp-${CI_COMMIT_REF_SLUG}
      devops/myapp
  environment:
    name: production

The most important part of this entire job is the environment section. GitLab only exposes Kubernetes connection information (via environment variables that helm and kubectl pick up automatically) when the deploy job has an environment set. Without this section, you will get errors connecting to your cluster.
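A quick way to sanity-check that wiring, using the same alpine/helm image (the debug job and its script lines are purely illustrative):

deploy_debug:
  stage: deploy
  image:
    name: alpine/helm:latest
    entrypoint: [""]
  script:
    # GitLab injects KUBECONFIG and KUBE_* variables for jobs with an environment.
    - env | grep -i '^kube' || echo "no Kubernetes variables found"
    # Listing releases exercises the actual cluster connection.
    - helm ls
  environment:
    name: production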

There are three parts of the helm upgrade command worth noting:

  1. --set image.tag=${CI_COMMIT_SHA} overrides the tag portion of our deployment.yaml, passing in the git commit hash. This assumes your containers are tagged with the commit that built them (see the build-job sketch after this list). If you don't do this, consider a static tag like latest.
  2. myapp-${CI_COMMIT_REF_SLUG} provides the name for this deployment. If you're deploying from the master branch, this will be myapp-master. Release names must be unique, so tweak the myapp- prefix if you have multiple applications.
  3. devops/myapp at the end specifies the folder where the Helm chart files are located.
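If you don't already tag images with the commit, a plain-docker build job along these lines would do it. (This is a sketch using GitLab's predefined CI variables; this post's images are actually built with Bazel's docker rules, so treat it as just one way to get commit-tagged images.)

build_image:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA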

Pushing this new .gitlab-ci.yml file should trigger an automatic deployment to your Kubernetes cluster. Sit back, relax, and watch the dashboard to see it work.

If this is your first push, be on the lookout for a new namespace being created, probably something like gitlabproject-123456-production.

Troubleshooting Tips

  • Run helm install --dry-run --debug locally to see the planned configuration. If it doesn't look right locally, there is no way GitLab is going to get it right.
  • Connect to the Kubernetes Dashboard (or use kubectl, sketched below) to see why deployments fail.
  • Make sure your image names and tags match between the registry hosting the images and the deployment YAML trying to pull them.
  • Use the --namespace flag to make sure your registry credentials end up in the right namespace.
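If you prefer the CLI to the dashboard, the same failure details are visible from kubectl (the namespace and pod names below are illustrative):

# Look for statuses like ImagePullBackOff or CrashLoopBackOff.
kubectl get pods -n gitlabproject-123456-production

# The Events section at the bottom usually names the exact problem.
kubectl describe pod myapp-master-abc123 -n gitlabproject-123456-production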

Monitoring Applications

GitLab has a one-click install of Prometheus. I am a sucker for one-click install buttons, so I wanted to give it a spin monitoring my Go application.

1. Exporting Metrics

The OpenTelemetry docs and examples are a good starting point. Prometheus needs an HTTP endpoint to scrape the metrics from; a very simple Prometheus exporter looks like this:

package main

import (
    "fmt"
    "log"
    "net/http"

    // OpenTelemetry's Prometheus exporter (the otel v0.x-era API used here).
    "go.opentelemetry.io/otel/exporters/metric/prometheus"
)

func initMeter() {
    // Register a global meter provider backed by a Prometheus exporter.
    exporter, err := prometheus.InstallNewPipeline(prometheus.Config{})
    if err != nil {
        log.Panicf("failed to initialize prometheus exporter %v", err)
    }
    // The exporter doubles as the HTTP handler for the scrape endpoint.
    http.HandleFunc("/metrics", exporter.ServeHTTP)
    go func() {
        _ = http.ListenAndServe(":2222", nil)
    }()

    fmt.Println("Prometheus server running on :2222")
}

func main() {
    initMeter()
    // Rest of your code here.
}
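With the app running locally, you can eyeball the endpoint yourself; it should print metrics in Prometheus' plain-text format:

curl http://localhost:2222/metrics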

2. Adding Annotations

GitLab's one-click Prometheus automatically scrapes metrics from any resource that carries annotations telling it how to scrape. Add the following to your values.yaml file:

podAnnotations:
  prometheus.io/scrape: "true"
  prometheus.io/path: /metrics
  prometheus.io/port: "2222"
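For these annotations to take effect, the chart's deployment template has to copy podAnnotations onto the pods. The scaffold generated by helm create already does this; the relevant excerpt of templates/deployment.yaml looks like:

  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}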

That's it!

Troubleshooting Tips

  • Manually connect to your application and see the exported metrics. Forward the port using kubectl port-forward -n <gitlab-created-namespace> deployments/myapp-master 2222:2222 and point your browser at http://localhost:2222.
  • Manually connect to Prometheus and use the web UI to see what metrics are being scraped and run queries against them. Forward the port using kubectl port-forward -n gitlab-managed-apps service/prometheus-prometheus-server 9090:80 and point your browser to http://localhost:9090.
