Chabane R. for Stack Labs


πŸ‡ΊπŸ‡Έ πŸ‡ͺπŸ‡Ί Multi-cloud Monitoring, Logging and DevOps Patterns

Hello again !

In part 1, we saw how to build a network infrastructure and implement security patterns between Scaleway Elements and Google Cloud.

In this part 2, we will discuss the monitoring/logging and DevOps architectures that can be applied to both cloud providers. We will also start to architect our proof of concept.

Logging

We can gain visibility into Scaleway resources from within Google Cloud. There are two supported ways [1]:

  • Using the BindPlane tool from Blue Medora to ingest logs from Scaleway.
  • Using the Cloud Logging API directly from Scaleway applications or by using a custom agent.

The second option is illustrated in the following diagram.

(Diagram: Scaleway logs sent over the VPN to the Cloud Logging API in a central Google Cloud project)

  • Logs are collected in Scaleway,
  • The logs are sent to the Cloud Logging API through the secured VPN connection,
  • VPC Service Controls and Private Google Access are configured to secure the communication between Scaleway and the Cloud Logging API,
  • Logs are centralised in a dedicated project in Google Cloud.
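As a minimal sketch of the second option, an application running in Scaleway can write directly to the Cloud Logging API (v2 `entries.write`). The project ID, log name, and resource labels below are hypothetical; the helper only builds the request body, so it can be inspected before being sent over the VPN.

```python
import json


def build_log_entry_body(project_id, log_id, message, severity="INFO"):
    """Build a Cloud Logging v2 entries.write request body.

    The body is later POSTed, with OAuth credentials, to
    https://logging.googleapis.com/v2/entries:write — reached here
    via Private Google Access over the VPN connection.
    """
    return {
        "logName": f"projects/{project_id}/logs/{log_id}",
        # generic_node is suited to nodes running outside Google Cloud;
        # the labels below are illustrative.
        "resource": {
            "type": "generic_node",
            "labels": {"location": "fr-par", "namespace": "scaleway"},
        },
        "entries": [{"severity": severity, "textPayload": message}],
    }


body = build_log_entry_body("central-logging-project", "scaleway-app", "instance started")
print(json.dumps(body, indent=2))
```

In practice you would use a client library such as `google-cloud-logging` rather than raw HTTP, but the payload it produces is equivalent.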

Monitoring

We can get metrics into Cloud Monitoring in the following two ways [2]:

  • Using the BindPlane tool from Blue Medora to ingest metrics from Scaleway.
  • Using OpenCensus to write to the Cloud Monitoring API.

The second option is illustrated in the following diagram.

(Diagram: Scaleway metrics sent over the VPN to the Cloud Monitoring API in a central Google Cloud project)

  • Metrics are collected in Scaleway,
  • The metrics are sent to the Cloud Monitoring API through the secured VPN connection,
  • VPC Service Controls and Private Google Access are configured to secure the communication between Scaleway and the Cloud Monitoring API,
  • Metrics are centralised in a dedicated project in Google Cloud.
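To make the second option concrete without pulling in the OpenCensus exporter itself, the sketch below builds the Cloud Monitoring v3 `projects.timeSeries.create` request body that such an exporter ultimately writes. The project ID, metric name, and labels are all hypothetical.

```python
import time


def build_time_series_body(project_id, metric_type, value, node_id):
    """Build a Cloud Monitoring v3 timeSeries.create request body.

    Custom metrics live under the custom.googleapis.com/ prefix;
    the OpenCensus Stackdriver exporter emits equivalent payloads.
    """
    now = time.time()
    end = {"seconds": int(now), "nanos": int((now % 1) * 1e9)}
    return {
        "timeSeries": [{
            "metric": {
                "type": f"custom.googleapis.com/{metric_type}",
                "labels": {"source": "scaleway"},
            },
            # generic_node describes a machine outside Google Cloud.
            "resource": {
                "type": "generic_node",
                "labels": {
                    "project_id": project_id,
                    "location": "fr-par",
                    "namespace": "scaleway",
                    "node_id": node_id,
                },
            },
            "points": [{
                "interval": {"endTime": end},
                "value": {"doubleValue": value},
            }],
        }]
    }


body = build_time_series_body("central-monitoring-project", "scaleway/cpu_load", 0.42, "instance-1")
```

As with logging, the body would be sent with authenticated calls through Private Google Access over the VPN.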

DevOps

Many DevOps tools exist on the market. Let's take GitLab as an example and deploy Scaleway resources with GitLab runners.

For production use, I commonly recommend that customers use specific runners with jobs running in a GKE cluster.

In the following example, we illustrate the deployment of Scaleway resources and Kubernetes workloads using GitOps practices [3].

(Diagram: GitOps deployment of Scaleway resources and Kubernetes workloads with GitLab runners on GKE)

  • The GitLab runner job has a Kubernetes service account (KSA) bound to a Vault role that provides access to the Scaleway credentials [4],
  • The Scaleway resources are deployed using Terraform plans located in the infra repo,
  • A new Docker image is built after each git tag; the image is published to a centralised Docker registry on Google Cloud,
  • The Docker image version in the Kubernetes manifests is edited using Kustomize, and the env repo pipeline is triggered from the image repo pipeline,
  • The Kubernetes workloads are updated to the new Docker image version by a GitOps tool like ArgoCD,
  • The Kapsule cluster has the Storage Object Viewer permission, allowing it to pull images from Google Cloud.
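The Terraform part of the flow above could be sketched as a `.gitlab-ci.yml` fragment. This is only an illustration under stated assumptions: the runner tag, the Terraform image, and the Vault secret path are hypothetical, and the KSA-to-Vault binding is assumed to be configured as described in [4].

```yaml
stages:
  - plan
  - apply

variables:
  TF_ROOT: infra/

plan:
  stage: plan
  tags: [gke-runner]              # specific runner executing jobs in GKE (hypothetical tag)
  image: hashicorp/terraform:light
  script:
    # The job's KSA is bound to a Vault role that returns the
    # Scaleway credentials; the secret path is hypothetical.
    - export SCW_SECRET_KEY=$(vault kv get -field=secret_key secret/scaleway)
    - terraform -chdir=${TF_ROOT} init
    - terraform -chdir=${TF_ROOT} plan -out=plan.tfplan
  artifacts:
    paths: [${TF_ROOT}plan.tfplan]

apply:
  stage: apply
  tags: [gke-runner]
  image: hashicorp/terraform:light
  when: manual                    # gate the apply behind a manual approval
  script:
    - terraform -chdir=${TF_ROOT} apply plan.tfplan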

Cost

Since a segmented architecture is best suited to our case, we need to control network traffic so we don't incur excessive egress charges when one half of the application ends up on one cloud and the other half on the other [5].

  • There is no charge for ingress traffic between the two cloud providers [6][7]. However, there may be a charge for GCP resources that process ingress traffic, such as load balancers.
  • In some Scaleway services like Kapsule, we are not charged for egress traffic.

In the previous architectures we used two types of communication:

  • An HTTPS connection via the GCP load balancing service, using the internet NEG feature.
  • A VPN connection.

In both cases, we will be charged for GCP egress traffic [8][9].
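To make the egress consideration concrete, here is a back-of-the-envelope estimator. The per-GB rate is a placeholder to be replaced with the current GCP pricing for your connection type [8][9]; ingress is kept at zero, as noted above.

```python
def monthly_egress_cost(egress_gb, rate_per_gb):
    """Estimate the monthly inter-cloud traffic cost on the GCP side.

    Ingress to GCP is free; only egress (via the VPN or the load
    balancer) is billed per GB. The rate is a placeholder, not a
    quoted price.
    """
    return egress_gb * rate_per_gb


# Example: 500 GB/month leaving GCP towards Scaleway at a
# hypothetical $0.12/GB rate.
cost = monthly_egress_cost(500, 0.12)
print(f"Estimated egress cost: ${cost:.2f}/month")
```

Running the same estimate for each candidate architecture makes it easy to see how much cross-cloud chattiness the design can afford.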

Demo

We could implement any of the segmented architectures presented in these two posts as a proof of concept.

I chose the DevOps architecture because it is a common architecture used in multi-cloud (and because it's easy to explain 😊).

We will see in the next parts, step by step, how we can implement a DevOps platform in a multi-cloud context. The following architecture is a simplified version of the GitOps integration explained previously.

(Diagram: simplified multi-cloud GitOps architecture for the demo)

Let's split the demonstration into four parts.

In a multi-cloud environment, it's important to use the same technologies on each cloud.

As a Stacker, I have a favorite stack for deploying cloud infrastructure:

  • GitLab as CI/CD tool,
  • Terraform for infrastructure as code,
  • Kubernetes to deploy Docker images,
  • Vault to store external credentials,
  • ArgoCD as GitOps tool.

Conclusion

In this post, we discussed the monitoring/logging and DevOps architectures that can be applied to both cloud providers. We also talked about network traffic costs. We finished by laying the foundations of our POC.

In part 3, we will see how to build a DevOps platform in Google Cloud with GitLab and Kubernetes.

Documentation

[1] https://cloud.google.com/solutions/logging-on-premises-resources-with-blue-medora
[2] https://cloud.google.com/solutions/monitoring-on-premises-resources-with-blue-medora
[3] https://www.weave.works/blog/gitops-modern-best-practices-for-high-velocity-application-development
[4] https://www.vaultproject.io/docs/auth/kubernetes
[5] https://architectelevator.com/cloud/hybrid-multi-cloud/
[6] https://cloud.google.com/vpc/network-pricing#general
[7] https://www.scaleway.com/en/pricing/
[8] https://cloud.google.com/vpc/network-pricing#vpn-pricing
[9] https://cloud.google.com/vpc/network-pricing#lb
