It is very easy today to establish a connection between a container in Kubernetes and a relational database server: just create a SQL user and open a TCP connection. On Google Cloud Platform, the equivalent is connecting a container in a Google Kubernetes Engine (GKE) cluster to a Cloud SQL instance.
Several important points should be taken into account when setting up this connectivity:
Which network topology should we choose? How do we authenticate and authorize the connection to the Cloud SQL instance? Can a private Cloud SQL instance be exposed publicly?
Which architecture is the most efficient, maintainable, and scalable?
Cloud SQL supports the following scenarios for accessing a DB instance in a VPC:
- A Compute Engine instance in the same VPC
- A Compute Engine instance in a different VPC
- A client application through the internet
- A private network
The scenarios that concern us are the first two:
- GKE and Cloud SQL in the same VPC.
- GKE and Cloud SQL in different VPCs.
In the first scenario, there is direct communication between Kubernetes workloads and Cloud SQL instances.
In the second scenario, if private communication is required, a VPN connection must be established between the two VPCs.
Let's discover the possible architectures that could be used to implement each scenario.
In this architecture, our Cloud SQL instance is isolated in its own subnet and accessible through a public IP address only to the GKE Autopilot cluster that requires access to it. Pods reach Cloud SQL through the Cloud SQL Proxy.
In this architecture, our Cloud SQL instance is isolated in its own subnet and accessible through a private IP address only to the GKE Autopilot cluster that requires access to it. Pods reach Cloud SQL through the Cloud SQL Proxy.
In this architecture, our Cloud SQL instance is isolated in its own VPC. The two VPCs communicate using Cloud VPN, and pods reach Cloud SQL through the Cloud SQL Proxy.
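In all three architectures, the application never talks to the database endpoint directly: it connects to a Cloud SQL Auth Proxy running as a sidecar container in the same pod. As a minimal sketch (the image tag, application image, and instance connection name `my-project:europe-west1:my-instance` are placeholders):

```yaml
# Sketch of a pod running the Cloud SQL Auth Proxy as a sidecar.
# The app connects to 127.0.0.1:3306; the proxy handles TLS and IAM.
apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  containers:
    - name: app
      image: my-app:latest            # placeholder application image
      env:
        - name: DB_HOST
          value: "127.0.0.1"          # the proxy listens locally
        - name: DB_PORT
          value: "3306"
    - name: cloud-sql-proxy
      image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.8.0
      args:
        - "--private-ip"              # drop this flag for the public IP architecture
        - "--port=3306"
        - "my-project:europe-west1:my-instance"
      securityContext:
        runAsNonRoot: true
```

The `--private-ip` flag forces the proxy to use the instance's private address, which is what the private IP and Cloud VPN architectures rely on.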
Each architecture has its own advantages and disadvantages, but all apply network-isolation best practices for securing sensitive data in Cloud SQL.
Let's explore scenario 2.
In the scenario 2 architecture, network isolation is achieved using private services access. We can go further with the GKE Workload Identity add-on to build a pod-level defense-in-depth strategy at both the networking and authentication layers:
- We associate a Google service account with a Kubernetes service account. That service account then provides IAM permissions to the containers in any pod that uses it.
- We define network policies with rules that allow inbound and outbound network traffic to and from pods.
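As a sketch, the two layers above could look like the following manifests. All names, the project ID, and the CIDR range are placeholders; the Google service account would additionally need the `roles/cloudsql.client` role and a `roles/iam.workloadIdentityUser` binding for the Kubernetes service account.

```yaml
# Kubernetes service account bound to a Google service account
# via Workload Identity (authentication layer).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cloud-sql-ksa
  annotations:
    iam.gke.io/gcp-service-account: cloud-sql-gsa@my-project.iam.gserviceaccount.com
---
# Network policy restricting pod egress to the database range only
# (networking layer).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-cloud-sql
spec:
  podSelector:
    matchLabels:
      app: web-app        # placeholder label for the application pods
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.20.0.0/24   # placeholder: the private services access range
      ports:
        - protocol: TCP
          port: 3306
```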
The same security pattern could be implemented in the Cloud VPN scenario.
Now that we have a clear idea of the concepts, let's implement this architecture.
The overall architecture that we will implement during this series of articles is as follows:
During this section of the workshop:
- We will create a VPC with two subnets:
  - a web subnet for the GKE Autopilot cluster.
  - a data subnet for the Cloud SQL instance.
- A NAT gateway attached to the web subnet, but not to the data subnet, as Cloud SQL doesn't need access to the public internet.
- A GKE Autopilot cluster.
- A highly available Cloud SQL MySQL instance.
- A Kubernetes service account annotated with a Google service account that has the necessary permissions to connect to the Cloud SQL instance.
- The Cloud SQL Proxy.
- A web application: a Kubernetes Deployment that connects to our MySQL database.
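The Deployment ties these pieces together by running its pods under the annotated service account. A minimal sketch (the names and image are placeholders; the proxy sidecar shown earlier is omitted here for brevity):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      # Placeholder KSA annotated for Workload Identity; pods inherit
      # its IAM permissions to connect to Cloud SQL.
      serviceAccountName: cloud-sql-ksa
      containers:
        - name: app
          image: my-app:latest   # placeholder application image
```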
The series is divided into five parts. This introduction is the first; the remaining parts are:
- Configuring an isolated network in Google Cloud
- Creating a GKE Autopilot cluster using Terraform
- Securing sensitive data in Cloud SQL
- Securing the connectivity between a GKE application and a Cloud SQL database
In this first part, we discussed possible scenarios for securing communication between GKE workloads and Cloud SQL databases. In the next section, we'll implement our network stack using Terraform.