BobFerris

CockroachDB and GCP: Assuming Roles

Introduction

In a previous blog I detailed how we can use AUTH=implicit authentication to facilitate access for bulk operations such as backups, bulk imports, and changefeeds (change data capture) in CockroachDB. One of the drawbacks of AUTH=implicit is that we have to grant extra Cloud Storage permissions to the service account running our CockroachDB nodes in GKE or Compute Engine. This may violate security practices for some enterprises and does not adhere to the principle of least privilege.

New in CockroachDB v22.2, functionality has been added that lets bulk operations assume a service account role when accessing Cloud Storage. The official CockroachDB documentation for assuming roles and GKE workload identity can be found here. Let’s get started and see it in action.

Details

Let’s start out by creating a new bucket. I’ll name our bucket ferris-crl-backups. This will be the target for our backups.
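If you prefer the command line to the console, here is the same step as a rough gsutil sketch (the project name comes from the service account address used later in the backup command; the region is my own assumption):

# Create the backup target bucket (region chosen for illustration)
gsutil mb -p cockroach-ferris -l us-east1 gs://ferris-crl-backups/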

We’ll also create a new service account named ferris-cockroach-backups. We will give this service account permissions on our new Cloud Storage bucket, and it is the account that will be assumed by the service account used to run our CockroachDB cluster.
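A hedged CLI equivalent of this step (the display name is my own choice):

# Create the service account that will be assumed for backups
gcloud iam service-accounts create ferris-cockroach-backups \
    --project=cockroach-ferris \
    --display-name="CockroachDB Backups"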

Next we’ll need to create a new custom role. To follow Google best practices, we will only give this role the permissions necessary to run backups to Cloud Storage. We name the custom role CockroachDB Backups and assign it the three permissions required for CockroachDB backups:

storage.objects.create, storage.objects.get, and storage.objects.list
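As a CLI sketch, the same custom role could be created like this (the role ID CockroachDBBackups is my own naming; the console flow in the screenshot below is what I actually used):

# Custom role limited to the three permissions backups need
gcloud iam roles create CockroachDBBackups \
    --project=cockroach-ferris \
    --title="CockroachDB Backups" \
    --permissions=storage.objects.create,storage.objects.get,storage.objects.list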

[Screenshot: the CockroachDB Backups custom role with its three storage permissions]

Assign this new custom role to our new service account ferris-cockroach-backups.
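A command-line sketch of this grant, assuming we scope it to the bucket itself rather than the whole project:

# Grant the custom role to the backups service account on the bucket
gcloud storage buckets add-iam-policy-binding gs://ferris-crl-backups \
    --member="serviceAccount:ferris-cockroach-backups@cockroach-ferris.iam.gserviceaccount.com" \
    --role="projects/cockroach-ferris/roles/CockroachDBBackups"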

[Screenshot: the CockroachDB Backups role granted to ferris-cockroach-backups]

Finally, we need to grant the Compute Engine service account that runs our CockroachDB nodes the Service Account Token Creator role on the ferris-cockroach-backups service account. This is what allows our Compute Engine service account to generate OAuth tokens for, and therefore assume, ferris-cockroach-backups.
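As a CLI sketch (the Compute Engine service account below is the default one and only a placeholder - substitute whatever account your nodes actually run as):

# Allow the node service account to mint tokens for the backups service account
gcloud iam service-accounts add-iam-policy-binding \
    ferris-cockroach-backups@cockroach-ferris.iam.gserviceaccount.com \
    --member="serviceAccount:<PROJECT_NUMBER>-compute@developer.gserviceaccount.com" \
    --role="roles/iam.serviceAccountTokenCreator"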

[Screenshot: the Service Account Token Creator grant on ferris-cockroach-backups]

With all this configuration now complete, we can do the easy part - run a CockroachDB BACKUP statement to back up our movr database to our bucket. Note that we specify AUTH=implicit along with the ASSUME_ROLE parameter.

BACKUP DATABASE movr INTO 'gs://ferris-crl-backups/?AUTH=implicit&ASSUME_ROLE=ferris-cockroach-backups@cockroach-ferris.iam.gserviceaccount.com';

We get the following output indicating success!

        job_id       |  status   | fraction_completed | rows | index_entries | bytes
---------------------+-----------+--------------------+------+---------------+---------
  835404627160268803 | succeeded |                  1 | 2592 |             0 | 385657
(1 row)

How do we know it worked?

Let’s check whether we actually authenticated with the correct service account. To do this, we need to look at Google Cloud Logging. First, make sure that Data Access audit logging is enabled under IAM & Admin → Audit Logs; we need to enable Data Read and Data Write for Cloud Storage. When the CockroachDB backup job runs with Data Access logging enabled, we can view the results in Logs Explorer. In the screenshot below we can see that the authenticated principal is our ferris-cockroach-backups service account and that it was delegated from our Compute Engine service account, which is exactly the result we are looking for!

[Screenshot: Logs Explorer entry showing ferris-cockroach-backups as the authenticated principal, delegated from the Compute Engine service account]
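If you’d rather check from the command line, here is a rough gcloud logging sketch of the same lookup (the filter uses standard audit-log fields, but treat the exact filter as an assumption):

# Find Data Access entries authenticated as the backups service account
gcloud logging read \
    'logName="projects/cockroach-ferris/logs/cloudaudit.googleapis.com%2Fdata_access" AND protoPayload.authenticationInfo.principalEmail="ferris-cockroach-backups@cockroach-ferris.iam.gserviceaccount.com"' \
    --project=cockroach-ferris --limit=5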

Conclusion

In this blog we’ve walked through how to assume Service Account roles to run bulk operations such as backups in CockroachDB. Thanks for reading!
