Did you know that AWS IAM has built-in support for some well-known OIDC providers, including Google? Neither did I until I worked on a project that required GCP compute instances to securely access Amazon S3 buckets...
Introduction
Security is paramount in cloud native application design. This is especially true if you have resources running in multiple clouds that have interdependencies. I recently worked on such a project where GCP Compute Instances needed to access data in Amazon S3. For expediency, the GCP team requested static access keys, which I politely (I think!) refused. Instead, my team and I researched methods for GCP Compute Instances to use STS to dynamically generate temporary credentials to assume an AWS IAM Role. We lucked out when we found the AWS documentation covering Creating Roles for OIDC Federated Identity Providers.
Out of the box, AWS IAM supports OIDC Federation with Amazon Cognito, Amazon.com, Facebook, and Google. It is possible to configure support for other Identity Providers (IdPs) besides these four (IF they offer federated OIDC), but that requires a little extra setup.
After reading the documentation I put together the high-level list of items I needed to configure and where:
- GCP - create an IAM Service Account
- GCP - deploy a Compute Instance
- AWS - create an IAM Policy for Amazon S3 access
- AWS - create an IAM Role with Trust configured to allow a web identity to assume it
I enjoy pointing and clicking just about as much as you do. So, I used Pulumi to automate the creation of all four cloud resources along with an Amazon S3 bucket to validate that the policy worked properly.
Getting Started
First things first: ensure your AWS and GCP credentials are properly set in your development environment. Next, ensure Pulumi is installed and you can successfully run Pulumi's AWS Getting Started example. NOTE: I've been using Python recently. So, my example code will be in Python.
The Pulumi example only installs the AWS module for you, so you will need to add the GCP module as well. With Python, this can be accomplished by adding the GCP module to the requirements.txt file in the root of your Pulumi codebase:
pulumi>=3.0.0,<4.0.0
pulumi-aws>=6.0.2,<7.0.0
pulumi-gcp
Then, from the root of your Pulumi codebase, run pip install for the virtual environment generated during the Pulumi getting started example:
venv/bin/pip install -r requirements.txt
Finally, configure your GCP project in your Pulumi settings (NOTE: make sure you use the correct project ID when you execute this command!):
pulumi config set gcp:project mahnamahna-muppets-196911
With these prerequisites out of the way, let's take a look at the code!
Imports
In an attempt to be minimalist, I only import the modules that I need.
import json
import pulumi
from pulumi_aws import iam, s3
from pulumi_gcp import serviceaccount, compute
Create the GCP IAM Service Account
The first parameter is the name that Pulumi uses to identify the GCP resource it's creating. The account_id is what GCP will call the service account.
aws_service_access_sa = serviceaccount.Account("awsAccessServiceAccount",
account_id="aws-service-access",
display_name="AWS Service Access Service Account")
Create the GCP Compute Instance
With this Pulumi code, I create a simple GCP Compute Instance, and I make sure to assign the service account I created earlier.
aws_service_instance = compute.Instance("awsserviceaccess",
machine_type="e2-micro",
zone="us-east4-c",
boot_disk=compute.InstanceBootDiskArgs(
initialize_params=compute.InstanceBootDiskInitializeParamsArgs(
image="debian-cloud/debian-11",
),
),
network_interfaces=[compute.InstanceNetworkInterfaceArgs(
network="default",
access_configs=[compute.InstanceNetworkInterfaceAccessConfigArgs()],
)],
service_account=compute.InstanceServiceAccountArgs(
email=aws_service_access_sa.email,
scopes=["cloud-platform"],
))
Create the Amazon S3 bucket
This is very simple Pulumi code to create an Amazon S3 bucket. PLEASE do not use the force_destroy parameter in production; this is my demo environment, so I only used it for quick clean-ups.
# NOTE: Do not use the force_destroy option in PRODUCTION
# I only used this setting because this is a demo environment.
aws_bucket = s3.Bucket('gcp-sa-access-bucket', force_destroy=True)
Create the AWS IAM Policy
This Pulumi code creates the AWS IAM Policy that allows access to the Amazon S3 bucket. Note that in Pulumi, we need to use the apply method when accessing the string value of a resource's output. This is because Pulumi resource outputs behave like promises: their values aren't known until the resource has been created. You can read more in the Pulumi documentation. TL;DR: apply runs a function that inserts the resolved output value into your code at runtime.
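For instance, you can't drop an output straight into an f-string. Here's a minimal sketch (not part of the final program) of the difference:
# aws_bucket.id is an Output, not a plain string, so a plain f-string won't resolve it.
# broken_arn = f"arn:aws:s3:::{aws_bucket.id}"  # interpolates a placeholder object, not the name
bucket_arn = aws_bucket.id.apply(lambda name: f"arn:aws:s3:::{name}")  # resolves at deploy time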
In this example, I am using the apply method with an anonymous function (aka Python Lambda) to insert the bucket's id instead of defining a separate function for this single purpose.
aws_iam_policy_bucket_read = iam.Policy("gcp-sa-access-bucket-read", policy=aws_bucket.id.apply(lambda name: json.dumps(
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:ListBucket",
                    "s3:HeadBucket"
                ],
                "Resource": [
                    f"arn:aws:s3:::{name}",
                    f"arn:aws:s3:::{name}/*"
                ]
            }
        ]
    }
)))
Create the AWS IAM Role
I use another Python lambda here to insert the GCP Service Account's unique ID into the trust relationship policy. Notice that the Action here is sts:AssumeRoleWithWebIdentity and the Principal is accounts.google.com.
aws_s3_read_only_role = iam.Role("awsS3ReadRole",
    assume_role_policy=aws_service_access_sa.unique_id.apply(lambda uid: json.dumps(
        {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Principal": {
                        "Federated": "accounts.google.com"
                    },
                    "Action": "sts:AssumeRoleWithWebIdentity",
                    "Condition": {
                        "StringEquals": {
                            "accounts.google.com:aud": uid
                        }
                    }
                }
            ]
        }
    )),
    managed_policy_arns=[
        aws_iam_policy_bucket_read.arn,
    ]
)
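One optional addition of my own (the output names here are just my choices, not part of the original project): exporting the role ARN and bucket name as stack outputs makes them easy to retrieve later with pulumi stack output for the validation step.
# Optional: surface the values needed during validation as stack outputs.
pulumi.export("role_arn", aws_s3_read_only_role.arn)
pulumi.export("bucket_name", aws_bucket.id)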
Thanks for sticking with me this far. Now, the fun starts as we run the pulumi up command. All the resources are deployed in less than 30 seconds.
To validate that everything worked correctly, I SSH'ed to my GCP Compute Instance, installed the AWS CLI, and ran the following command to generate my temporary credentials:
aws sts assume-role-with-web-identity \
--role-arn arn:aws:iam::867530987654321:role/awsS3ReadRole-1969112 \
--role-session-name "AWSS3access" \
--web-identity-token $(gcloud auth print-identity-token) > assume-role-output
Finally, I exported my AWS access key, secret access key, and session token as environment variables and accessed my S3 content from my GCP Compute Instance!
export AWS_ACCESS_KEY_ID=ASIA**
export AWS_SECRET_ACCESS_KEY=82**
export AWS_SESSION_TOKEN=Fwo**
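If you would rather do the whole exchange from code instead of the CLI, here is a rough boto3 sketch of the same flow. Treat it as illustrative: the role ARN and bucket name are placeholders you would swap for the values from your own pulumi up output, and it assumes boto3 is installed on the instance.
import subprocess
import boto3

# Placeholder values -- substitute the role ARN and bucket name from your own deployment.
ROLE_ARN = "arn:aws:iam::867530987654321:role/awsS3ReadRole-1969112"
BUCKET = "gcp-sa-access-bucket-<suffix>"

# Same identity token the CLI example uses, fetched via gcloud on the instance.
token = subprocess.check_output(["gcloud", "auth", "print-identity-token"]).decode().strip()

# Exchange the Google identity token for temporary AWS credentials.
# assume_role_with_web_identity is an unsigned call, so no AWS credentials are required yet.
sts = boto3.client("sts", region_name="us-east-1")
creds = sts.assume_role_with_web_identity(
    RoleArn=ROLE_ARN,
    RoleSessionName="AWSS3access",
    WebIdentityToken=token,
)["Credentials"]

# Use the temporary credentials to list the bucket contents.
s3 = boto3.client(
    "s3",
    region_name="us-east-1",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", []):
    print(obj["Key"])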
Even though my project used a GCP Compute Instance, this workflow is applicable to any GCP compute resource that can assume an IAM Service Account, like your GKE Kubernetes Pods.
Wrapping Things Up...
In this blog post, we discussed how you can enable GCP compute resources to use the AWS IAM and STS services to securely access the AWS resources they have permissions for. We used Pulumi to deploy the GCP IAM Service Account and Compute Instance, as well as the Amazon S3 bucket and the AWS IAM Role and Policy. Finally, we demonstrated the GCP Compute Instance accessing the Amazon S3 bucket. If you would like to take a look at the code, I hosted it on GitHub.
If you found this article useful, let me know on LinkedIn!