Peter Nehrer


A Story of Rusty Containers, Queues, and the Role of Assumed Identity

How to access Amazon Web Services from Rust-based Kubernetes applications using Rusoto and IAM Roles for Service Accounts

I only recently made the acquaintance of Rusoto, while building a Rust service that consumed messages from Amazon SQS. I was instantly impressed with this AWS SDK for Rust -- well designed, modular, thoroughly documented, and even more comprehensive than typical AWS SDKs for other languages.

In order to set up my application to use SQS, all I had to do was add the rusoto_core and rusoto_sqs crates to my Cargo.toml and create an instance of SqsClient for my target region. Even better, I could set it up with the default region, which causes it to look for the configured region in the usual environment variables or profile configuration files:

use rusoto_core::Region;
use rusoto_sqs::SqsClient;

let client = SqsClient::new(Region::default());

For my containerized application this was ideal, since in Kubernetes it would get its static configuration through environment variables anyway, and I could easily supply them when running locally:

RUST_LOG=debug AWS_REGION=ca-central-1 QUEUE_URL=https://sqs.ca-central-1.amazonaws.com/1234567890/rusoto-sqs-k8s-demo cargo run

Supplying AWS credentials

Similar to how it determines the desired service region, the default Rusoto client uses a discovery algorithm to obtain its AWS credentials; it checks the environment variables, profile configuration files, and even the IAM instance profile if running on an EC2 instance.

If any of the supplied credentials are configured to expire periodically, this provider would even refresh them as needed!
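For reference, here's a minimal sketch of what that default behaviour looks like with the provider spelled out explicitly -- rusoto_credential's DefaultCredentialsProvider wraps this discovery chain in an auto-refreshing wrapper, which is effectively what SqsClient::new sets up for you:

use rusoto_core::{
    request::HttpClient,
    Region,
};

use rusoto_credential::DefaultCredentialsProvider;
use rusoto_sqs::SqsClient;

// Checks environment variables, then profile configuration files, then the
// EC2 instance metadata endpoint, and refreshes expiring credentials as needed.
let creds = DefaultCredentialsProvider::new()?;
let client = SqsClient::new_with(HttpClient::new()?, creds, Region::default());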

As expected, running my code locally was a breeze, since the client used my default AWS profile credentials, which were granted access to the SQS queue that I had set up for testing. I tried sending a test message to my queue using AWS CLI:

aws sqs send-message --queue-url https://sqs.ca-central-1.amazonaws.com/1234567890/rusoto-sqs-k8s-demo --message-body 'Hello world!'

Success! As my application's debug log indicated, it was able to receive and process the message:

{"timestamp":"Aug 23 23:58:09.780","level":"DEBUG","target":"rusoto_sqs_k8s_demo","fields":{"message":"Message { attributes: None, body: Some(\"Hello world!\"), md5_of_body: Some(\"86fb269d190d2c85f6e0468ceca42a20\"), md5_of_message_attributes: None, message_attributes: None, message_id: Some(\"d1ec1019-6398-4c75-b320-4a1e653e63ef\"), receipt_handle: Some(\"AQEBDrxJ...fnjddGjP8J6zvFKtw==\") }","log.target":"rusoto_sqs_k8s_demo","log.module_path":"rusoto_sqs_k8s_demo","log.file":"src/main.rs","log.line":129}}

To demonstrate the approach outlined in this article, I created a small project that's available on GitHub. Most of the code snippets and log outputs in this article are taken from this project. Please note that it is very minimalistic, focusing on the subject at hand and glossing over other important aspects, such as error handling, testing, and deployment. It could even use a more efficient Docker build! Perhaps more on that in another article...

Detour: Kubernetizing your application

Applications running in Kubernetes differ from typical CLI-based programs in a number of ways. For instance, they have different configuration and logging needs. Furthermore, they must support readiness and liveness probes, and gracefully shut down in response to the TERM signal. Finally, they must run in a Docker container.

Logging

Granted, log entries like the one shown earlier aren't as human-readable as those generated by e.g., the pretty_env_logger crate; however, because they are JSON, they are easily consumable by various application monitoring tools.

For Kubernetes applications that use tokio to implement asynchronous servers, I typically reach for the tracing and tracing-subscriber crates. This pair gives me the ability to use the log crate as I normally would -- for logging -- as well as async-aware tracing, should the need arise. As a bonus, it provides JSON-formatted log output.

use tracing_subscriber::{
    fmt::Subscriber as TracingSubscriber,
    EnvFilter as TracingEnvFilter,
};

...

TracingSubscriber::builder()
    .with_env_filter(TracingEnvFilter::from_default_env())
    .json()
    .init();
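With the subscriber installed, plain log macros (and tracing spans, should I need them) come out as JSON entries like the one shown earlier, courtesy of tracing-subscriber's default log-compatibility feature. A trivial, illustrative example:

use log::info;
use tracing::instrument;

// Both the span and the log record are emitted through the JSON
// subscriber configured above.
#[instrument]
async fn process_message(body: &str) {
    info!("processing message: {}", body);
}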

Configuration

For better user experience with a CLI-based application, I would normally utilize a crate such as gumdrop in order to support argument-based configuration. However, there is no need for that in Kubernetes as the application primarily receives its configuration through environment variables passed down to it through Docker, config maps mounted as files on filesystem volumes, or dynamically through an external service (e.g., Consul).

The config crate does the trick here -- it supports these types of configuration sources while allowing me to read configuration values into a typed struct:

use config::{
    Config,
    Environment,
};

use serde::Deserialize;
use std::net::SocketAddr;

#[derive(Debug, Deserialize)]
struct Settings {
    #[serde(default = "Settings::default_status_probe_addr")]
    status_probe_addr: SocketAddr,
    queue_url: String,
}

impl Settings {
    fn default_status_probe_addr() -> SocketAddr {
        "0.0.0.0:8080"
            .parse()
            .expect("default status probe address")
    }
}

...

let mut cfg = Config::new();
cfg.merge(Environment::new())?;

let settings: Settings = cfg.try_into()?;

Build info

For easier debugging, I like to have the application log its current version at startup. This is easily accomplished with the help of the built crate, which exposes various build-time data, such as the current git hash, as constants that can in turn be used to compose the version string:

fn version() -> String {
    format!(
        "{} {} ({}, {} build, {} [{}], {})",
        env!("CARGO_PKG_NAME"),
        env!("CARGO_PKG_VERSION"),
        built_info::GIT_VERSION.unwrap_or("unknown"),
        built_info::PROFILE,
        built_info::CFG_OS,
        built_info::CFG_TARGET_ARCH,
        built_info::BUILT_TIME_UTC,
    )
}
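In case you haven't used built before: the built_info constants come from a file generated at build time, which you include into your crate. A minimal setup looks roughly like this (the demo project may differ in the details, such as which cargo features of built it enables):

// build.rs
// (GIT_VERSION and BUILT_TIME_UTC require built's "git2" and "chrono"
// cargo features, respectively.)
fn main() {
    built::write_built_file().expect("failed to acquire build-time information");
}

// main.rs
// Pull the generated constants in as a module.
mod built_info {
    include!(concat!(env!("OUT_DIR"), "/built.rs"));
}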

Readiness and liveness probes

Kubernetes must be able to determine the health of each container in order to replace it in case the application runs into some unexpected trouble. To do that, the container can be configured with a readiness probe (checked upon startup) and a liveness probe (checked periodically), which tell Kubernetes when the container is ready to receive traffic and whether it remains alive and healthy, respectively.

For the demo application, I chose a simple TCP connection probe -- as long as the application accepts the kubelet's connection request on the specified port, the probe is deemed successful:

use futures::stream::StreamExt;
use tokio::net::TcpListener;

...

let mut status_listener = TcpListener::bind(&settings.status_probe_addr).await?;
let mut probes = status_listener.incoming();

...

while let Some(_) = probes.next().await {
    ...
}

Signals

It's a good practice to have your application handle POSIX signals, especially the TERM signal, which the container host sends to the application upon graceful shutdown. The application should use that opportunity to finish processing any outstanding requests and clean up any open resources:

use futures::stream::{
    SelectAll,
    StreamExt,
};
use tokio::signal::unix::{
    signal,
    SignalKind,
};

...

let mut signals = SelectAll::new();
signals.push(signal(SignalKind::interrupt()).expect("failed to register the interrupt signal"));
signals.push(signal(SignalKind::quit()).expect("failed to register the quit signal"));
signals.push(signal(SignalKind::terminate()).expect("failed to register the terminate signal"));
// ignore SIGPIPE
let _sigpipe = signal(SignalKind::pipe()).expect("failed to register the pipe signal");

...

if let Some(_) = signals.next().await {
    // Clean up and exit
    ...
}
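As for the clean-up itself, one possible pattern is to race the signal stream against the main processing loop, e.g. with tokio::select! -- a rough sketch, where process_messages is a hypothetical stand-in for the application's SQS polling loop:

tokio::select! {
    _ = signals.next() => {
        log::info!("shutdown signal received; finishing in-flight work");
        // let outstanding messages complete and release resources before exiting
    }
    res = process_messages(&client, &settings) => {
        res?;
    }
}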

Docker build

In order to run in Kubernetes, the application must be packaged as a Docker image. One of the easiest ways to accomplish this is to create a multi-stage Dockerfile that uses Rust MUSL Builder to build the application and Alpine Linux as the base of the target image.

Rust MUSL Builder comes pre-installed with the desired Rust toolchain. Furthermore, it produces statically linked builds, which come in handy when running the binaries in Alpine Linux.

After checking the formatting, running clippy and the unit tests, and finally producing the release build (with the desired maximum logging level), the Dockerfile copies the target binary into the Alpine Linux-based image:

RUN cargo fmt --all -- --check
RUN cargo clippy --all -- -D warnings
RUN cargo test --all

ARG debug

ENV BUILD_FEATURES=${debug:+"--features log-level-trace"}

RUN cargo build --release --no-default-features $BUILD_FEATURES

Lastly, for better process handling, I recommend installing tini and having it spawn the actual application process:

RUN apk --no-cache add tini
...
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["/app/rusoto-sqs-k8s-demo"]

Running the build takes a good while:

docker build -t ecliptical/rusoto-sqs-k8s-demo .
Sending build context to Docker daemon  140.3kB
Step 1/17 : FROM ekidd/rust-musl-builder:nightly-2020-08-15 AS build
 ---> c39cf12c752f
 ...
  ---> ea152bab848a
Successfully built ea152bab848a
Successfully tagged ecliptical/rusoto-sqs-k8s-demo:latest

Deploying into Kubernetes

After the resounding success of the application's trial run on my laptop, I was ready to press ahead and deploy it into Kubernetes!

Your organization's approach for deploying applications into Kubernetes will likely vary. For instance, one might use the Amazon Elastic Container Registry (ECR) to host the Docker images, and Helm to simplify the various deployment descriptors and the actual roll-out procedures. However, for demonstration purposes a GitHub Package Registry and plain Kubernetes deployment descriptors applied using kubectl will suffice.

Docker registry

To allow Kubernetes to deploy a Docker container, it must be able to download the specified image from a Docker registry. For the demo project, I set up a GitHub Package Registry at docker.pkg.github.com/ecliptical/rusoto-sqs-k8s-demo and pushed the tagged build into it:

docker tag ecliptical/rusoto-sqs-k8s-demo docker.pkg.github.com/ecliptical/rusoto-sqs-k8s-demo/rusoto-sqs-k8s-demo:v1
docker push docker.pkg.github.com/ecliptical/rusoto-sqs-k8s-demo/rusoto-sqs-k8s-demo:v1

Hopefully, your organization's CI/CD system takes care of running the build and pushing it to the Docker registry!

Deployment descriptor

In order to deploy the application into Kubernetes, kubectl needs a deployment.yaml file that describes the various aspects of the deployment, including the containers and their configuration. This is where you supply static values for the various environment variables either directly or as secrets. In practice, these are often managed by the SRE team or other authorized personnel:

...
containers:
- name: rusoto-sqs-k8s-demo
  image: "docker.pkg.github.com/ecliptical/rusoto-sqs-k8s-demo/rusoto-sqs-k8s-demo:v1"
  imagePullPolicy: IfNotPresent
...
  env:
  - name: "AWS_REGION"
    value: ca-central-1
  - name: "AWS_ACCESS_KEY_ID"
    valueFrom:
      secretKeyRef:
        name: rusoto-sqs-k8s-demo-secrets
        key: aws_access_key_id
  - name: "AWS_SECRET_ACCCESS_KEY"
    valueFrom:
      secretKeyRef:
        name: rusoto-sqs-k8s-demo-secrets
        key: aws_secret_access_key
  - name: "QUEUE_URL"
    valueFrom:
      secretKeyRef:
        name: rusoto-sqs-k8s-demo-secrets
        key: queue_url
  - name: "RUST_LOG"
    value: info
...
imagePullSecrets:
- name: regsecret
...

Secrets

For the above deployment descriptor to work, there must be a special regsecret and a generic rusoto-sqs-k8s-demo-secrets secret set up in the target namespace. For the demo, you can create the former like so (after substituting your own GitHub username and API token):

AUTH=$(echo -n YOUR_GITHUB_USERNAME:YOUR_GITHUB_API_TOKEN | base64)
echo '{"auths":{"docker.pkg.github.com":{"auth":"'${AUTH}'"}}}' | kubectl create secret -n demo generic regsecret --type=kubernetes.io/dockerconfigjson --from-file=.dockerconfigjson=/dev/stdin

And the latter (partially):

kubectl -n demo create secret generic rusoto-sqs-k8s-demo-secrets --from-literal=queue_url=https://sqs.ca-central-1.amazonaws.com/1234567890/rusoto-sqs-k8s-demo

For the actual application I asked the Operations Team to kindly set up the registry secret and issue a new set of AWS credentials for my new-fangled application with relevant permissions for the SQS queue in question...

Not so fast!

It turns out that issuing static AWS credentials (that is, the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY pair) to Kubernetes applications isn't a great idea! The preferred way is to have the application assume a designated IAM role, which can then be granted various permissions as needed. In contrast to access-key based credentials, which are issued to a user, IAM roles may be scoped specifically to the set of permissions that the application needs, thus improving your system's security posture through the principle of least privilege.

For "classic" applications running in EC2, this can be done through IAM instance profiles.

But what about Kubernetes?

Fine-grained IAM Roles for Service Accounts

Luckily, Kubernetes applications can take advantage of fine-grained IAM roles for service accounts. This approach combines Kubernetes' Role-Based Access Control (RBAC) with Amazon's Identity and Access Management (IAM). The details of this mechanism are somewhat involved -- it takes advantage of the fact that Kubernetes can issue projected service account tokens for pods. Since these are valid OIDC JWTs, Amazon's Security Token Service (STS) can use them for authentication thanks to its support for OIDC federation. Thus, a Kubernetes pod with a specific service account may be linked to an IAM role through STS. This ends up being relatively straightforward, particularly in Amazon's Elastic Kubernetes Service (EKS), since its control plane takes care of automatically provisioning, injecting, and periodically updating the necessary environment variables and projected filesystem volume. However, the same webhook-based mechanism can be implemented in other environments.

The bottom line -- with a little bit of additional Kubernetes configuration, my application can get access to automatically managed, periodically refreshed AWS credentials.

Creating an IAM policy

Your organization's policies will likely determine how to go about provisioning IAM roles and policies for your application. For this demo, you can create the policy with the AWS CLI:

aws iam create-policy --policy-name RusotoSQSK8sDemoConsumer --policy-document '{"Version": "2012-10-17", "Statement": [{"Effect": "Allow", "Action": ["sqs:DeleteMessage", "sqs:GetQueueUrl", "sqs:ChangeMessageVisibility", "sqs:DeleteMessageBatch", "sqs:ReceiveMessage", "sqs:GetQueueAttributes", "sqs:ChangeMessageVisibilityBatch"], "Resource": ["arn:aws:sqs:*:1234567890:*"]}]}'

This creates a RusotoSQSK8sDemoConsumer policy with enough permissions to receive and process messages in any SQS queue in the given account; the IAM role that carries this policy is created in a later step by eksctl.

OIDC provider setup

Depending on your particular environment, the steps for setting up an OpenID Connect provider for your Kubernetes cluster will vary. If using EKS, you can create the provider and configure your cluster to use it in one step with the help of eksctl:

eksctl utils associate-iam-oidc-provider --cluster rusoto-sqs-demo --approve
[ℹ]  eksctl version 0.26.0
[ℹ]  using region ca-central-1
[ℹ]  will create IAM Open ID Connect provider for cluster "rusoto-sqs-demo" in "ca-central-1"
[✔]  created

Kubernetes service account

Next, you need a Kubernetes service account annotated with the ARN of an IAM role that has the previously created policy attached; you can then assign this service account to your application pods.

Here again, eksctl makes this process a one-step operation:

eksctl create iamserviceaccount \
                --name rusoto-sqs-consumer \
                --namespace demo \
                --cluster rusoto-sqs-demo \
                --attach-policy-arn arn:aws:iam::123456789:policy/RusotoSQSK8sDemoConsumer \
                --approve
[ℹ]  eksctl version 0.26.0
[ℹ]  using region ca-central-1
[ℹ]  1 task: { 2 sequential sub-tasks: { create IAM role for serviceaccount "demo/rusoto-sqs-consumer", create serviceaccount "demo/rusoto-sqs-consumer" } }
[ℹ]  building iamserviceaccount stack "eksctl-rusoto-sqs-demo-addon-iamserviceaccount-demo-rusoto-sqs-consumer"
[ℹ]  deploying stack "eksctl-rusoto-sqs-demo-addon-iamserviceaccount-demo-rusoto-sqs-consumer"
[✔]  created serviceaccount "demo/rusoto-sqs-consumer"

Deployment descriptor changes

Finally, you can incorporate the required changes into your pods' deployment descriptor:

...
containers:
- name: rusoto-sqs-k8s-demo
...
  env:
  - name: "AWS_REGION"
    value: ca-central-1
  - name: "QUEUE_URL"
    valueFrom:
      secretKeyRef:
        name: rusoto-sqs-k8s-demo-secrets
        key: queue_url
  - name: "RUST_LOG"
    value: info
...
serviceAccountName: rusoto-sqs-consumer
securityContext:
  fsGroup: 65534
...

Two sets of changes stand out:

  1. You no longer need the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables -- they've been removed.
  2. The serviceAccountName property has been added to specify the desired service account, along with a securityContext section containing the fsGroup property -- this instructs Kubernetes to make mounted filesystem volumes group-owned by GID 65534 (nobody/nogroup in Linux), which allows the unprivileged application process to read the automatically injected web token file.

All in all, when the pod is deployed, the EKS control plane will automatically inject two new environment variables and mount a new filesystem volume containing the periodically refreshed OIDC JWT:

containers:
- name: rusoto-sqs-k8s-demo
...
  env:
  - name: AWS_ROLE_ARN
    value: arn:aws:iam::1234567890:role/eksctl-rusoto-sqs-demo-addon-iamserviceaccou-Role1-1UNI8CG3YVFKN
  - name: AWS_WEB_IDENTITY_TOKEN_FILE
    value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
...
  volumeMounts:
  - mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
    name: aws-iam-token
    readOnly: true
...
volumes:
- name: aws-iam-token
  projected:
    defaultMode: 420
    sources:
    - serviceAccountToken:
        audience: sts.amazonaws.com
        expirationSeconds: 86400
        path: token
...

Rusoto credentials, revisited

One outstanding issue remained -- how is the application going to find and use these injected credentials?

Rusoto is quite flexible -- the rusoto_credential crate allows me to implement my own credential providers, which could obtain and refresh my application's AWS credentials from an arbitrary external source. With this in mind I set out to investigate what it would take to read the injected AWS_WEB_IDENTITY_TOKEN_FILE and call STS's AssumeRoleWithWebIdentity to exchange the OIDC JWT for AWS role credentials. Whew!

Invariably, the trail of breadcrumbs led me to the rusoto_sts crate, which exposes the Amazon STS API. As I said before, Rusoto is modular and doesn't force you to package code that you won't need. After perusing the documentation for a bit in order to devise my plan of attack, there -- sitting unassumingly at the end of the list of exported structs -- I found the WebIdentityProvider.

I couldn't believe my luck!

With a single method call I could simply instantiate a different kind of credentials provider that would read the injected environment variables and files and make the appropriate STS calls to authenticate my AWS API calls:

use rusoto_core::{
    region::Region,
    request::HttpClient,
};

use rusoto_credential::AutoRefreshingProvider;
use rusoto_sqs::SqsClient;
use rusoto_sts::WebIdentityProvider;

...

let sqs_http = HttpClient::new()?;
let cred_provider = AutoRefreshingProvider::new(WebIdentityProvider::from_k8s_env())?;
let client = SqsClient::new_with(sqs_http, cred_provider, Region::default());
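For comparison, the hand-rolled alternative I had been bracing for would have looked roughly like the sketch below. It's simplified -- in particular, it glosses over how the STS client gets to make this call without pre-existing credentials, one of the details WebIdentityProvider handles for you -- and the role_session_name is just an illustrative value:

use rusoto_core::Region;
use rusoto_sts::{
    AssumeRoleWithWebIdentityRequest,
    Sts,
    StsClient,
};

use std::{env, fs};

async fn assume_role_manually() -> Result<(), Box<dyn std::error::Error>> {
    // The EKS control plane injects both of these into the pod.
    let role_arn = env::var("AWS_ROLE_ARN")?;
    let token_file = env::var("AWS_WEB_IDENTITY_TOKEN_FILE")?;
    let web_identity_token = fs::read_to_string(token_file)?;

    let sts = StsClient::new(Region::default());
    let resp = sts
        .assume_role_with_web_identity(AssumeRoleWithWebIdentityRequest {
            role_arn,
            role_session_name: "rusoto-sqs-k8s-demo".into(),
            web_identity_token,
            ..Default::default()
        })
        .await?;

    // The response carries a temporary access key, secret key, session token,
    // and expiration -- which would then have to be refreshed periodically.
    println!("credentials expire at {:?}", resp.credentials.map(|c| c.expiration));
    Ok(())
}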

Onward into Kubernetes, this time for real

With all the required changes in place, and the Docker image rebuilt and pushed into the registry, I was finally able to deploy the application. For the demo application, the equivalent procedure is simply:

kubectl apply -f deployment.yaml
deployment.apps/rusoto-sqs-k8s-demo created

After receiving similarly positive yet rather anti-climactic output, I tailed the pods' logs to see if another test message sent to the target SQS queue would be picked up:

kubectl -n demo logs -f -l app.kubernetes.io/name=rusoto-sqs-k8s-demo
{"timestamp":"Aug 23 23:57:51.508","level":"INFO","target":"rusoto_sqs_k8s_demo","fields":{"message":"rusoto-sqs-k8s-demo 0.1.0 (4b04653, release build, linux [x86_64], Sun, 23 Aug 2020 19:12:11 +0000)","log.target":"rusoto_sqs_k8s_demo","log.module_path":"rusoto_sqs_k8s_demo","log.file":"src/main.rs","log.line":189}}
{"timestamp":"Aug 23 23:57:51.488","level":"INFO","target":"rusoto_sqs_k8s_demo","fields":{"message":"rusoto-sqs-k8s-demo 0.1.0 (4b04653, release build, linux [x86_64], Sun, 23 Aug 2020 19:12:11 +0000)","log.target":"rusoto_sqs_k8s_demo","log.module_path":"rusoto_sqs_k8s_demo","log.file":"src/main.rs","log.line":189}}
{"timestamp":"Aug 23 23:57:51.690","level":"INFO","target":"rusoto_sqs_k8s_demo","fields":{"message":"rusoto-sqs-k8s-demo 0.1.0 (4b04653, release build, linux [x86_64], Sun, 23 Aug 2020 19:12:11 +0000)","log.target":"rusoto_sqs_k8s_demo","log.module_path":"rusoto_sqs_k8s_demo","log.file":"src/main.rs","log.line":189}}

{"timestamp":"Aug 23 23:58:09.780","level":"INFO","target":"rusoto_sqs_k8s_demo","fields":{"message":"Message { attributes: None, body: Some(\"Hello world!\"), md5_of_body: Some(\"86fb269d190d2c85f6e0468ceca42a20\"), md5_of_message_attributes: None, message_attributes: None, message_id: Some(\"d1ec1109-6398-4c75-b032-4a1e6536e3ef\"), receipt_handle: Some(\"AQEBDwfG...fnjddGjP8J6zvFKtw==\") }","log.target":"rusoto_sqs_k8s_demo","log.module_path":"rusoto_sqs_k8s_demo","log.file":"src/main.rs","log.line":129}}

Sweet victory!

Have your token-based identity and eat it, too

Ultimately, I decided to support both injected and static AWS credentials in order to make it easier to run the app locally. If the injected credentials are available, then let's use those:

use rusoto_core::{
    region::Region,
    request::HttpClient,
    Client as AwsClient,
};

use rusoto_credential::AutoRefreshingProvider;
use rusoto_sqs::SqsClient;
use rusoto_sts::WebIdentityProvider;
use std::env::var_os;

...

let token_file = var_os("AWS_WEB_IDENTITY_TOKEN_FILE");
let role = var_os("AWS_ROLE_ARN");

let aws_client =
    if token_file.map_or(true, |v| v.is_empty()) || role.map_or(true, |v| v.is_empty()) {
        AwsClient::shared()
    } else {
        let sqs_http = HttpClient::new()?;
        let cred_provider = AutoRefreshingProvider::new(WebIdentityProvider::from_k8s_env())?;
        AwsClient::new_with(cred_provider, sqs_http)
    };

let client = SqsClient::new_with_client(aws_client, Region::default());

Conclusion

Even with Kubernetes at the helm, navigating the cloudy skies can become an arduous endeavor, especially when faced with complex, sensitive issues such as application security and controlling access to AWS resources.

Thankfully, Rusoto provides an easy way to tap into Amazon's support for IAM roles for Kubernetes service accounts, which bridge the Kubernetes and AWS worlds in terms of security and access control.

I am grateful for and continue to be amazed by all the generous contributions of the open-source Rust community, which make it possible for me and many others to build applications using this powerful, modern language platform.

To follow the examples in this article please see the accompanying project on GitHub.


Splash photo by Rinson Chory on Unsplash
