
John Potter


SQS & Kubernetes Pods: The Quick and Dirty Guide to Read/Write Permissions

So, you've got some containers running in Kubernetes and you want them to talk to an SQS queue? You're in the right place. This guide will show you how to give your Kubernetes pods the keys to the SQS kingdom—read and write permissions, to be exact.

Prerequisites
Step 1: Create an SQS Queue
Step 2: IAM Roles and Permissions
Step 3: Kubernetes Service Account
Step 4: Deploy Your Pods
Step 5: Verify Access
Step 6: Troubleshooting

Prerequisites

AWS Account:

  • If you don't have one, sign up. You'll be using AWS for the SQS part.

Kubernetes Cluster:

  • Make sure you've got a cluster up and running. You can use cloud services like AWS EKS, GCP's GKE, or do it the old-school way on your own machines.

kubectl Installed:

  • This is the command-line tool for Kubernetes. You'll need it for deploying and managing your pods.

AWS CLI Installed:

  • Useful for setting up and managing SQS and IAM roles.

Basic Know-How:

  • You should be familiar with basic Kubernetes concepts like pods, deployments, and service accounts. Some AWS knowledge wouldn't hurt either.

Editor:

  • Any text editor for writing YAML files for Kubernetes and JSON policies for AWS.

Step 1: Create an SQS Queue

Let's create an SQS queue that'll hold our messages or jobs.

Log in to AWS Console:

  • Open your browser, head to the AWS Console, and log in.

Navigate to SQS:

  • In the "Services" dropdown, find "SQS" and click on it.

Create New Queue:

  • Hit the "Create New Queue" button.

Choose Queue Type:

  • You'll get two types—Standard and FIFO. Pick one based on your needs.

Name the Queue:

  • Give your queue a unique name.

Configure Settings:

  • You'll see some optional settings like message retention and delivery delay. Adjust these as needed.

Set Permissions:

  • By default, only the account owner has full access. You can change this if you need to.

Review and Create:

  • Once you're happy with the settings, click "Create Queue".

Grab the URL:

  • After creating, you'll get a URL for your queue. Save this; you'll need it later.
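
Prefer the CLI?

  • If you'd rather skip the console, the AWS CLI can do the same thing. A quick sketch (the queue name here is a placeholder; use whatever you named yours):
aws sqs create-queue --queue-name my-sqs-queue
aws sqs get-queue-url --queue-name my-sqs-queue

  • The second command prints the queue URL you'll need later.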


Step 2: IAM Roles and Permissions

IAM roles determine who gets to do what in the AWS sandbox. They act as a set of keys that you give to your services or users to let them access other AWS services like SQS. Setting up IAM roles defines what each part of your setup is allowed to do. Stick around to see how we can create one specifically for our Kubernetes pods to read and write to SQS.

Create an IAM Role for Kubernetes

Now, let's get our hands dirty and create an IAM role specifically tailored for our Kubernetes pods, so they can chat with SQS.

Log into AWS Console:

  • If you're not already there, log in.

Go to IAM:

  • Navigate to the IAM section from the "Services" dropdown.

Roles in the Sidebar:

  • On the left sidebar, click "Roles," then hit the "Create role" button.

Select Service:

  • Choose "EKS" if you're using AWS's Kubernetes service, or "EC2" if you're running Kubernetes on EC2 instances. Hit "Next."

Skip Permissions:

  • For now, skip the permissions tab and hit "Next."

Name the Role:

  • Give your role a name and a description if you like. Then click "Create role."
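
Heads up on the trust policy: if you're going to link this role to a Kubernetes service account with IRSA (which is what the eks.amazonaws.com/role-arn annotation in Step 3 relies on), the role needs a trust policy that trusts your EKS cluster's OIDC provider rather than the EKS or EC2 service itself. A minimal sketch, assuming the service account will live in the default namespace, with the account ID, region, and OIDC provider ID left as placeholders:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::[Your-AWS-Account-ID]:oidc-provider/oidc.eks.[Region].amazonaws.com/id/[OIDC-ID]"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.[Region].amazonaws.com/id/[OIDC-ID]:sub": "system:serviceaccount:default:my-sqs-service-account"
        }
      }
    }
  ]
}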

Attach Policies for SQS Read/Write

Next up, we'll attach the right permissions to our IAM role so our Kubernetes pods can read from and write to our SQS queue.

Find Your New Role:

  • Back in the "Roles" list, find the role you just created and click on it.

Attach Policies:

  • Click the "Attach policies" button.

Search for SQS:

  • In the search bar, type "SQS" to filter the policies.

Select Policies:

  • Choose a policy that grants read and write access to SQS. The managed "AmazonSQSFullAccess" policy works for a quick test, but for finer control you can create a custom policy scoped to just your queue (see the sketch below).

Attach:

  • After selecting, click the "Attach policy" button.
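
  • If you go the custom-policy route, a minimal least-privilege policy that only allows sending and receiving on a single queue might look like this (region, account ID, and queue name are placeholders):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:SendMessage",
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes",
        "sqs:GetQueueUrl"
      ],
      "Resource": "arn:aws:sqs:[Region]:[Your-AWS-Account-ID]:[Your-Queue-Name]"
    }
  ]
}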

Note: Avoid overly permissive policies like AmazonSQSFullAccess or AmazonS3FullAccess. These give more access than needed, which could be risky. Stick to the principle of least privilege—only grant what's necessary for the task at hand.

  • IAM Wildcards: Avoid wildcards (*) in the Action or Resource fields of your IAM policies; they grant far broader access than a pod reading and writing one queue needs.

  • Root User: Never attach policies to the root AWS account. Always use IAM roles or specific users.

  • Open Security Groups: Don't allow inbound traffic from 0.0.0.0/0 unless necessary for the application.

  • Public Access: Don't make your SQS queue or other resources public.

  • Hardcoded Credentials: Never put AWS credentials directly in code or containers. Use roles and environment variables.

  • Unused Policies: Regularly review and remove unused IAM policies and roles.

Step 3: Kubernetes Service Account

Now that our IAM role is all set, let's switch gears to Kubernetes and create a service account. This will be the glue that connects our pods to the AWS permissions we just set up.

Create a Kubernetes Service Account

Open Terminal:

  • Fire up your terminal where kubectl is configured to interact with your cluster.

Create YAML File:

  • Make a new YAML file, say my-service-account.yaml, and add the following content to define your service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-sqs-service-account

Apply the YAML:

  • Run the command kubectl apply -f my-service-account.yaml to create the service account in your cluster.

Link the IAM Role to Service Account

With our Kubernetes service account in place, it's time to link it to the IAM role we created earlier. This is the magic step that lets our pods access SQS.

AWS Annotate:

  • You need to annotate the service account with the IAM role's ARN. Update your YAML to include an annotations field:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-sqs-service-account
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::[Your-AWS-Account-ID]:role/[Your-IAM-Role-Name]

Update the Service Account:

  • Re-apply the updated YAML with kubectl apply -f my-service-account.yaml. Your Kubernetes service account is now linked to the IAM role, granting your pods permission to interact with SQS.
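
Verify the Link:

  • To double-check, inspect the service account and confirm the role ARN annotation shows up:
kubectl get serviceaccount my-sqs-service-account -o yaml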

Step 4: Deploy Your Pods

Alright, we're at the finish line for setup: deploying your Kubernetes pods. We'll create a deployment file, tie it to our service account, and then launch the whole shebang. After this, your pods should be up and running, ready to interact with SQS.

Create a Kubernetes Deployment File

First up, let's whip up a Kubernetes deployment file. This is like the recipe that tells Kubernetes how to cook up your pods.

Open Text Editor:

  • Pop open your favorite text editor and create a new file called my-pod-deployment.yaml.

Add YAML Content:

  • Put in the basic structure for a Kubernetes Deployment, and specify the service account you created. Here's a sample:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-sqs-pod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-sqs-app
  template:
    metadata:
      labels:
        app: my-sqs-app
    spec:
      serviceAccountName: my-sqs-service-account  # The service account you created
      containers:
      - name: my-container
        image: my-image

Include the Service Account

Service Account Field:

  • Make sure you have the serviceAccountName field set to the name of your service account. (This is shown in the sample YAML above).

Deploy It

Save File:

  • Save the YAML file once you're happy with it.

Run kubectl:

  • Open your terminal and run kubectl apply -f my-pod-deployment.yaml to kick off the deployment.
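
Check It Came Up:

  • It's worth confirming the pods are running and that the IAM role was wired in. On EKS with IRSA, the mutating webhook injects AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE into each container, so a quick look at the environment tells you the link took (pod name is a placeholder):
kubectl get pods -l app=my-sqs-app
kubectl exec [Your-Pod-Name] -- env | grep AWS_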

Note: Linking a Kubernetes Service Account to an AWS IAM role is key for a couple of reasons:

  • Security: It allows your Kubernetes pods to securely access AWS services like SQS without storing AWS credentials in your cluster.

  • Ease of Management: When you update the IAM role, the changes get applied automatically to all pods using the linked service account.

  • Scoped Access: You can fine-tune what resources the pods can interact with in AWS, right down to specific SQS queues or S3 buckets.

  • Audit and Monitoring: Using IAM roles makes it easier to track which services are accessing what resources, aiding in debugging and monitoring.

Step 5: Verify Access

Let's confirm that everything's working as it should:

Test Read/Write to SQS

Exec into Pod:

  • First, get into one of your running pods with kubectl exec -it [Your-Pod-Name] -- /bin/sh.

Install AWS CLI:

  • If it's not already there, install the AWS CLI inside the pod so you can talk to SQS. The command below assumes a Debian/Ubuntu-based image; adjust for your base image (e.g. apk on Alpine, yum on Amazon Linux).
apt update && apt install -y awscli

Set the Region:

  • Because the pod picks up credentials from the linked IAM role via the service account, you shouldn't need to enter access keys (and per the note above, you shouldn't). Just set a default region, e.g. aws configure set region us-east-1, or export AWS_REGION.
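
Check the Identity:

  • To confirm the pod is actually assuming your IAM role (rather than some other set of credentials), ask STS who you are. The ARN in the output should include the role name you created earlier:
aws sts get-caller-identity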

Test Write:

  • Try sending a message to your SQS queue.
aws sqs send-message --queue-url [Your-Queue-URL] --message-body "Hello, SQS!"

Check Message:

  • Make sure the message was sent by peeking into your SQS queue in the AWS Console.

Test Read:

  • Now, let's try reading that message back.
aws sqs receive-message --queue-url [Your-Queue-URL]

Verify Output:

  • You should see your message in the output, confirming that read/write access is working. And that's how you check if your pods can read from and write to SQS. If all steps work, you're good to go!

Step 6: Troubleshooting

Now that we've set everything up, let's talk about what could go wrong. Here's your quick guide to common issues you might face and how to fix them.

Common Errors You Might Run Into

Even the best-laid plans can hit some snags. Here's a rundown of common errors you might stumble upon.

Pods Not Starting:

  • If your pods are stuck in a "Pending" state, it might be a resource issue.

IAM Role Errors:

  • Errors like "Unable to assume role" point to an IAM setup mistake.

SQS Permission Errors:

  • If you see errors related to permissions when trying to read/write to SQS, it's likely a policy issue.

Network Issues:

  • Timeouts or connectivity errors could be due to network policies or VPC settings.

How to Fix Them

Got an error? Don't sweat it. Here's how to troubleshoot and get back on track.

Resource Issues:

  • Check your cluster resources and either scale your cluster or reduce the pod requirements.

IAM Mistakes:

  • Revisit your IAM role and make sure it's correctly attached to your service account and pods.

Policy Fixes:

  • Double-check the policies attached to your IAM role. Make sure they grant access to your SQS queue.

Network Troubleshoot:

  • Look into your VPC and network policy settings in both AWS and Kubernetes. Make adjustments as needed.
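
Handy Commands:

  • A few commands that tend to surface most of the issues above (pod name and queue URL are placeholders):
kubectl describe pod [Your-Pod-Name]          # Pending reasons, scheduling and resource errors
kubectl logs [Your-Pod-Name]                  # Application-level AWS errors
kubectl get serviceaccount my-sqs-service-account -o yaml    # Confirm the role-arn annotation is still there
aws sqs get-queue-attributes --queue-url [Your-Queue-URL] --attribute-names All    # Confirm the queue is reachable with your role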

Conclusion

And there you have it—your Kubernetes pods and SQS are now on speaking terms.
