Rajan Panchal

How to give access to AWS resources without creating 100s of IAM Users?

This post was originally published at https://www.rajanpanchal.net/aws-sts-example/

Scenario:

Imagine you are a solution architect at a company with hundreds of sales employees, and you are migrating from on-premises to the AWS Cloud. You want to keep using the existing employee authentication system, and you want to store the files that employees use in their day-to-day work on S3. You don't want to make the S3 bucket public, since that would expose all the files to everybody.
You have two options:

  1. Create IAM login credentials with S3 bucket access for each of the employees. With this, users have to use two different logins: one to access their existing system and another to access the S3 files.

  2. Use AWS Security Token Service (STS) to assume a role with S3 access and use the temporary credentials it returns to give access to the files. Users still authenticate with their existing system.

In this post, we will explore and implement option #2. Please note that we are building this example on top of the previous post.

About Security Token Service (STS)

AWS Security Token Service (AWS STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate. You can use the AssumeRole action on STS, which returns a set of temporary security credentials that you can use to access AWS resources you might not normally have access to. These temporary credentials consist of an access key ID, a secret access key, and a session token. Typically, you use AssumeRole within your account or for cross-account access. In our case, we are using AssumeRole within the same account.
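To make this concrete, here is a minimal sketch of an AssumeRole call with boto3 and the shape of its response. The role ARN and session name here are placeholders for illustration; later in this post the real ARN comes from a Lambda environment variable.

import boto3

sts = boto3.client('sts')

# Placeholder role ARN and account ID, for illustration only
response = sts.assume_role(
    RoleArn='arn:aws:iam::123456789012:role/S3ReadOnlyAccessAssumeRole',
    RoleSessionName='demo-session',
    DurationSeconds=900  # temporary credentials expire; 900 seconds is the minimum allowed
)

creds = response['Credentials']
# creds holds AccessKeyId, SecretAccessKey, SessionToken, and an Expiration timestamp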

How to set up users and roles?

In our case, we are going to create a role called S3ReadOnlyAccessAssumeRole. As the name suggests, it has only an S3 read access policy. We will also create a trust policy and attach it to this S3 role. The trust policy will allow this role to be assumed by our Lambda execution role. Here is what the SAM template looks like for this role.

IamS3Role:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              AWS: !GetAtt ShowFilesFunctionRole.Arn
            Action:
              - 'sts:AssumeRole'
      Description: 'Read-only S3 role for Lambda to assume at runtime'
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
      RoleName: S3ReadOnlyAccessAssumeRole

In the above snippet, the AssumeRolePolicyDocument attribute specifies the trust policy that allows the Lambda execution role, identified by the principal AWS: !GetAtt ShowFilesFunctionRole.Arn, to assume the role to which the policy is attached, i.e. S3ReadOnlyAccessAssumeRole. The ManagedPolicyArns attribute specifies the policy for S3ReadOnlyAccessAssumeRole that allows read-only access to S3 buckets.
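For reference, the trust policy that CloudFormation renders from the template above is a plain JSON document along these lines. The account ID is a placeholder, and the actual execution role ARN in your stack will carry a generated suffix:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/ShowFilesFunctionRole"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}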

Lambda Handler

Now, let's write our Lambda handler that will use this role. Here is the SAM configuration:

  Origin:
    Type: String
    Default: https://stsexamplebucket.s3.us-east-2.amazonaws.com
  FilesBucket:
    Type: String
    Default: s3-sales-rep

  ApiGatewayShowFilesApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: Prod
      Auth:
        UsagePlan:
          CreateUsagePlan: PER_API
          Description: Usage plan for this API
          Quota:
            Limit: 500
            Period: MONTH
          Throttle:
            BurstLimit: 100
            RateLimit: 50
  ShowFilesFunction:
    Type: AWS::Serverless::Function
    Properties:
      Environment:
        Variables:
          userTable: !Ref myDynamoDBTable
          s3role: !GetAtt IamS3Role.Arn
          origin: !Sub ${Origin}
          filesBucket: !Sub ${FilesBucket}
      CodeUri: Lambda/
      Handler: showfiles.lambda_handler
      Runtime: python3.8
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref myDynamoDBTable
      Events:
        getCounter:
          Type: Api
          Properties:
            Path: /showFiles
            Method: GET
            RestApiId: !Ref ApiGatewayShowFilesApi

Here we define a couple of parameters that will be set as environment variables for the Lambda. With Origin we specify the origin domain for CORS; it is our S3 bucket's virtual-hosted-style URL. FilesBucket is the bucket where the files are stored. The serverless function definition uses showfiles.lambda_handler as the Lambda handler, and the function has permissions to use the DynamoDB table. We also create an API for this Lambda with the path /showFiles.

Let's see what we do in the Lambda handlers. We modified the login.py Lambda handler from the previous post to set a cookie once the user is authenticated. This is completely optional and not strictly required for STS to work, but you need some way to identify that the user is already authenticated.

    if decryptedPass == pwd:
        token = secrets.token_hex(16)
        response = table.update_item(
            Key={
                'userid': uname
            },
            AttributeUpdates={
                'token': {
                    'Value': token,
                }
            }
        )

        return {
            'statusCode': 200,
            'headers': {
                'Set-Cookie': 'tkn=' + uname + '&' + token + ';Secure;SameSite=None;HttpOnly;Domain=.amazonaws.com;Path=/',
                'Content-Type': 'text/html'
            },
            'body': '<html><head><script>window.location.href = \'' + os.environ['showFilesUrl'] + '\'</script></head><body>Hello</body></html>'
        }
    else:
        response = {'status': 'Login Failed'}
        return {
            'statusCode': 200,
            'body': json.dumps(response)
        }

When the user submits the username and password, the login Lambda handler authenticates the user, stores a unique token in the DB, and sets a cookie in the response along with a redirect to the showFiles.html page hosted in the S3 bucket. On page load, the browser changes the location to the showFiles URL, which triggers the showFiles Lambda handler defined in showfiles.py.

showFiles.html

<html>
<head>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/semantic-ui/2.4.1/semantic.min.css" integrity="sha512-8bHTC73gkZ7rZ7vpqUQThUDhqcNFyYi2xgDgPDHc+GXVGHXq+xPjynxIopALmOPqzo9JZj0k6OqqewdGO3EsrQ==" crossorigin="anonymous" />
<script
  src="https://code.jquery.com/jquery-3.1.1.min.js"
  integrity="sha256-hVVnYaiADRTO2PzUGmuLJr8BLUSjGIZsDYGmIJLv2b8="
  crossorigin="anonymous"></script>

<script src="https://cdnjs.cloudflare.com/ajax/libs/semantic-ui/2.4.1/semantic.min.js"></script>
</head>
<body>

<div class="ui raised very text container">
<h1 class="ui header">File Access System</h1>
<i class="folder open icon"></i></i><div class="ui label">Files</div>
<div id="files" >Loading..</div>
</div>
</body>
<script>

fetch("https://9nimlkmz74.execute-api.us-east-2.amazonaws.com/Prod/showFiles/", {
  credentials: 'include'
})
  .then(response => response.text())
  .then((body) => {
    var files="";
    var obj =  JSON.parse(body)
    for (let i = 0; i < obj.length; i++) {
            files = files + "<i class='file alternate outline icon'></i><a href='#'>&nbsp;&nbsp;" + obj[i] + "</a>"
    }
    document.getElementById("files").innerHTML= files
  })
  .catch(function(error) {
    console.log(error); 
  });

</script>
</html>

We call the showFiles API, which gets the list of files from the S3 bucket and displays them on the page.

showFiles.py

import json
import logging
import boto3
import os

log = logging.getLogger()
log.setLevel(logging.INFO)

# Returns login cookie information: userid and unique token
def getLoginCookie(cookies):
    data = {}
    for x in cookies:
        keyValue = x.split('=')
        # Only the 'tkn' cookie carries the userid and token pair
        if keyValue[0].strip() == 'tkn':
            tknvalues = keyValue[1].split('&')
            data['uid'] = tknvalues[0]
            data['tkn'] = tknvalues[1]
    # Return after scanning all cookies, not inside the loop
    return data

# Verifies the unique token saved in the database against the one in the request
def verifyLogin(data):
    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table(os.environ['userTable'])
    response = table.get_item(Key={'userid': data['uid']})
    token = response['Item'].get('token')
    return token == data['tkn']

# Returns list of files from bucket using STS    
def getFilesList():
    sts_client = boto3.client('sts')

    # Call the assume_role method of the STSConnection object and pass the role
    # ARN and a role session name.
    assumed_role_object=sts_client.assume_role(
        RoleArn=os.environ['s3role'],
        RoleSessionName="AssumeRoleSession1"
    )

    # From the response that contains the assumed role, get the temporary 
    # credentials that can be used to make subsequent API calls
    credentials=assumed_role_object['Credentials']

    # Use the temporary credentials that AssumeRole returns to make a 
    # connection to Amazon S3  
    s3_resource=boto3.resource(
        's3',
        aws_access_key_id=credentials['AccessKeyId'],
        aws_secret_access_key=credentials['SecretAccessKey'],
        aws_session_token=credentials['SessionToken'],
    )

    bucket = s3_resource.Bucket(os.environ['filesBucket'])
    files=[]
    for obj in bucket.objects.all():
        files.append(obj.key)
    return files


def lambda_handler(event, context):
    headers = event.get("headers")
    cookies = headers['Cookie'].split(";")
    data = getLoginCookie(cookies)

    # Reject the request up front if the token does not match the one in the DB
    if not verifyLogin(data):
        return {
            'statusCode': 401,
            'headers': {
                'Access-Control-Allow-Origin': os.environ['origin'],
                'Access-Control-Allow-Credentials': 'true'
            },
            'body': json.dumps('Unauthorized')
        }

    response = getFilesList()

    return {
        'statusCode': 200,
        'headers': {
            'Content-Type': 'application/json',
            'Access-Control-Allow-Origin': os.environ['origin'],
            'Access-Control-Allow-Credentials': 'true'
        },
        'body': json.dumps(response)
    }

Focus on the lambda_handler function here. We first get the cookie and verify the login; if verification fails, we return a 401. If verified, we call the getFilesList function, where the magic of STS happens. We get the ARN of the role to be assumed from the Lambda environment variables. The assume_role call returns credentials that contain an access key ID, a secret access key, and a session token; these are temporary and expire, by default after an hour. You can use these credentials to get an S3 resource and access the bucket. We build the list of files as an array and send it in the response.

You can find the full SAM template.yml and the rest of the code for this here:
https://github.com/rajanpanchal/aws-kms-sts

Before you run sam deploy, create an S3 bucket that will store the HTML files; in our case, it's stsexamplebucket. We use URLs from this bucket in our SAM template. On deploy, SAM will generate output with three URLs for the APIs. Modify the HTML files to point to those URLs and upload them to the S3 bucket. Make sure you make those files public.
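As a rough sketch, you could upload the pages with boto3 like this. The file names are assumed from this example, and the bucket must allow public ACLs through its Block Public Access settings:

import boto3

s3 = boto3.client('s3')

# Page names assumed from this example; adjust to your files
for page in ['login.html', 'showFiles.html']:
    s3.upload_file(
        page, 'stsexamplebucket', page,
        ExtraArgs={'ACL': 'public-read', 'ContentType': 'text/html'}
    )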

Testing

You need to create a bucket with the name specified in the FilesBucket parameter of the SAM template. This bucket will store the files to display.
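For instance, here is a minimal boto3 sketch that creates the bucket and seeds it with a few sample files; the file names are made up for illustration:

import boto3

s3 = boto3.client('s3', region_name='us-east-2')

# Bucket name must match the FilesBucket parameter in the SAM template
s3.create_bucket(
    Bucket='s3-sales-rep',
    CreateBucketConfiguration={'LocationConstraint': 'us-east-2'}
)

# Sample files (hypothetical names) so the showFiles page has something to list
for name in ['report-q1.pdf', 'report-q2.pdf']:
    s3.put_object(Bucket='s3-sales-rep', Key=name, Body=b'sample content')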


Summary

In summary, we used a custom identity broker that authenticates the user, and then AWS STS to allow those users access to AWS resources. You might wonder whether we could have simply given the Lambda execution role access to S3 instead of using STS. Of course you can, and it would work. But the idea here is to demonstrate STS, and you can use STS independently of Lambda, i.e. in your other applications: Spring Boot, Java, Python, etc.
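To illustrate that last point, here is a minimal standalone sketch of the same pattern outside Lambda. The profile name and role ARN are placeholders, and it assumes your local AWS credentials are trusted by the role's trust policy:

import boto3

# Local credentials (placeholder profile name) must be allowed to assume the role
session = boto3.Session(profile_name='default')
creds = session.client('sts').assume_role(
    RoleArn='arn:aws:iam::123456789012:role/S3ReadOnlyAccessAssumeRole',
    RoleSessionName='standalone-session'
)['Credentials']

# Use the temporary credentials exactly as the Lambda does
s3 = boto3.resource(
    's3',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
print([obj.key for obj in s3.Bucket('s3-sales-rep').objects.all()])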
In the next post, we will extend this further to use S3 pre-signed URLs to give access to the files stored in S3.

Feel free to point out any errors, make suggestions, and give your feedback in the comments!
