If you've worked with databases, you likely have given some thought to where to store your DB credentials. Should they live on the server along with your application? Should they be in environment variables? Should they be in a vault, and you query them with every request?
In this article, we will create a simple application that uses RDS MySQL as its storage and AWS Lambda to run our code. The app will retrieve a list of users from the database. We will use a single table and a single Lambda function.
We will go through multiple iterations covering credentials management in different ways, starting with incorrect practices and working towards correctly handling secrets in a production system.
As part of this process, let's build our infrastructure using Terraform because it's fun.
The resources provisioned throughout this post might incur some cost.
Please do not use this code without understanding what it does. The database is publicly accessible in the snippets below, whereas in a real-life scenario it is essential to have it secured in a VPC.
Build the app with credentials as env variables or in plain text
In this first iteration, we will generate credentials to access the DB and use them to connect from Lambda. Of course, it's not a good idea to store the credentials in plain text, but we'll start with that and evolve it to be secure throughout the article.
Create an "infrastructure" folder in your project where we will place the infrastructure code.
Create the database
First, let us create a MySQL database in an rds.tf file:
module "db" {
source = "terraform-aws-modules/rds/aws"
identifier = "demodb"
engine = "mysql"
engine_version = "8.0.30"
instance_class = "db.t3.micro"
allocated_storage = 5
db_name = "demodb"
username = "user"
port = "3306"
family = "mysql8.0"
# DB option group
major_engine_version = "8.0"
publicly_accessible = true
skip_final_snapshot = true
}
output "db_instance_password" {
sensitive = true
value = module.DB.db_instance_password
}
I am using pre-built Terraform modules for the database resources rather than building them from scratch (https://registry.terraform.io/modules/terraform-aws-modules/rds/aws/latest).
Create your infrastructure by running the following command: terraform apply -var-file=variables.tfvars. When we create the database using this module, it will generate a random password as an output (that's the password of the master database user). The output block will store the generated password in the Terraform state - by running terraform output -json, you get a JSON object with that password:
{
  "db_instance_password": {
    "sensitive": true,
    "type": "string",
    "value": "pgO5Wle0vXrJX2CL"
  }
}
That's NOT a good way to display and store a password, as it's visible to anyone who might have access to your Terraform state (in our case, it's a local terraform.tfstate file that got generated when we applied the infrastructure).
You might have noticed that I have publicly_accessible = true in my configuration. Don't do this in production. That's my lazy workaround to access the DB from my local machine to add data and to avoid setting up fully fleshed-out network & security patterns. It's also better to create your table through code. For this example, we'll connect to the instance manually and run the following scripts:
CREATE TABLE users (
  id INTEGER,
  first_name VARCHAR(40),
  last_name VARCHAR(40),
  user_type ENUM ('admin', 'finance', 'dev')
);

INSERT INTO users (id, first_name, last_name, user_type)
VALUES
  (1, 'Pam', 'Beesly', 'finance'),
  (2, 'Dwight', 'Schrute', 'admin'),
  (3, 'Michael', 'Scott', 'finance');
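To run these statements from your local machine, one option is the mysql CLI pointed at the instance endpoint (you can find it in the RDS console). A sketch, using the endpoint from this example and the generated master password when prompted:

mysql -h demodb.cvwlm7q0gjdn.ap-southeast-2.rds.amazonaws.com -P 3306 -u user -p demodb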
Create a Lambda function
Create a new file under "infrastructure" - call it lambda.tf - and update it with the following config:
data "aws_iam_policy_document" "lambda_assume_role" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["lambda.amazonaws.com"]
}
}
}
resource "aws_iam_role" "lambda_execution_role" {
name = "lambda-get-users-exec-role"
assume_role_policy = data.aws_iam_policy_document.lambda_assume_role.json
}
resource "aws_lambda_function" "get_users_lambda" {
function_name = "get-users-lambda"
handler = "index.handler"
runtime = "nodejs16.x"
filename = "lambda_function.zip"
source_code_hash = filebase64sha256("lambda_function.zip")
role = aws_iam_role.lambda_execution_role.arn
environment {
variables = {
"RDS_HOSTNAME" = "demodb.cvwlm7q0gjdn.ap-southeast-2.rds.amazonaws.com"
"RDS_USERNAME" = "user"
"RDS_PASSWORD" = "pgO5Wle0vXrJX2CL"
"RDS_PORT" = "3306"
"RDS_DATABASE" = "demodb"
}
}
}
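One thing this config doesn't include is permission for the function to write logs to CloudWatch. If you want to see the console.log output later, you'll likely also want to attach the AWS-managed basic execution policy to the role - a minimal sketch:

resource "aws_iam_role_policy_attachment" "lambda_basic_execution" {
  role       = aws_iam_role.lambda_execution_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}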
Did you notice that we included the credentials as environment variables? That is NOT a good practice, as we now have the password in the Terraform state, the Terraform code, and the Lambda configuration (anyone with access to these systems can see the credentials). We will change this in future iterations.
Create a new src folder at the same level as the infrastructure folder.
- Run npm init inside the folder
- Run npm i mysql to install the package that enables us to connect to and query the database
- Add an index.js file with the following content:
const MySQL = require("mysql");
const con = MySQL.createConnection({
host: process.env.RDS_HOSTNAME,
user: process.env.RDS_USERNAME,
password: process.env.RDS_PASSWORD,
port: process.env.RDS_PORT,
database: process.env.RDS_DATABASE,
});
exports.handler = async (event) => {
const SQL = "SELECT * FROM users";
return new Promise((resolve, reject) => {
con.query(SQL, function (err, result) {
if (err) throw err;
console.log("========Executing Query=======");
const response = {
statusCode: 200,
body: JSON.stringify(result),
};
resolve(response);
});
});
};
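Note that lambda.tf references lambda_function.zip, so the function code has to be packaged before applying. A minimal sketch, assuming you run Terraform from the "infrastructure" folder and the dependencies live in node_modules:

cd src
npm install
zip -r ../infrastructure/lambda_function.zip index.js node_modules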
After running terraform apply again, you should see your new Lambda function provisioned in your AWS Console (check the configuration).
Open the created lambda function from the AWS console and click the "Test" button. Notice the duration taken to execute the function:
Duration: 237.95 ms Billed Duration: 238 ms Memory Size: 128 MB Max Memory Used: 70 MB Init Duration: 253.66 ms
We will use these values to benchmark against other solutions to better understand the tradeoffs.
Build the app with credentials stored in SSM Parameter Store
It doesn't require much thought to identify the risk involved in keeping secrets as plain text, even if your code is private and your AWS account is not accessible to everyone. It's easy to make a mistake or keep a door open that would be an open invitation to hackers.
Alright, we generated the password during the infrastructure provisioning, and we retrieved it by running terraform output -json. Why don't we store it in SSM Parameter Store and retrieve it as needed? This way, we don't need to put it in code, and no one will see it as an environment variable.
We have a few changes to make:
- Create a new ssm.tf file under the "infrastructure" folder:
resource "aws_ssm_parameter" "rds_password" {
name = "RDS_PASSWORD"
type = "SecureString"
value = "placeholder"
}
- Run terraform apply
This will create a secure parameter in the SSM Parameter Store. In real life, it might be a good idea to keep all of the environment variables in the Parameter Store - you could place them under a path and query all of them with a single request from within your Lambda. For now, I will only store the password.
In the AWS Console, navigate to Systems Manager -> Parameter Store, and fill in your password instead of the "placeholder" value we left in ssm.tf.
Let's remove this value from the Lambda environment variables in lambda.tf. Then let's update our Lambda code to retrieve the value from SSM.
- Run npm install @aws-sdk/client-ssm
- Update the index.js file as follows:
import { SSMClient, GetParameterCommand } from "@aws-sdk/client-ssm";
import mysql from "mysql";

// Fetch the DB password from SSM Parameter Store during the init phase
const ssmClient = new SSMClient();
const input = {
  Name: "RDS_PASSWORD",
  WithDecryption: true,
};
const command = new GetParameterCommand(input);
const rdsPassword = await ssmClient.send(command);

const con = mysql.createConnection({
  host: process.env.RDS_HOSTNAME,
  user: process.env.RDS_USERNAME,
  password: rdsPassword.Parameter.Value,
  port: process.env.RDS_PORT,
  database: process.env.RDS_DATABASE,
});

export const handler = async (event) => {
  const sql = "SELECT * FROM users";
  return new Promise((resolve, reject) => {
    con.query(sql, function (err, result) {
      if (err) return reject(err);
      console.log("========Executing Query=======");
      const response = {
        statusCode: 200,
        body: JSON.stringify(result),
      };
      resolve(response);
    });
  });
};
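Two notes here. First, since this version of the code uses ES module imports and top-level await, the handler file must be treated as an ES module (for example, rename it to index.mjs, or add "type": "module" to package.json). Second, the Lambda execution role needs permission to read the parameter - the snippets above don't show this, but a minimal sketch could look like the following (resource names are illustrative; if you encrypt the parameter with a customer-managed KMS key, you would also need kms:Decrypt on that key):

data "aws_iam_policy_document" "lambda_ssm_policy_document" {
  statement {
    actions   = ["ssm:GetParameter"]
    resources = [aws_ssm_parameter.rds_password.arn]
  }
}

resource "aws_iam_policy" "lambda_ssm_policy" {
  name   = "lambda_ssm_parameter_access"
  policy = data.aws_iam_policy_document.lambda_ssm_policy_document.json
}

resource "aws_iam_role_policy_attachment" "lambda_ssm_policy_attachment" {
  role       = aws_iam_role.lambda_execution_role.name
  policy_arn = aws_iam_policy.lambda_ssm_policy.arn
}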
The duration taken to execute the function is:
Duration: 104.29 ms Billed Duration: 105 ms Memory Size: 128 MB Max Memory Used: 93 MB Init Duration: 779.47 ms
Notice how the "init" duration is now 779.47 ms
compared to when we had the password in the environment variable (253.66 ms
). That's because we have more code running in the function init to connect to SSM and get the password.
That's a bit slower, but it's way more secure than the first instance.
Build the app with credentials stored in AWS Secrets Manager
AWS Secrets Manager shares a few functionalities with SSM Parameter Store. The difference is that Secrets Manager supports secret rotation and integrates with RDS - that's a very significant security improvement over what we had earlier.
First, let us create the Secrets Manager resource. Create a new file secrets_manager.tf under the "infrastructure" folder:
resource "aws_secretsmanager_secret" "rds_secret" {
name = "secret-manager-rds"
description = "secret for RDS"
}
resource "aws_secretsmanager_secret_version" "rds_credentials" {
secret_id = aws_secretsmanager_secret.rds_secret.id
secret_string = "placeholder"
}
In real life, I avoid manually opening the AWS console and pasting secrets. It's better to have the secret automatically generated and passed in Terraform without any manual intervention - here's an example: https://stackoverflow.com/a/68200795/1263668. For now, we'll use the console for convenience.
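For reference, a minimal sketch of that approach using the random provider, replacing the placeholder secret version above (resource names are illustrative; note that the generated value will still appear in the Terraform state, so the state file itself must be protected):

resource "random_password" "rds_password" {
  length  = 20
  special = false
}

resource "aws_secretsmanager_secret_version" "rds_credentials" {
  secret_id     = aws_secretsmanager_secret.rds_secret.id
  secret_string = random_password.rds_password.result
}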
After running terraform apply, you can see the newly created secret in your AWS Console -> Secrets Manager.
- Click "Retrieve secret value"
- Click the "Edit" button and enter the value of your password instead of the placeholder
Next, let us delete the SSM Parameter Store parameter we created (delete ssm.tf) and update the Lambda to use Secrets Manager instead.
- Update the lambda execution role to permit it to interact with Secrets Manager. In the iam.tf file:
resource "aws_iam_policy" "lambda_policies" {
name = "lambda_secrets_manager_access"
description = "lambda access to secrets manager"
policy = data.aws_iam_policy_document.lambda_policies_document.json
}
data "aws_iam_policy_document" "lambda_policies_document" {
statement {
actions = [
"secretsmanager:GetSecretValue"
]
resources = [aws_secretsmanager_secret.rds_secret.arn]
}
}
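The policy still needs to be attached to the Lambda's execution role for the GetSecretValue call to succeed - a minimal sketch of that attachment, assuming the role defined earlier in lambda.tf:

resource "aws_iam_role_policy_attachment" "lambda_policies_attachment" {
  role       = aws_iam_role.lambda_execution_role.name
  policy_arn = aws_iam_policy.lambda_policies.arn
}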
- In the lambda.tf file, add the following environment variable:
"SECRET_MANAGER_RESOURCE_ID" = aws_secretsmanager_secret.rds_secret.id
- Update the lambda code in index.js: change the code before the handler function to the following:
import {
  SecretsManagerClient,
  GetSecretValueCommand,
} from "@aws-sdk/client-secrets-manager";
import mysql from "mysql";

// Fetch the DB password from Secrets Manager during the init phase
const client = new SecretsManagerClient();
const input = {
  SecretId: process.env.SECRET_MANAGER_RESOURCE_ID,
};
const command = new GetSecretValueCommand(input);
const rdsPassword = await client.send(command);

const con = mysql.createConnection({
  host: process.env.RDS_HOSTNAME,
  user: process.env.RDS_USERNAME,
  password: rdsPassword.SecretString,
  port: process.env.RDS_PORT,
  database: process.env.RDS_DATABASE,
});
Test the Lambda function.
The duration taken to execute the function is (the init duration is slightly faster than with SSM parameter store but still slower than having the secret passed as an env variable):
Duration: 109.62 ms Billed Duration: 110 ms Memory Size: 128 MB Max Memory Used: 86 MB Init Duration: 597.35 ms
Rotate the secrets in AWS Secrets Manager
So far, there isn't much difference between "SSM Parameter Store" and "Secrets Manager". However, AWS Secrets Manager offers a feature to rotate secrets. How awesome is that! - AWS Secrets Manager uses a Lambda function to rotate the secret and sync it to your database in RDS.
Unfortunately, it's more complicated than selecting a checkbox to enable the secret rotation. There is some configuration to change, including creating the rotator lambda.
- Create a new Lambda function that will handle the rotation - here is a template: https://github.com/aws-samples/aws-secrets-manager-rotation-lambdas/blob/master/SecretsManagerRDSMySQLRotationMultiUser/lambda_function.py - I created it in a separate folder within the same repository. If you are, like me, new to Python, this guide on packaging Python Lambdas is helpful: https://docs.aws.amazon.com/lambda/latest/dg/python-package.html.
- Add the following to the lambda.tf file:
resource "aws_lambda_function" "secret_rotator" {
function_name = "secret-rotator-lambda"
handler = "rotate.lambda_handler"
runtime = "nodejs16.x"
filename = "lambda_rotator_function.zip"
source_code_hash = filebase64sha256("lambda_rotator_function.zip")
role = aws_iam_role.lambda_rotator_execution_role.arn
depends_on = [
aws_iam_role_policy_attachment.lambda_rotator_policies_attachment
]
}
- In the iam.tf file, add the following - we need to give the rotator lambda access to rotate the secret and permit Secrets Manager to invoke the Lambda:
data "aws_iam_policy_document" "secrets_access_policy_document" {
statement {
actions = [
"secretsmanager:DescribeSecret",
"secretsmanager:GetSecretValue",
"secretsmanager:PutSecretValue",
"secretsmanager:UpdateSecretVersionStage",
]
resources = [aws_secretsmanager_secret.rds_secret.arn]
}
statement {
actions = [
"secretsmanager:GetRandomPassword"
]
resources = ["*"]
}
statement {
actions = [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
]
resources = [
"arn:aws:logs:*:*:*",
]
}
}
resource "aws_iam_policy" "secrets_access_policy" {
name = "secrets-access-policy"
policy = data.aws_iam_policy_document.secrets_access_policy_document.json
}
resource "aws_iam_role_policy_attachment" "lambda_rotator_policies_attachment" {
role = aws_iam_role.lambda_rotator_execution_role.name
policy_arn = aws_iam_policy.secrets_access_policy.arn
}
resource "aws_lambda_permission" "allow_secrets_manager" {
statement_id = "AllowExecutionFromSecretsManager"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.secret_rotator.function_name
principal = "secretsmanager.amazonaws.com"
source_arn = aws_secretsmanager_secret.rds_secret.arn
}
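Two pieces are referenced above but not shown: the rotator Lambda's execution role, and the resource that actually wires the rotation schedule to the secret. A minimal sketch of both, assuming we reuse the same assume-role policy document as the first Lambda (the role name and the 30-day schedule are illustrative). Also check the rotation template you deploy - the AWS sample rotation functions read the Secrets Manager endpoint from a SECRETS_MANAGER_ENDPOINT environment variable, so you may need to set that on the rotator function.

resource "aws_iam_role" "lambda_rotator_execution_role" {
  name               = "lambda-secret-rotator-exec-role"
  assume_role_policy = data.aws_iam_policy_document.lambda_assume_role.json
}

resource "aws_secretsmanager_secret_rotation" "rds_secret_rotation" {
  secret_id           = aws_secretsmanager_secret.rds_secret.id
  rotation_lambda_arn = aws_lambda_function.secret_rotator.arn

  rotation_rules {
    automatically_after_days = 30
  }
}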
- Update the "Get Users" lambda to read all the info from the secrets manager rather than the env variables. In the
index.js
file, change the code just before the handler function to:
const response = await client.send(command);
const secret = JSON.parse(response.SecretString);

const con = mysql.createConnection({
  host: secret.host,
  user: secret.username,
  password: secret.password,
  port: secret.port,
  database: secret.dbname,
});
- Remove the above environment variables from the lambda.tf file
- Apply your changes (repackage your Lambda, then run terraform apply)
Navigate to the AWS Console, and notice how the Rotation Configuration section of the secret is updated.
Upon the first deployment, the secret should rotate, and you can see this in the CloudWatch logs - you can also see the new secret value in Secrets Manager.
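Note that for this to work, the secret can no longer be a plain string: both the rotation function and the updated Lambda code expect a JSON object describing the connection. Something along these lines (values are the illustrative ones used earlier in this article):

{
  "engine": "mysql",
  "host": "demodb.cvwlm7q0gjdn.ap-southeast-2.rds.amazonaws.com",
  "username": "user",
  "password": "pgO5Wle0vXrJX2CL",
  "dbname": "demodb",
  "port": 3306
}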
That's way more secure as we can rotate the secret often; still, we have a master password that we use, and we need to keep it away from casual access.
Connect to the database without Secrets
Another way to connect to the RDS database is to use IAM user or role credentials and an authentication token instead of a username/password.
Before getting into the implementation, be aware that IAM database authentication imposes a limit of around 200 new authentication connections per second. Have a look at the limitations and recommendations in the AWS docs.
Now let's get to the implementation:
First, we need to activate IAM database authentication. We will do this in Terraform: in the rds.tf file, add the following attribute: iam_database_authentication_enabled = true.
Second, we will create a database user that authenticates with an AWS auth token: CREATE USER lambda_user IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS';
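This user has no privileges yet, so the Lambda's SELECT would fail. You'll likely want to grant it at least read access to our table - for example (adjust the grants to your own needs):

GRANT SELECT ON demodb.* TO 'lambda_user'@'%';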
Next, we need to create an IAM policy that allows connecting to the DB as the created user and attach it to the execution role of the Lambda:
data "aws_iam_policy_document" "lambda_rds_connect_policy_document" {
statement {
actions = [
"rds-db:connect"
]
resources = ["arn:aws:rds-db:${var.region}:${data.aws_caller_identity.current.account_id}:dbuser:${module.db.db_instance_resource_id}/lambda_user"]
}
}
resource "aws_iam_policy" "lambda_rds_connect_policy" {
name = "lambda_connect_to_rds"
description = "lambda access to connect to rds"
policy = data.aws_iam_policy_document.lambda_rds_connect_policy_document.json
}
resource "aws_iam_role_policy_attachment" "lambda_policies_attachment_db_connect_policy" {
role = aws_iam_role.lambda_execution_role.name
policy_arn = aws_iam_policy.lambda_rds_connect_policy.arn
}
Note that we are using db_instance_resource_id here - I mistakenly used db_instance_id first, which cost me a couple of hours of investigation.
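The policy above also references var.region and data.aws_caller_identity.current, which aren't defined elsewhere in the snippets. If you don't already have them, a minimal sketch (the default region is just the one used in this example):

variable "region" {
  type    = string
  default = "ap-southeast-2"
}

data "aws_caller_identity" "current" {}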
The next step would be to modify the Lambda code. Update the index.js file before the handler function, as follows:
import {
  SecretsManagerClient,
  GetSecretValueCommand,
} from "@aws-sdk/client-secrets-manager";
import mysql from "mysql2";
import { Signer } from "@aws-sdk/rds-signer";

// Read the connection details (host, port, dbname) from Secrets Manager
const client = new SecretsManagerClient();
const input = {
  SecretId: process.env.SECRET_MANAGER_RESOURCE_ID,
};
const command = new GetSecretValueCommand(input);
const response = await client.send(command);
const secret = JSON.parse(response.SecretString);

// Generate a short-lived IAM authentication token instead of using a password
const signer = new Signer({
  hostname: secret.host,
  port: secret.port,
  username: "lambda_user",
});
const token = await signer.getAuthToken();

const con = mysql.createConnection({
  host: secret.host,
  user: "lambda_user",
  password: token,
  port: secret.port,
  database: secret.dbname,
  ssl: "Amazon RDS",
});
Note that we changed the MySQL client we are using in our Lambda from mysql to mysql2, as mysql does not support this authentication method. So make sure to run npm i mysql2.
Using the mysql package would throw this error: "ER_NOT_SUPPORTED_AUTH_MODE: Client does not support authentication protocol requested by server; consider upgrading MySQL client".
I hope this article was helpful, and I would love to hear your thoughts. In summary:
- Don't use env variables or config to store your credentials
- Using Parameter Store is a good first option if you would like to secure your credentials
- The best option is using Secrets Manager with password rotation, in my opinion, especially if you're using RDS
- IAM Authentication is the most secure, in my opinion, but has a few limitations, so it's good to consider them before using it in a production environment.
Thanks for reading this far. Did you like this article, and do you think others might find it useful? Feel free to share it on Twitter or LinkedIn.