How to optimize your lambda functions with AWS Lambda power tuning

Sebastien Napoleon for AWS Community Builders

Whether you're a beginner or experienced with AWS, optimizing lambda functions is usually not the starting point for a serverless project. That's normal - at the beginning of a project, the priority is to get something functional, not necessarily optimized (at least, not right away). As a result, you often end up with one or more lambda functions that work, but could run faster or cost less!

I have the opportunity to work on a personal project with a 100% serverless architecture. In this project, I have several functions that make API calls in parallel. We're talking about functions that make thousands, if not tens of thousands, of calls. At the very beginning, we were close to the Lambda execution time limit (15 minutes). So, we first added multithreading to manage the API calls (instead of doing them one by one, we now do them in parallel). We were able to cut the time in half, or even by a factor of three for certain functions. And with lambdas, time is money.
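For reference, this kind of parallel fan-out needs nothing fancy; here is a minimal sketch using Python's standard library (the fetch_one function, the URL list, and the use of requests are placeholders, not the project's actual code):

from concurrent.futures import ThreadPoolExecutor

import requests  # assumes the API calls are plain HTTPS requests


def fetch_one(url):
    """Placeholder for a single API call."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return response.json()


def fetch_all(urls, max_workers=32):
    """Run the API calls in parallel instead of one by one."""
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        return list(executor.map(fetch_one, urls))

Inside a Lambda function, the number of workers is mostly bounded by the allocated memory and by what the downstream API tolerates, so max_workers is something to tune rather than a magic value.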


Being satisfied with the improvement, we decided to leave the functions as they were and focus on other features. After a few months, I noticed that we had the same default memory on each of our lambdas (1024 MB), and I asked myself the following question: could increasing or decreasing this memory on the "big" lambdas reduce the cost (and maybe even improve performance)?

That's when I remembered a tool I had heard about in an episode of the AWS podcast in French: AWS Lambda Power Tuning. This is the utility that I'm going to talk about today, the one that allowed me to save over 30% on my AWS Lambda bill.

AWS Lambda Power Tuning: How does it work?

This open-source tool (repository linked below) takes the form of a Step Function that you deploy on your AWS account. The Step Function runs your lambda several times with different memory configurations and outputs a comparison, as a graph or JSON, to help you find the optimal balance between cost and execution time. There are three optimization modes: cost, execution time, or a "balanced" mode that looks for a compromise between the two.

There are six different ways to deploy it on your account; you can find the instructions for all of them here: https://github.com/alexcasalboni/aws-lambda-power-tuning/blob/master/README-DEPLOY.md

The tool is then very simple to use. Go to the AWS Step Functions console, find the state machine whose name starts with "powerTuningStateMachine-", and press the "Start execution" button.

Once you have pressed the button, you will be presented with this interface:

Interface for the prompt of the Step Function

The part that interests us is the input, where we set up what we want to test. Here is the list of possible inputs for our execution:

  • lambdaARN: ARN of the lambda to execute
  • powerValues: Array representing the memory configurations to test
  • num: The number of lambda invocations to perform for each configuration (minimum 5, recommended between 10 and 100)
  • payload: Input parameters of the Lambda function
  • payloadS3: S3 location of the input payload (useful when the payload is too large to inline)
  • parallelInvocation: Allows invoking the lambda in parallel (watch out for throttling if enabled...)
  • strategy: The optimization strategy I mentioned earlier (cost, time or balanced)
  • balancedWeight: Parameter representing what you want to optimize (0 corresponds to the cost strategy and 1 to the time strategy)
  • autoOptimize: Automatically applies the best configuration at the end of the step function execution
  • autoOptimizeAlias: Creates or updates the alias with the new configuration
  • dryRun: Runs a single test invocation to check that everything is wired up correctly (IAM permissions, for example) without running the full tuning
  • preProcessorARN: ARN of a lambda to execute before each execution of the lambda to be tested
  • postProcessorARN: ARN of a lambda to execute after each execution of the lambda to be tested
  • discardTopBottom: Discards the fastest and slowest results (20% by default) so that outliers such as cold starts don't skew the averages
  • sleepBetweenRunsMs: Time to wait between each execution of the function under test, in milliseconds

There is a lot of information here, but we can focus on lambdaARN, num, and strategy to launch our first invocation. The rest can be explored after the first optimization if you are not yet satisfied.

For example, an input would look like this:



{
  "lambdaARN": "arn:aws:lambda:eu-west-1:xxxxxxxx:function:lambda-power-tuning-snapoleon-article",
  "powerValues": [
    700,
    1000,
    1500,
    2000,
    2500,
    3000
  ],
  "num": 30,
  "payload": {},
  "parallelInvocation": false,
  "strategy": "balanced"
}



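You can also skip the console and pass the same input to the state machine programmatically. Here is a minimal boto3 sketch (the function ARN is a placeholder, and it assumes the first page returned by list_state_machines is enough to find the tuner):

import json
import boto3

sfn = boto3.client("stepfunctions", region_name="eu-west-1")

# Find the power tuning state machine (its name starts with "powerTuningStateMachine-").
state_machines = sfn.list_state_machines()["stateMachines"]
tuner_arn = next(
    sm["stateMachineArn"]
    for sm in state_machines
    if sm["name"].startswith("powerTuningStateMachine-")
)

tuning_input = {
    "lambdaARN": "arn:aws:lambda:eu-west-1:123456789012:function:my-function",  # placeholder ARN
    "powerValues": [700, 1000, 1500, 2000, 2500, 3000],
    "num": 30,
    "payload": {},
    "parallelInvocation": False,
    "strategy": "balanced",
}

execution = sfn.start_execution(stateMachineArn=tuner_arn, input=json.dumps(tuning_input))
print(execution["executionArn"])

This is the same input you would paste into the console; the only difference is that start_execution expects it as a JSON string.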

If everything goes well, you should have a nice state graph for the step function like this one:

State graph of the Step Function execution

If you click on the last step, "Optimizer", you will have access to the output of the step with, among other things, the results of the state machine (also available under the "Execution input and output" panel). There, in JSON format, you will find the results for each configuration, as well as a link to a website that visualizes them as a graph. Each result for a given configuration will look like this:



{
  "averagePrice": 0.00003504375000000001,
  "averageDuration": 2135.3594444444443,
  "totalCost": 0.0010591546875000002,
  "value": 1000
}



and the graph will look something like this:

Comparison result graph

It is now time to show you the power of the tool with a concrete example.

Optimizing your Lambda function in practice

A good use case for optimization is pure computation, for example using a library like Pandas in Python. So we will try to optimize a Lambda function that uses the Pandas library. The code is very simple:



import json
import pandas as pd
import numpy as np


def lambda_handler(event, context):
    # generate random data
    data = np.random.randn(1500000, 10)
    df = pd.DataFrame(data)

    # apply compute onto data
    df = df.apply(lambda x: x**2)
    df = df.apply(lambda x: x + 10)

    df = df.apply(lambda x: x**2)
    df = df.apply(lambda x: x + 10)

    df = df.apply(lambda x: x**2)
    df = df.apply(lambda x: x + 10)

    # print results
    print(df)
    return {
        'statusCode': 200,
        'body': json.dumps('Please optimize me !')
    }


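If you want a quick sanity check before deploying, the handler can also be run locally (assuming pandas and numpy are installed and the code is saved as lambda_function.py; both are assumptions, not something the article requires):

# Local smoke test for the handler above (expects the code in lambda_function.py).
from lambda_function import lambda_handler

if __name__ == "__main__":
    print(lambda_handler({}, None))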

If you want to perform the test yourself, deploy the function and don't forget to add the layer containing the required libraries (Pandas and NumPy). Here is the ARN of the layer used here: arn:aws:lambda:eu-west-1:336392948345:layer:AWSSDKPandas-Python39:5.
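If you prefer to script that deployment, here is a minimal boto3 sketch; the role ARN, account ID, and zip file name are placeholders (the console, SAM, or CDK work just as well):

import boto3

lambda_client = boto3.client("lambda", region_name="eu-west-1")

# Assumes lambda_function.py has been zipped into function.zip beforehand.
with open("function.zip", "rb") as f:
    zipped_code = f.read()

lambda_client.create_function(
    FunctionName="lambda-power-tuning-snapoleon-article",
    Runtime="python3.9",
    Role="arn:aws:iam::123456789012:role/lambda-basic-execution",  # placeholder execution role
    Handler="lambda_function.lambda_handler",
    Code={"ZipFile": zipped_code},
    Timeout=120,
    MemorySize=1024,  # 1024 MB, the default used in the project described above
    Layers=["arn:aws:lambda:eu-west-1:336392948345:layer:AWSSDKPandas-Python39:5"],
)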

Once the function is deployed, and assuming you have already deployed AWS Lambda Power Tuning on your account, all you have to do is set up the execution of the state machine:



{
  "lambdaARN": "arn:aws:lambda:eu-west-1:xxxxxxx:function:lambda-power-tuning-snapoleon-article",
  "powerValues": [
    700,
    1000,
    1500,
    2000,
    2500,
    3000
  ],
  "num": 30,
  "payload": {},
  "parallelInvocation": false,
  "strategy": "balanced"
}



Here's what I used for our example. Please note that most of the time you will be limited to 3,008 MB of memory, which is a default "soft limit" on AWS accounts. Lambda supports up to 10,240 MB, but you will need to request a limit increase from AWS Support.

Once the parameters are filled in, we launch the execution and all we have to do is wait.
Two minutes later, everything is done and we have the results available:

Concrete example chart results

We notice that if we allocate 700 MB, the execution time is 3097 ms for a cost of $0.000036. By increasing the memory to 1000 MB, we can reduce the execution time to 2135 ms for a similar cost of $0.000035. If we increase to 1500 MB, the execution time drops to 1422 ms and the cost remains the same. Starting from 2000 MB, we reach a plateau in terms of execution time (we won't go below 1149 ms). Since the time won't decrease anymore from this point, the cost will only increase. For 2000 MB, we have a cost of $0.000038, $0.000049 for 2500 MB and $0.000059 for 3000 MB.
We can conclude from this execution that the sweet spot seems to be around 1500 or 2000 MB depending on what you're looking to optimize. If you want to save money, 1500 MB seems like a good candidate. On the other hand, if you're looking to optimize execution time, 2000 MB seems like the best choice.
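If you want to see where these per-invocation prices come from, they can be approximated from the public Lambda pricing model: duration × allocated memory (in GB) × the price per GB-second. A quick sketch, using the x86 price in eu-west-1 at the time of writing and ignoring the small per-request charge (which is why the results land close to, but not exactly on, the measured averages):

# Approximate compute cost per invocation from the measured durations above.
GB_SECOND_PRICE = 0.0000166667  # x86 Lambda price per GB-second (eu-west-1, at time of writing)


def invocation_cost(duration_ms, memory_mb):
    """Compute cost of one invocation, excluding the per-request charge."""
    return (duration_ms / 1000) * (memory_mb / 1024) * GB_SECOND_PRICE


for memory_mb, duration_ms in [(700, 3097), (1000, 2135), (1500, 1422), (2000, 1149)]:
    print(f"{memory_mb} MB -> ~${invocation_cost(duration_ms, memory_mb):.6f} per invocation")

Past the point where the duration stops shrinking, the memory term is the only factor still growing, which is exactly the plateau visible in the graph.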

To conclude

This tool is essential if you want to optimize the cost or execution time of your Lambda functions. It will allow you to make considerable savings on your AWS bill if you use the Lambda service frequently. It is also worth noting that this tool can be integrated into a CI/CD pipeline and executed with each deployment of your application. This way, you can keep your Lambda functions continuously tuned through your DevOps processes (but beware of drift in your IaC).
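As an illustration of that CI/CD idea, a pipeline step could start the tuning execution (as sketched earlier) and wait for the result before deciding whether to update the function's memory, or simply let autoOptimize apply it. A minimal polling sketch with boto3 (the execution ARN comes from start_execution; the exact shape of the output depends on the tool version):

import json
import time
import boto3

sfn = boto3.client("stepfunctions", region_name="eu-west-1")


def wait_for_tuning_result(execution_arn, poll_seconds=15):
    """Poll the power tuning execution and return its JSON output once it succeeds."""
    while True:
        execution = sfn.describe_execution(executionArn=execution_arn)
        status = execution["status"]
        if status == "SUCCEEDED":
            return json.loads(execution["output"])
        if status in ("FAILED", "TIMED_OUT", "ABORTED"):
            raise RuntimeError(f"Power tuning execution ended with status {status}")
        time.sleep(poll_seconds)


# Example usage in a pipeline step:
# result = wait_for_tuning_result(execution_arn)
# print(result)  # includes the recommended configuration and the visualization link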

Top comments (4)

Syed Ashrafulla

Interesting research! How close does AWS Lambda with power tuning get you before moving to Fargate to squeeze even more juice? I feel like AWS Lambdas are great until the small set of resource sizes doesn't fit your growing application.

Sebastien Napoleon

We are actually looking to migrate part of those scripts to an actual EC2 instance. It seems that it will benefit us, because those scripts are launched from an EventBridge scheduled event and EC2 (with ECS) will allow us to save money.

Alex Casalboni

Thanks for sharing, Sebastien :)

Rakesh Sanghvi

This might be interesting for you, check out my post:

dev.to/awsmantra/100-millions-lamb...