Renato Byrro
How to Schedule Any Task with AWS Lambda

Did you know it is possible to schedule virtually any task to run in the cloud with AWS Lambda?

I have been asked how to do it a few times by people using Dashbird, so I thought I would summarize it in an article for public reference.

An event-driven approach is perhaps the most powerful aspect of AWS Lambda. In short, Lambda can automatically respond to many kinds of events inside AWS or externally.

This behavior can be leveraged to act as a task scheduler, which is what we’ll cover in this article.

DynamoDB TTL

DynamoDB is like Lambda for data storage: a serverless, highly scalable, fully managed NoSQL database.

It has a cool feature called TTL, which stands for "time to live". TTL allows us to assign a timestamp for the deletion of any entry in the DB. When that time comes, a background job in DynamoDB will automatically delete the entry for us.

Another interesting feature is table streams: DynamoDB will track every change made to items (DB entries) in a particular table and generate a stream of records describing those changes. These streams can be consumed by a variety of services, including... you guessed it, Lambda!


These two features can be combined to transform DynamoDB into a task scheduler. Here's how this would work:

First, create an item with a TTL

import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('MyTable')

table.put_item(
    Item={
        'customerID': 'ABC123',
        'task': {
            'action': 'alert_expiration',
            'target': 'subscription',
            'args': {
                'subscriptionID': 'XYZ123'
            }
        },
        # Unix timestamp for December 31, 2019, 00:00 UTC
        'ExpirationTime': 1577750400
    }
)

In the example above, we are scheduling a task for December 31, 2019: alerting a customer about a subscription that is close to expiration.

Note the ExpirationTime attribute we set for the item. For this to work, TTL must be enabled on the table, with ExpirationTime configured as the TTL attribute (a sketch of the setup follows below). DynamoDB will constantly compare the TTL attribute of items in our table against the current timestamp. When an item is overdue, Dynamo will delete it.
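Enabling TTL is a one-time setup step. Here is a minimal sketch with boto3, assuming the table and attribute names from the example above:

import boto3

dynamodb = boto3.client('dynamodb')

# Tell DynamoDB which attribute holds the expiration timestamp
dynamodb.update_time_to_live(
    TableName='MyTable',
    TimeToLiveSpecification={
        'Enabled': True,
        'AttributeName': 'ExpirationTime'
    }
)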

When the item gets deleted, DynamoDB Streams will trigger our Lambda function. If we set the stream view type to OLD_IMAGE (or NEW_AND_OLD_IMAGES), the stream record will also carry the contents of the deleted item, so that Lambda knows what to do.

In the Lambda code, we need to implement our logic for processing the tasks. In the present example, it could be sending an email to the customer using a pre-determined template.
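Here is a minimal sketch of what such a handler could look like, assuming the OLD_IMAGE stream view type mentioned above. The userIdentity check is how DynamoDB marks TTL deletions (principal dynamodb.amazonaws.com), so we don't react to manual deletes; send_alert_email is a hypothetical helper standing in for our business logic:

def handler(event, context):
    for record in event['Records']:
        # Only react to deletions performed by the TTL background job
        is_ttl_delete = (
            record['eventName'] == 'REMOVE'
            and record.get('userIdentity', {}).get('principalId') == 'dynamodb.amazonaws.com'
        )
        if not is_ttl_delete:
            continue

        # OLD_IMAGE carries the item as it was just before deletion,
        # in DynamoDB's typed JSON format
        item = record['dynamodb']['OldImage']
        action = item['task']['M']['action']['S']

        if action == 'alert_expiration':
            send_alert_email(item)  # hypothetical helper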

Important: DynamoDB does not guarantee immediate deletion; expired items are typically removed within 48 hours of expiration. This is relevant because, if your use case requires precise time resolution, the implementation above is not recommended.

S3 Object Expiration

S3 is like Lambda for object storage. It can reliably store anything from text files to images and videos, organized in buckets, and scales seamlessly with demand.


Using S3 as a task scheduler for AWS Lambda is very similar to the DynamoDB streams approach. Through a lifecycle rule, we can set an expiration date for objects (matched by prefix or tag); S3 will scan our objects regularly and delete the expired ones.

Events in S3, such as object deletion and lifecycle expiration, can also trigger a Lambda function, much like DynamoDB streams.

We can store a JSON file on S3 containing instructions for our Lambda function to process when the time comes.

Here's how it works:

import json
from datetime import datetime
import boto3

s3 = boto3.client('s3')

data = {
    'customerID': 'ABC123',
    'task': {
        'action': 'alert_expiration',
        'target': 'subscription',
        'args': {
            'subscriptionID': 'XYZ123'
        }
    }
}

# Store the task under a prefix covered by the lifecycle rule below
s3.put_object(
    Body=json.dumps(data),
    Bucket='my-task-scheduler',
    Key='tasks/test-scheduler.json'
)

# Expire everything under 'tasks/' on the given date; lifecycle
# dates must be midnight UTC, so the resolution is one day
s3.put_bucket_lifecycle_configuration(
    Bucket='my-task-scheduler',
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'expire-scheduled-tasks',
            'Filter': {'Prefix': 'tasks/'},
            'Status': 'Enabled',
            'Expiration': {'Date': datetime(2019, 12, 31)}
        }]
    }
)

Again, note the Expiration date in the lifecycle rule, which sets when matching objects are due for deletion. A caveat: unlike DynamoDB TTL, the date belongs to the rule rather than to each object, and it resolves to whole days. (The Expires parameter of put_object, by contrast, only sets an HTTP caching header and does not delete anything.)

When a matching object actually gets deleted, S3 can invoke our function, providing the bucket name and object key. One caveat: by the time the event fires, the object itself is gone, so a plain GET will fail. We can either encode the task instructions in the object key itself, or enable bucket versioning, in which case expiration only creates a delete marker and the Lambda can still read the task data from the noncurrent version.
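Here is a minimal handler sketch under those assumptions: a versioned bucket whose expiration notifications point at this function, with the most recent noncurrent version of the key holding the task JSON (send_alert_email is again a hypothetical helper):

import json
import boto3

s3 = boto3.client('s3')

def handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']

        # The current "version" is now a delete marker; the task data
        # lives in the most recent noncurrent version
        versions = s3.list_object_versions(Bucket=bucket, Prefix=key)
        latest = versions['Versions'][0]

        obj = s3.get_object(Bucket=bucket, Key=key, VersionId=latest['VersionId'])
        task = json.loads(obj['Body'].read())

        if task['task']['action'] == 'alert_expiration':
            send_alert_email(task)  # hypothetical helper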

Important: as with DynamoDB, AWS does not guarantee that an object will be deleted right after it expires, making this implementation unsuitable for use cases that require precise time resolution.

CloudWatch Rule Schedule


CloudWatch is perhaps the most obvious choice for Lambda task scheduling, and it could not be more straightforward: in the CloudWatch console, choose a "Schedule" as the event source and a Lambda function as the "Target".
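The same can be done programmatically. Here is a minimal sketch with boto3 (the rule name and function ARN are illustrative):

import boto3

events = boto3.client('events')

# Create (or update) a rule that fires every 2 hours
events.put_rule(
    Name='my-schedule',
    ScheduleExpression='rate(2 hours)'
)

# Point the rule at our Lambda function
events.put_targets(
    Rule='my-schedule',
    Targets=[{
        'Id': 'my-lambda-target',
        'Arn': 'arn:aws:lambda:us-east-1:123456789012:function:my-function'
    }]
)

The function also needs a resource-based policy allowing events.amazonaws.com to invoke it, which can be added with Lambda's add_permission API.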

There are mainly two disadvantages, though:

1) Only supports recurrent schedules

We can only choose a fixed rate, e.g. 'every 2 hours' or 'every 5 minutes', or a cron-like expression.

It is not possible to schedule an event for one specific date, for example, as we did with DynamoDB and S3.

2) No customized events

Each and every invocation of the Lambda by a CloudWatch Rule Schedule will provide the exact same request payload. We cannot customize how Lambda will run based only on what the scheduler provides, like we did with DynamoDB and S3.


Wrapping up


We have explored three ways of using an event-driven approach to schedule tasks for AWS Lambda to process, each with its pros and cons.

There is one challenge in using this sort of architecture: keeping control and visibility over what is going on in your system. Autonomous invocations can be difficult to track, especially as the application starts scaling up.

In case you would like to stay on top of your stack while still reaping the benefits of a serverless event-driven architecture, I would recommend checking out Dashbird, a monitoring service specially designed for this use case.

Disclosure: I work as a developer advocate for Dashbird.

Top comments (7)

James

We cannot customize how Lambda will run based only on what the scheduler provides, like we did with DynamoDB and S3.

Using the "Constant JSON text" input, as the screenshot shows, you can configure the input for the function, so you could have multiple schedules for the same function with different inputs:

rate(1 minute) → {"rate":"EVERY-MINUTE"}
cron(*/10 * * * ? *) → {"rate":"EVERY-TEN-MINUTES"}
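For reference, the same knob is exposed in the API as the Input field of a rule target. A minimal sketch with boto3 (rule name and ARN are illustrative):

import json
import boto3

events = boto3.client('events')

events.put_targets(
    Rule='every-ten-minutes',
    Targets=[{
        'Id': 'my-lambda-target',
        'Arn': 'arn:aws:lambda:us-east-1:123456789012:function:my-function',
        # Constant JSON text delivered as the event payload
        'Input': json.dumps({'rate': 'EVERY-TEN-MINUTES'})
    }]
)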
Renato Byrro

Hi James, thanks for the comment!

I'm replying a bit late, but what I meant was that DynamoDB and S3 can provide more information to be processed. In the case of Dynamo, we can receive the entire item with the invocation, as well as its state prior to the change.

With CloudWatch Rules, it's just a rule. Unless it's a self-contained task, Lambda will probably need to fetch more data from somewhere else to actually process what it's supposed to.

Alessandro Annini

Hi Renato,
thanks for your interesting article.
What if my task needs to be executed at (almost) exactly the time specified? Do you have any advice?

Thanks

Renato Byrro

Hi Alessandro, glad you liked the post. Thanks for the comment, that is a very interesting question.

You'll need to implement custom code. I can think of a few ideas, but they need more investigation to turn into a proper architecture. Could you detail your use case a little more?

One naive idea would be:

  1. Set up a Lambda to run every minute, triggered by a CloudWatch Rule.
  2. Store tasks in a DynamoDB table, indicating the precise time to execute.
  3. Lambda will query this table and get all tasks scheduled between start = current_timestamp + 30 seconds and end = current_timestamp + 90 seconds (the 30-second start is an offset to account for Lambda startup time; it needs to be adjusted according to a number of factors).
  4. Implement one or more additional Lambdas to process each type of task.
  5. The first Lambda will invoke these executor Lambdas, passing the task.
  6. Each executor Lambda could implement a "while" loop that checks whether current_timestamp == task_execution_timestamp. When it evaluates to true, it executes the task (a sketch of this follows below).
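A minimal sketch of that last step, assuming the task payload carries an execute_at epoch timestamp set by the scheduler (run_task is a hypothetical helper):

import time

def executor_handler(event, context):
    execute_at = event['execute_at']  # epoch seconds

    # Busy-wait until the scheduled moment, sleeping in small
    # increments to get sub-second precision
    while time.time() < execute_at:
        time.sleep(0.005)

    run_task(event['task'])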

I said it's a naive idea because it ignores some important things:

What does "exactly the time specified" mean to you?

Is it enough to run the task on a given second? Or do you need time resolution down to the millisecond, maybe microsecond?

That will have an impact on the implementation. Some programming languages only resolve time down to milliseconds.

How much deviation can you accept to meet the "almost" requirement?

If you're using AWS Lambda, beware that you can't control which machine runs your code. It could be multiple machines over a given period of time; it's actually most likely to be a different machine after every cold start.

This has important implications since there are issues with syncing clocks on distributed systems.

Depending on how much deviation you can accept in the "almost the exact time", this can be a problem.

Scalability

How many tasks do you expect to schedule and how are they distributed over time?

Is it possible that you'll have 50,000 tasks to run in a given millisecond? If yes, the challenge will be setting up an infrastructure that can scale to that level of concurrent requests.

Reliability (in general, not only infra-wise)

What happens if the triggering process of a task fails, or if the task executor fails entirely and a block of tasks is not executed at all?

Do you need a system in place to check for that and retry the task, or can you afford to have some tasks lost?

Will it be too late if a few seconds have passed before retrying?

Is it a problem if, occasionally, the same task gets executed twice? If yes, a proper locking mechanism needs to be in place to ensure each task is processed once and only once.

Renato Byrro

By the way, make sure you research libraries that could help with the implementation. For example, Python has the Celery project, which can help with scheduling tasks with precise timing.
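For instance, Celery lets you schedule a task for a specific moment with the eta argument of apply_async. A minimal sketch (the broker URL and task body are illustrative):

from datetime import datetime, timedelta
from celery import Celery

app = Celery('scheduler', broker='redis://localhost:6379/0')

@app.task
def alert_expiration(subscription_id):
    # Business logic goes here, e.g. emailing the customer
    print(f'Alerting for subscription {subscription_id}')

# Run the task 30 minutes from now
alert_expiration.apply_async(
    args=['XYZ123'],
    eta=datetime.utcnow() + timedelta(minutes=30)
)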

Chris Armstrong

Another way to schedule tasks (albeit within a 15-minute time frame) is to post a message to an SQS queue with a DelaySeconds parameter. This can be useful for implementing a polling/recurrent task that is started on demand and terminates itself.
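A minimal sketch of this approach with boto3 (the queue URL is illustrative; DelaySeconds maxes out at 900 seconds, i.e. 15 minutes). A Lambda subscribed to the queue only receives the message after the delay elapses:

import json
import boto3

sqs = boto3.client('sqs')

sqs.send_message(
    QueueUrl='https://sqs.us-east-1.amazonaws.com/123456789012/task-queue',
    MessageBody=json.dumps({'action': 'alert_expiration', 'subscriptionID': 'XYZ123'}),
    DelaySeconds=900  # deliver 15 minutes from now
)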

Renato Byrro

That is a clever idea, thanks for sharing Chris! ;)