Event Handling in AWS using SNS, SQS, and Lambda

Frank Rosner on June 29, 2018

This blog post is part of my AWS series: Infrastructure as Code - Managing AWS With Terraform, Deploying an HTTP API on AWS using Lambda and API G...

From reading the AWS documentation I understand you can trigger a Lambda from an SNS topic. If the Lambda fails, AWS will retry it twice some time later. You can configure SNS/Lambda to send the failed messages (after the 3 attempts) to a dead-letter queue. Why/when should I put an SQS queue between the SNS topic and the Lambda?


When putting SQS between SNS and Lambda you don't need a dead-letter queue (DLQ). In my opinion it depends on what you are doing with the events in the DLQ. If those events are treated differently, then a DLQ makes sense. But if you are feeding them back into the input queue because you want to keep trying to process them until they succeed, just put SQS in front in the first place.

Does that make sense?
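To make the DLQ variant concrete, here is a sketch in Python. The queue names are hypothetical, and `sqs` is assumed to be a boto3 SQS client; the key part is the `RedrivePolicy` attribute, which tells SQS to move a message to the dead-letter queue after it has been received (and not deleted) three times:

```python
import json

def redrive_policy(dlq_arn: str, max_receive_count: int = 3) -> str:
    """Build the SQS RedrivePolicy attribute value: after
    max_receive_count failed receives, SQS moves the message
    to the dead-letter queue."""
    return json.dumps(
        {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": str(max_receive_count)}
    )

def create_queue_with_dlq(sqs, name: str, dlq_arn: str) -> str:
    """Create an input queue whose poison messages land in the DLQ.
    `sqs` is a boto3 SQS client; the queue name is a placeholder."""
    return sqs.create_queue(
        QueueName=name,
        Attributes={"RedrivePolicy": redrive_policy(dlq_arn)},
    )["QueueUrl"]
```

With this in place, messages that repeatedly fail end up in the DLQ instead of cycling through the input queue forever.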


Hi Frank,

My thinking is that if a message failed to be processed 3 times (with delays between retries), there must be something wrong with the message and it needs to be verified more or less manually, or it will keep going around the SQS queue forever. Another case can be that an external component (3rd party service, database, ...) is down, but then do we want to immediately put the message back into SQS and try to reprocess it again? Better to move it to the DLQ and wait a little bit longer.

Thanks for your input.

The message processing might fail because there is something wrong with the message, yes. It might, however, also fail because a downstream system is not available and the Lambda fails because of this. In that case it's a useful retry mechanism.

In the end it's an architecture decision and there are no right solutions, only trade-offs. But you're asking the right questions!


I read that decoupling is very important in serverless architectures: only do minimal work in synchronous Lambda calls, etc.

And I heard AWS is working on SQS Lambda triggers, which will ease a lot of pain in that regard :)


I think putting a queue between S3 and Lambda has yet another benefit.
In case your Lambda fails to process an event from an S3 bucket that is directly connected to it, the event would be lost after the second retry. Putting a queue in between helps to persist the message for a longer time.
An alternative would be to add a DLQ to the Lambda itself (docs.aws.amazon.com/lambda/latest/...).
Not sure which one is better.
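The Lambda-side alternative can be set with a single configuration call. A sketch, assuming `lambda_client` is a boto3 Lambda client and using placeholder function and queue names:

```python
def attach_lambda_dlq(lambda_client, function_name: str, dlq_arn: str):
    """Attach a dead-letter queue directly to a Lambda function,
    so events that still fail after the built-in retries are sent
    to the DLQ instead of being dropped. `lambda_client` is a
    boto3 Lambda client; the names/ARNs are placeholders."""
    return lambda_client.update_function_configuration(
        FunctionName=function_name,
        DeadLetterConfig={"TargetArn": dlq_arn},
    )
```

This only applies to asynchronous invocations, so it covers the direct S3 -> Lambda case discussed above.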


I totally agree with you. I also find it hard to stay on top of all the different limits and behaviours. How many retries you get depends on the invocation type and resource... Definitely something to think about in advance.


Hey Frank,

I have a similar use case wherein I need a program on EC2 to subscribe to an SNS topic to get text files/data placed in an S3 bucket. This program also needs to parse the text file, create a .CSV record in DynamoDB, and save a copy in S3 (destination folder). Can you guide me on that?



I assume you have already set up a bucket notification that publishes messages to an SNS topic. If you want to use SNS to receive the notifications (because you want to fan out), then you can subscribe either via HTTP or via SQS.

In case you are using HTTP, the EC2 instance needs to expose an HTTP endpoint. If you want a more resilient and secure setup then you can subscribe with SQS and make the EC2 instance poll from SQS.
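Polling from the EC2 instance could look roughly like the sketch below. The queue URL and message handler are placeholders, and `sqs` is assumed to be a boto3 SQS client; the long-polling wait time avoids busy loops of empty responses:

```python
def poll_once(sqs, queue_url: str, handle_message) -> int:
    """One long-poll iteration against SQS. `sqs` is a boto3 SQS
    client; `queue_url` and `handle_message` are supplied by the
    caller. Returns the number of messages received."""
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling reduces empty responses
    )
    messages = resp.get("Messages", [])
    for msg in messages:
        handle_message(msg["Body"])
        # Delete only after successful processing; otherwise the
        # message becomes visible again and is retried.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
    return len(messages)
```

On the instance you would call `poll_once` in a loop; deleting the message only after processing gives you the at-least-once retry behaviour discussed above.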

In case you don't have multiple subscribers on your SNS topic, you may also notify SQS directly from S3 and skip SNS in between.

Another option is to replace your EC2 instance with a Lambda function and then either directly notify the Lambda or make the Lambda poll the SQS queue (see dev.to/frosnerd/understanding-the-...).

Without knowing more of your context it is hard to recommend something concrete, but if possible I'd go for the following setup if I were you: S3 -> SQS -> Lambda. That's easy to manage and gives a reasonable amount of resilience. Another, simpler option would be S3 -> Lambda, but then you might lose events in case your Lambda function is broken, as S3 will only retry a couple of times when it invokes the Lambda synchronously.
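The Lambda side of the S3 -> SQS -> Lambda setup could be sketched like this, assuming the queue receives standard S3 event notifications (the DynamoDB write and S3 copy steps from your use case are only indicated as comments):

```python
import json

def handler(event, context):
    """Lambda handler for an SQS trigger where the queue receives
    S3 event notifications. Each SQS record's body is the JSON
    notification sent by S3 (sketch; bucket/key are whatever was
    uploaded). Returns the (bucket, key) pairs processed."""
    processed = []
    for record in event["Records"]:            # one entry per SQS message
        s3_event = json.loads(record["body"])  # SQS body holds the S3 notification
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            # ... parse the text file, write the record to DynamoDB,
            # and copy the file to the destination folder here ...
            processed.append((bucket, key))
    return processed
```

If a message fails, it stays on the queue and is retried until it succeeds or is moved to a DLQ, which is exactly the resilience benefit mentioned above.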

I hope that helps!



Thanks a lot for the details Frank. To give you more context, the diagram below might help with the requirement:


Great insight Frank. Thanks.

I need to provide a solution through Lambda that notifies an SNS topic when a file is received in S3 outside of a specific time window. Files are received in S3 at a specific time (4 am - 5 am EST). Do you have any custom solution or code I can use in my Lambda function to send out an SNS notification when files are received after 5 am?




I see two options:

1) Configure an S3 event notification towards a Lambda function and in that Lambda check the time of the PUT event. If the time is not compliant, make the Lambda publish to your SNS topic.

2) Configure a scheduled event in CloudWatch that periodically invokes your Lambda (e.g. every hour or every day) and then make the Lambda check for any files that have been uploaded at a non-compliant time.

Option 1 is more real-time but might generate a lot of noise and cost in the case of many files. Option 2 has some lag but you can use it to build a summary about all the files that have been uploaded quite easily.
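Option 1 could be sketched like this in Python. The topic ARN is a placeholder, and the fixed UTC-5 offset is an assumption (S3 event times are UTC; proper EST/EDT handling would need a timezone library):

```python
from datetime import datetime, time, timedelta

WINDOW_START = time(4, 0)  # compliant window start, EST
WINDOW_END = time(5, 0)    # compliant window end, EST
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:late-uploads"  # placeholder

def is_compliant(event_time_utc: datetime) -> bool:
    """Shift the UTC event time to EST (UTC-5, DST ignored for
    this sketch) and check the 4 am - 5 am window."""
    local = event_time_utc + timedelta(hours=-5)
    return WINDOW_START <= local.time() <= WINDOW_END

def handler(event, context, sns=None):
    """S3 event notification handler: publish an SNS alert for each
    object uploaded outside the compliant window. `sns` is a boto3
    SNS client (left out when testing locally)."""
    late = []
    for record in event["Records"]:
        t = datetime.strptime(record["eventTime"], "%Y-%m-%dT%H:%M:%S.%fZ")
        key = record["s3"]["object"]["key"]
        if not is_compliant(t):
            late.append(key)
            if sns is not None:
                sns.publish(TopicArn=TOPIC_ARN, Message=f"Late upload: {key}")
    return late
```

For option 2 the same `is_compliant` check would run over a listing of the bucket's objects instead of a single event.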

What do you think?


Exactly what I need. I'm going to use Python instead of Java, and Microsoft Teams instead of Slack.

BTW: you may use Lambda layers to maintain your dependencies and code separately.


The AWS documentation is missing the aws_lambda_permission; I noticed it, and that is actually what brought me here. Thanks for confirming it. Awesome post overall, thanks a lot!!


Frank, thanks for the very nice and useful post :-)
