In this Article
- Overview of Event Driven Architecture
- Event Driven Architecture Common Model
- AWS messaging services (use case, model, throughput, pricing)
- SQS
- SNS
- Combined Use Case: SNS and SQS Integration
- EventBridge
- Kinesis Data Streams
- Kafka Overview
- Very important: when to choose which service
Overview of Event Driven Architecture ✨
Event-Driven Architecture (EDA) is a design paradigm where systems communicate and respond to events in real-time.
This architecture promotes loose coupling, scalability, and flexibility, as components are only connected through the events they produce and consume.
Event-Driven Architecture is widely used in systems requiring high responsiveness and real-time processing, such as financial trading platforms, IoT networks, and customer service applications.
In Event-Driven Architecture (EDA), the main components are the Producer, the Event Broker, and the Consumer:
- Producer: This is the source that generates and emits events. Producers can be anything from applications, services, or devices that detect a change in state or trigger an event.
- Event Broker: This intermediary handles the transmission of events from producers to consumers. It ensures decoupling by managing the distribution and routing of events, often providing features like event filtering, persistence, and scalability.
- Consumer: Listens for and processes events received from the event broker. Consumers act on the event data, performing tasks such as updating systems, triggering workflows, or generating responses.
Event Driven Architecture Common Model
There are many models in EDA, but the Point-to-Point and Pub/Sub models are the most commonly used.
1. Point to Point Model
The Point-to-Point model ensures reliable and direct communication between a single producer and a single consumer, enhancing transactional processing and delivery guarantees.
In this model, a producer sends messages to a specific queue. A consumer retrieves and processes these messages from the queue. The message broker manages the queue and ensures each message is delivered to only one consumer.
This model is particularly helpful for scenarios where each message needs to be processed by a single recipient, ensuring reliable message delivery and simplifying message routing.
Ex. SQS
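The point-to-point flow can be sketched with a small in-memory simulation (Python's `queue.Queue` standing in for the broker's queue; this is a local illustration of the model, not a real SQS call):

```python
from queue import Queue

# A local stand-in for the broker's queue (SQS-like semantics:
# each message is delivered to exactly one consumer).
order_queue = Queue()

# Producer: enqueue messages.
for order_id in (101, 102, 103):
    order_queue.put({"order_id": order_id, "status": "placed"})

# Consumer: pull and process each message exactly once.
processed = []
while not order_queue.empty():
    message = order_queue.get()
    processed.append(message["order_id"])
    order_queue.task_done()

print(processed)  # -> [101, 102, 103]
```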
2. Pub/Sub Model in EDA
The Pub/Sub model is used in EDA to decouple producers and consumers, enhancing scalability and flexibility. It allows efficient, real-time communication by distributing messages through topics managed by an event broker.
In the Pub/Sub (Publish/Subscribe) model, a publisher sends messages to a specific topic. Consumers subscribe to this topic to receive and process the messages. The event broker manages the topics and ensures that messages from publishers are delivered to all subscribed consumers.
In this way, the Pub/Sub model keeps systems flexible and scalable by decoupling producers and consumers, enabling efficient real-time communication through managed topics.
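A minimal in-memory sketch of the Pub/Sub model (a toy broker for illustration, not SNS itself):

```python
from collections import defaultdict

# A toy in-memory broker illustrating the Pub/Sub model.
class Broker:
    def __init__(self):
        # topic name -> list of subscriber callbacks
        self.topics = defaultdict(list)

    def subscribe(self, topic, callback):
        self.topics[topic].append(callback)

    def publish(self, topic, message):
        # The broker delivers each published message to every subscriber.
        for callback in self.topics[topic]:
            callback(message)

broker = Broker()
received_by_billing, received_by_email = [], []
broker.subscribe("orders", received_by_billing.append)
broker.subscribe("orders", received_by_email.append)

# One publish reaches both subscribers.
broker.publish("orders", {"order_id": 7})
print(received_by_billing, received_by_email)
```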
🚀 AWS messaging services
AWS services useful in EDA include Amazon SQS, Amazon SNS, Amazon EventBridge, Amazon Kinesis, Amazon MSK (Managed Streaming for Apache Kafka), AWS Lambda, Amazon MQ (for Apache ActiveMQ and RabbitMQ), and many more.
Simple Queue Service (SQS)
Amazon SQS is a queuing service useful for communication between applications, microservices, and distributed systems.
Use Case of SQS ✴️
- Process asynchronous tasks
  - Queues enable the processing of asynchronous tasks effectively. By using a queue, we can poll messages at any time, allowing for flexible task management and execution.
- Decoupling microservices
  - When two services communicate via a queue, they are decoupled, eliminating direct dependencies. This allows each service to operate independently, enhancing system scalability and resilience.
- Batch processing
  - SQS supports batch operations, so we can process queue messages in batches and optimise resource utilization.
- Job scheduling
  - If we add messages to the queue throughout the day and want to process them all at one time, we can schedule an event and, via the polling mechanism, batch-process all the data in the queue.
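The job-scheduling pattern above can be sketched locally: messages accumulate in a queue during the day, and a scheduled job drains them in batches. (SQS caps a single `ReceiveMessage` call at 10 messages; here a plain `queue.Queue` stands in for the real queue.)

```python
from queue import Queue, Empty

MAX_BATCH_SIZE = 10  # SQS ReceiveMessage returns at most 10 messages per call

def poll_batch(q, max_messages=MAX_BATCH_SIZE):
    """Drain up to max_messages from the queue, like one receive call."""
    batch = []
    while len(batch) < max_messages:
        try:
            batch.append(q.get_nowait())
        except Empty:
            break
    return batch

jobs = Queue()
for i in range(23):          # messages accumulated throughout the day
    jobs.put(f"job-{i}")

# The scheduled job polls in batches until the queue is empty.
batches = []
while True:
    batch = poll_batch(jobs)
    if not batch:
        break
    batches.append(batch)

print([len(b) for b in batches])  # -> [10, 10, 3]
```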
Model / Mechanism
- Works on a pull mechanism.
Consumers
- It supports only one consumer per message.
- Messages can be consumed via the AWS SDK or AWS Lambda.
Supports Ordering Mechanism
- Yes, it supports FIFO queues to process items in order.
Conditional Message Filtering
- SQS doesn’t support a conditional message-filtering mechanism.
Encryption
- Supports message encryption using KMS.
Throughput
- SQS standard queues have unlimited throughput.
- SQS FIFO queues support 3,000 messages per second with batching or 300 messages per second without batching.
Dead Letter Queue
- Yes, it supports Dead Letter Queue
Pricing 🤑
- $0.40 per 1 million requests (Standard Queue)
- $0.50 per 1 million requests (FIFO Queue)
- Data Outbound Charge
  - $0.09 per GB
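Using the list prices above, a rough monthly estimate of the request charge can be computed (request charges only; data-transfer charges and the free tier are ignored):

```python
def sqs_request_cost(requests, fifo=False):
    """Request charge in dollars, using the per-million prices listed above."""
    price_per_million = 0.50 if fifo else 0.40
    return requests / 1_000_000 * price_per_million

# e.g. 100 million standard-queue requests in a month:
print(round(sqs_request_cost(100_000_000), 2))              # -> 40.0
print(round(sqs_request_cost(10_000_000, fifo=True), 2))    # -> 5.0
```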
Simple Notification Service (SNS)
SNS (Simple Notification Service) is a fully managed pub/sub solution service offered by AWS. With SNS, users can create multiple topics and subscribers, and each topic can be connected with multiple subscribers.
Use Cases of SNS ✴️
- Fan-Out System
  - Distribute a single message to multiple recipients efficiently.
- Mobile Push Notifications
  - Send real-time updates to mobile devices across various platforms.
- System Monitoring Alerts
  - Trigger alerts from monitoring tools based on specific events or thresholds.
- Trigger Different Workflows
  - Initiate diverse workflows by sending messages to various endpoints based on events.
Model / Mechanism
- Works on a push mechanism.
Consumers
- It supports multiple consumers per message.
- It supports Kinesis Data Firehose, Lambda, SQS, email, HTTP/S, application notifications, and SMS as consumers.
Supports Ordering Mechanism
- Yes, it supports FIFO topics to process items in order.
Conditional Message Filtering
- SNS supports conditional message filtering via subscription filter policies.
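SNS filtering works by attaching a filter policy to a subscription and matching it against a message's attributes. The basic exact-match case can be sketched locally like this (a simplified model; real SNS policies also support prefix, numeric, and other operators):

```python
def matches(filter_policy, attributes):
    """Simplified SNS filter-policy check: every policy key must be present
    in the message attributes, and its value must be one of the allowed values."""
    return all(
        key in attributes and attributes[key] in allowed
        for key, allowed in filter_policy.items()
    )

policy = {"event_type": ["order_placed", "order_cancelled"]}

print(matches(policy, {"event_type": "order_placed"}))   # True  -> delivered
print(matches(policy, {"event_type": "order_shipped"}))  # False -> filtered out
```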
Encryption
- Supports message encryption using KMS.
Dead Letter Queue
- Yes, it supports Dead Letter Queue
Throughput
- SNS standard topics have nearly unlimited throughput.
- SNS FIFO topics support 300 messages per second or 10 MB per second per topic.
Pricing 🤑
- Standard Topic
  - $0.50 per 1 million (Mobile Push Notifications)
  - $0.60 per 1 million (HTTP/S Requests)
  - $2.00 per 100,000 notifications
  - No charge for SQS and Lambda
  - $0.19 per 1 million notifications (Amazon Kinesis Data Firehose)
  - Data Outbound Charge
    - $0.09 per GB
- FIFO Topic
  - Publish and PublishBatch API requests: $0.30 per 1 million requests plus $0.017 per GB of payload data
  - Subscription messages: $0.01 per 1 million messages plus $0.001 per GB of payload data
Combined Use Case: SNS and SQS Integration 🤝
- Fan-out with SQS:
  - Use Case: Distributing a message to multiple queues for parallel processing.
  - Example: An e-commerce platform needs to update inventory, process billing, and send a confirmation email when an order is placed. The order service publishes a message to an SNS topic, which then fans out to multiple SQS queues. Each queue is processed by different services responsible for inventory, billing, and email notifications, ensuring that the tasks are handled independently and concurrently.
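The fan-out pattern above can be sketched locally, with plain `queue.Queue` objects standing in for the SQS queues subscribed to one SNS topic (a simulation of the delivery semantics, not real AWS calls):

```python
from queue import Queue

# Local stand-ins for three SQS queues subscribed to one SNS topic.
inventory_q, billing_q, email_q = Queue(), Queue(), Queue()
subscriptions = [inventory_q, billing_q, email_q]

def publish_order(order):
    # The topic fans the same message out to every subscribed queue.
    for q in subscriptions:
        q.put(order)

publish_order({"order_id": 42, "amount": 99.90})

# Each downstream service consumes from its own queue independently.
results = [q.get()["order_id"] for q in subscriptions]
print(results)  # -> [42, 42, 42]
```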
EventBridge
AWS EventBridge is a service that connects multiple AWS services based on events, facilitating event-driven architecture management. With EventBridge, you can send custom events from SaaS applications to an event bus, schedule tasks, and monitor AWS services. This enables seamless integration, automation, and real-time monitoring within your cloud environment.
Use Cases of EventBridge ✴️
- Building Serverless Event-Driven Architecture:
  - AWS EventBridge allows the setup of a full event-driven infrastructure when using AWS services as both producers and consumers. AWS service events can act as sources, while AWS services can also be targets for event processing.
- SaaS Integration with AWS Services:
  - EventBridge supports custom events, enabling seamless SaaS integration. You can send custom events to an EventBridge event bus, facilitating communication between SaaS applications and AWS services.
- Real-Time Monitoring and Alerting:
  - EventBridge can monitor actions or events in real-time across various services. Based on these events, you can generate alerts or create CloudWatch logs, enhancing your system's observability and responsiveness.
- Scheduling Tasks:
  - EventBridge allows the scheduling of tasks using cron or rate expressions. This enables you to automate the invocation of AWS services at specified times or intervals, ensuring timely execution of routine tasks.
Model / Mechanism
- Works on an event-bus model.
- Uses a push mechanism to invoke targets.
Consumers
- It supports multiple consumers per rule.
- We can set many AWS services as targets, as well as HTTP/S endpoints if we want to call external APIs.
- e.g., invoke a Step Functions state machine, start a Glue workflow.
Supports Ordering Mechanism
- No, there is no ordering guarantee.
Conditional Message Filtering
- EventBridge supports event filtering and transformation mechanisms.
- We can define schemas in the schema registry and filter messages based on a schema.
- We can use EventBridge Pipes to filter and transform data.
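EventBridge routes events by matching them against rule patterns, where each pattern field lists the accepted values. The basic matching behaviour can be sketched locally (simplified; real patterns also support prefix, numeric, and anything-but operators):

```python
def event_matches(pattern, event):
    """Simplified EventBridge rule match: each pattern field must exist in the
    event, and the event's value must appear in the pattern's list of values.
    Nested dicts are matched recursively."""
    for key, expected in pattern.items():
        if key not in event:
            return False
        if isinstance(expected, dict):
            if not event_matches(expected, event[key]):
                return False
        elif event[key] not in expected:
            return False
    return True

rule = {"source": ["aws.ec2"], "detail": {"state": ["stopped", "terminated"]}}

ev = {"source": "aws.ec2", "detail": {"state": "stopped"}}
print(event_matches(rule, ev))  # True -> the rule's targets are invoked
```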
Encryption
- doesn’t support message encryption using KMS
Archive and Event Replay
- We can archive events and replay them later when needed.
Throughput
- EventBridge has nearly unlimited throughput for AWS service events.
- In all regions, PutPartnerEvents (used by SaaS providers to write events to an event bus) has a default soft limit of 1,400 requests per second, with bursts up to 3,600 requests per second.
Pricing 🤑
- EventBus
  - Free for AWS service events
  - $1.00 per 1 million custom, SaaS, or cross-account events
  - Each 64 KB chunk of payload is billed as 1 event (a 150 KB payload counts as 3 events)
- EventBridge Pipes
  - $0.40 per 1 million requests (event count after filtering)
- Event Replay
  - $0.023 per GB for event storage
  - $0.10 per GB for archive processing
- Schema Registry
  - Using the schema registry for AWS schemas and creating custom schemas is free
  - $0.10 per million events (discovery charge only)
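The 64 KB chunk rule above translates directly into a billing calculation (for custom/SaaS/cross-account events only; AWS service events are free):

```python
import math

PRICE_PER_MILLION = 1.00   # custom / SaaS / cross-account events
CHUNK_KB = 64              # every 64 KB of payload counts as one billed event

def billed_events(payload_kb, count=1):
    """Number of billed events for `count` messages of `payload_kb` each."""
    return math.ceil(payload_kb / CHUNK_KB) * count

def event_bus_cost(payload_kb, count):
    """Event-bus charge in dollars."""
    return billed_events(payload_kb, count) / 1_000_000 * PRICE_PER_MILLION

print(billed_events(150))                        # a 150 KB payload bills as 3 events
print(round(event_bus_cost(150, 1_000_000), 2))  # -> 3.0 (dollars)
```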
Amazon Kinesis Data Streams
Amazon Kinesis Data Streams (KDS) is used for real-time processing of streaming data at massive scale. When we need real-time data processing and analytics at scale, Kinesis is useful.
Use cases of Kinesis ✴️
- Real-Time Analytics:
  - An e-commerce platform can use Kinesis Data Streams to capture and analyse clickstream data to understand user behaviour and personalise recommendations in real-time.
- Log and Event Data Collection:
  - We can use KDS to ingest and monitor application logs and system events to detect anomalies and react quickly to potential issues.
- IoT Data Ingestion:
  - Manufacturing companies can stream sensor data from IoT devices to monitor equipment health, predict maintenance needs, and optimise operations.
- Financial Market Data Processing:
  - Financial services can use KDS to process market data in real-time to detect trading opportunities and risks.
Model / Mechanism
- Works on a data-stream model.
- Works on a pull mechanism.
Consumers
- AWS Lambda, Amazon Kinesis Data Streams, Kinesis Data Analytics, Kinesis Data Firehose, and the KCL (Kinesis Client Library)
Archive and Event Replay
- Amazon Kinesis Data Streams' archive and replay features enable long-term data retention, fault recovery, and compliance by securely storing data in Amazon S3 and allowing for easy reprocessing.
Throughput
- On-Demand Mode
  - Read capacity: handles up to 400 MB per second
  - Write capacity: handles up to 200 MB/second and 200,000 records/second
- Provisioned Mode (per shard)
  - Read capacity: maximum 2 MB per second
  - Write capacity: 1 MB/second and 1,000 records/second
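In provisioned mode, the per-shard limits above determine how many shards a workload needs. A rough sizing calculation (illustrative; the example numbers are assumptions):

```python
import math

# Per-shard limits in provisioned mode (from the figures above).
WRITE_MB_PER_SHARD = 1
WRITE_RECORDS_PER_SHARD = 1_000
READ_MB_PER_SHARD = 2

def shards_needed(write_mb_s, write_records_s, read_mb_s):
    """The shard count must satisfy write bandwidth, write record rate,
    and read bandwidth simultaneously, so take the max of the three."""
    return max(
        math.ceil(write_mb_s / WRITE_MB_PER_SHARD),
        math.ceil(write_records_s / WRITE_RECORDS_PER_SHARD),
        math.ceil(read_mb_s / READ_MB_PER_SHARD),
    )

# e.g. 5 MB/s writes, 4,000 records/s, 8 MB/s reads:
print(shards_needed(5, 4_000, 8))  # -> 5
```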
Drawbacks
- Shards need to be managed manually in provisioned mode.
Pricing 🤑
- $0.015 per hour per shard (Provisioned Mode)
- $0.04 per hour per shard (On-Demand Mode)
- $0.014 per 1 million PUT payload units
- NOTE: There are other charges as well, such as data retention and enhanced fan-out.
Apache Kafka (Amazon MSK)
Apache Kafka is a distributed, fault-tolerant, reliable, and durable streaming platform used for real-time data pipelines. Initially developed by LinkedIn, it later became open-source.
Kafka boasts high throughput and low latency, making it especially useful when data consistency and availability are crucial. It efficiently handles large volumes of data, enabling organizations to build robust, real-time data processing and analytics systems. Kafka's architecture supports horizontal scalability, ensuring that it can grow with the needs of the application.
Use Cases of Apache Kafka ✴️
- Real-time analytics and monitoring
- Event sourcing and event-driven architectures
- Log aggregation and processing
- Stream processing and transformation
- Data integration and ETL (Extract, Transform, Load) pipelines
Model / Mechanism
- Pub/sub Model
Consumers
- AWS Lambda, Kinesis Data Analytics, Kinesis Data Firehose, EMR, and Glue can connect to MSK directly, while S3, DynamoDB, and Redshift can be connected via Kafka Connect.
Supports Ordering Mechanism
- Yes, ordering is guaranteed within a partition.
Conditional Message Filtering
- It doesn’t support message filtering at the broker level; we have to handle filtering at the consumer level.
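Since the broker doesn't filter, the consumer discards unwanted records itself. The idea can be sketched locally (a plain list stands in for the records a real Kafka consumer's poll loop would return):

```python
# Records as a consumer might receive them from a topic
# (a plain list stands in for a real Kafka consumer's poll loop).
records = [
    {"type": "order_placed", "order_id": 1},
    {"type": "heartbeat"},
    {"type": "order_placed", "order_id": 2},
]

# Broker-side filtering isn't available, so the consumer filters after receipt.
orders = [r for r in records if r.get("type") == "order_placed"]
print([r["order_id"] for r in orders])  # -> [1, 2]
```

Note the cost implication: every record still crosses the network to the consumer, even the ones it throws away.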
Encryption
- Supports message encryption using KMS.
Archive and Event Replay
- Amazon MSK retains all published messages for a configurable retention period.
Throughput
- Kafka provides high throughput and is capable of handling large volumes of streaming data with low latency.
Pricing
- Broker Instance charges
- $0.204 (price per hour for a kafka.m7g.large)
- $0.21 (price per hour for a kafka.m5.large)
- Storage charge
- $0.10 (the price per GB-month in US East region)
📗 Very important: when to choose which service?
- Asynchronous job processing: Use Amazon SQS. Ideal for decoupling microservices and buffering requests.
  - If the number of events per second is low or medium, SQS is usually the recommended service.
- Sending notifications or invoking services: Use Amazon SNS. Perfect for sending notifications to multiple recipients or invoking services with pub/sub messaging.
- Triggering services based on events: Use Amazon EventBridge. Best for integrating AWS services and custom applications through event-driven architectures.
  - EventBridge is highly recommended when integrating SaaS applications with AWS services is a requirement.
- Handling high request rates and event-driven data ingestion: Use Amazon Kinesis. Suitable for real-time data streaming and analytics with the ability to scale by adding shards.
  - Kinesis is costlier than the other services, as its cost depends on the number of active shards.
- Live streaming and scalable, low-latency data pipelines: Use Amazon MSK (Managed Streaming for Apache Kafka). Excellent for building scalable, real-time data streaming applications with Apache Kafka.
  - Apache Kafka is highly recommended when millions or billions of requests occur at a time.
➕ Additional Considerations
- Message Ordering and Deduplication: If you require strict message ordering and deduplication, consider using SQS FIFO Queues, Kinesis or Kafka.
- Multiple Consumer Support: For scenarios where multiple consumers need to process the same stream of data, SNS, Kinesis, or Kafka is preferred.
- Complex Event Processing: For applications needing complex event processing and routing, EventBridge provides advanced capabilities for rule-based event handling.
Summary Table
Feature | SQS | SNS | EventBridge | Kinesis | Kafka (MSK) |
---|---|---|---|---|---|
Message Filtering | No | Yes | Yes | No | No |
Order | Yes (FIFO Queue) | Yes (FIFO Topic) | No | Yes | Yes |
Throughput | Low (FIFO) | Medium | Medium | High | High |
Latency | Medium | Low | Low | Low | Low |
Durability | High | High | High | High | High |
Integration | AWS Services | AWS Services | AWS Services | Custom Applications & AWS Services | Custom Applications & AWS Lambda |
SaaS support | No | No | Yes | No | No |
Data Persistence | Yes | No | Yes | Yes | Yes |
Pricing | Low | Low | Low | High | High |