
AWS Kinesis - Stream Storage Layer

In this blog post, we will discuss the AWS Kinesis Data Stream service: a high-level overview of the service, its architecture, core components, and use cases.

AWS Kinesis has the following sub-services:

  1. AWS Kinesis Data Stream (KDS)
  2. AWS Kinesis Data Firehose
  3. AWS Kinesis Data Analytics
  4. AWS Kinesis Video Stream

Kinesis Layers

Our primary discussion will be around the Kinesis Data Stream, the stream storage layer. We will cover the overview, the architecture, and other necessary details.

AWS Kinesis Data Stream (KDS)
KDS is an elastically scalable service for near-real-time processing of streaming big data. It is a data ingestion layer that stores data from 24 hours up to 8,760 hours (365 days); by default, retention is 24 hours. Data inside KDS is immutable: once stored, it cannot be modified, and it cannot be removed until it expires.
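To make the retention window concrete, here is a minimal sketch using boto3 (assuming configured AWS credentials; the stream name `orders-stream` is a hypothetical example) that clamps a requested retention period to the valid KDS range before calling `increase_stream_retention_period`:

```python
DEFAULT_RETENTION_HOURS = 24   # KDS default retention
MAX_RETENTION_HOURS = 8760     # 365 days, the maximum

def clamp_retention(hours: int) -> int:
    """Clamp a requested retention period to the valid KDS range."""
    return max(DEFAULT_RETENTION_HOURS, min(hours, MAX_RETENTION_HOURS))

def extend_retention(stream_name: str, hours: int) -> None:
    """Sketch: extend a stream's retention; assumes AWS credentials."""
    import boto3  # imported lazily so the sketch runs without AWS set up
    client = boto3.client("kinesis")
    client.increase_stream_retention_period(
        StreamName=stream_name,
        RetentionPeriodHours=clamp_retention(hours),
    )

# extend_retention("orders-stream", 168)  # e.g. keep records for 7 days
```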

The KDS is composed of two layers.

  • Storage Layer
  • Processing Layer

1. Storage Layer
This layer is responsible for temporarily storing and managing the incoming data stream before it goes to the processing layer for further processing.
2. Processing Layer
This layer is fed by the storage layer and is responsible for analyzing and transforming the data in real time or near-real time. After processing, this layer notifies the storage layer to delete data that is no longer needed.

Kinesis Data Stream - Architecture
A KDS is composed of several components, which we will discuss one by one, along with how they relate to one another.
A KDS is composed of one or more shards.

Shards: a shard contains a sequence of data records and supports up to 5 read transactions per second. Each shard supports a data write rate of 1 MB/s or 1,000 records per second, while the data read rate is 2 MB/s.

Data Records: each record is composed of a sequence number, a partition key, and a data blob.

Inside Shard Story

The partition key inside a data record determines which shard the record goes to, while the blob is the original data itself.

Note: The sequence number will be unique within the partition key.
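The routing described above can be sketched in Python: Kinesis MD5-hashes the partition key into a 128-bit integer, and each shard owns a contiguous slice of that hash space. The equal-width split below is a simplifying assumption for illustration:

```python
import hashlib

def hash_key(partition_key: str) -> int:
    """128-bit MD5 hash of the partition key, as Kinesis computes it."""
    return int.from_bytes(hashlib.md5(partition_key.encode()).digest(), "big")

def pick_shard(partition_key: str, num_shards: int) -> int:
    """Map the hash into one of num_shards equal hash-key ranges."""
    hash_space = 2 ** 128
    return hash_key(partition_key) * num_shards // hash_space

# Records with the same partition key always land in the same shard,
# which is why ordering is only guaranteed per partition key.
assert pick_shard("user-42", 4) == pick_shard("user-42", 4)
```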

AWS Kinesis Data Stream - Architecture by [AWS](https://www.amazon.com/)

The producer will put the records into the data stream.
Consumers will get the records from the data stream; consumer applications are also known as KDS applications.

The consumer applications generally run on a fleet of EC2 instances. There are two types of consumers in KDS:

  1. Classic/Shared Fan-out consumers (SFO)
  2. Enhanced Fan-out consumers (EFO)

The SFO works on a poll/pull mechanism, where the consumer extracts records from the shard, whereas the EFO works on a push mechanism: the consumer subscribes to the shard, and the shard automatically pushes the data to the consumer application.
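A minimal sketch of a shared fan-out (pull) consumer using boto3's `get_shard_iterator` and `get_records` calls; the stream and shard names are hypothetical, and AWS credentials are assumed to be configured:

```python
import time

MAX_READS_PER_SECOND = 5              # per-shard GetRecords limit
POLL_INTERVAL = 1 / MAX_READS_PER_SECOND

def poll_shard(stream_name: str, shard_id: str) -> None:
    """Sketch: continuously pull records from one shard (SFO style)."""
    import boto3  # imported lazily so the sketch is self-contained
    client = boto3.client("kinesis")
    iterator = client.get_shard_iterator(
        StreamName=stream_name,
        ShardId=shard_id,
        ShardIteratorType="TRIM_HORIZON",  # start from the oldest record
    )["ShardIterator"]
    while iterator:
        resp = client.get_records(ShardIterator=iterator, Limit=100)
        for record in resp["Records"]:
            print(record["PartitionKey"], record["Data"])
        iterator = resp.get("NextShardIterator")
        time.sleep(POLL_INTERVAL)  # stay under 5 transactions/sec/shard

# poll_shard("orders-stream", "shardId-000000000000")
```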
The default read throughput of each shard is 2 MB/s. In shared fan-out, all consumers share that same 2 MB/s, but in enhanced fan-out, each consumer receives its own 2 MB/s. Suppose 5 consumers are all reading from Shard1: in SFO, the 5 consumers share a combined throughput of 2 MB/s, but in EFO they get 10 MB/s in total, as each one has a separate 2 MB/s.
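The arithmetic in this example can be written out as a small helper:

```python
PER_SHARD_READ_MBPS = 2  # read limit per shard (SFO) / per consumer (EFO)

def read_throughput(consumers: int, enhanced: bool) -> dict:
    """Per-consumer and total read throughput for one shard."""
    if enhanced:
        per_consumer = PER_SHARD_READ_MBPS          # dedicated pipe each
    else:
        per_consumer = PER_SHARD_READ_MBPS / consumers  # shared pipe
    return {"per_consumer_mbps": per_consumer,
            "total_mbps": per_consumer * consumers}

# 5 consumers on one shard: SFO shares 2 MB/s, EFO totals 10 MB/s.
assert read_throughput(5, enhanced=True)["total_mbps"] == 10
```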

EFO vs SFO characteristics
EFO has a latency of about 70 ms, which stays the same for all consumers, while SFO has around 200 ms, which increases with each added consumer. For example, with 5 consumers, EFO latency remains at 70 ms, but SFO latency can increase up to 1,000 ms. EFO supports up to 20 consumers, while SFO is limited to around 5 consumers. The cost of EFO is also higher compared to SFO. Records are delivered over HTTP in SFO, while EFO uses HTTP/2.
Now we have a high-level overview of the Amazon Kinesis Data Stream service. Let's discuss the pricing model of KDS.

KDS Pricing
The following are the points to consider when using the KDS service, for which you'll be charged.

  • There's an hourly charge incurred based on the number of shards.
  • Separate charges apply when the producer puts data into the stream.
  • A per-hour charge applies when the data retention period is extended beyond the default 24 hours.
  • If enhanced fan-out is used, charges are based on the amount of data retrieved and the number of consumers.
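As a back-of-the-envelope illustration of the shard-hour charge, here is a tiny estimator. The rate below is a hypothetical placeholder, not a real AWS price; always check the AWS Kinesis pricing page for your region:

```python
def monthly_shard_cost(shards: int, hours: int = 730,
                       rate_per_shard_hour: float = 0.015) -> float:
    """Hourly shard charge accumulated over a month (~730 hours).

    rate_per_shard_hour is a HYPOTHETICAL example rate, not an
    actual AWS price.
    """
    return shards * hours * rate_per_shard_hour

# e.g. a 4-shard stream running a full month at the example rate
print(round(monthly_shard_cost(4), 2))
```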

