Amazon DynamoDB traces its roots to Dynamo, a technology developed internally in 2006 to manage workloads for the Amazon.com online store. DynamoDB was officially launched to the general public in 2012 as a fully managed, serverless, non-relational (NoSQL) database service delivering fast, reliable performance at internet scale, e.g. for online stores.
On 1 March, Amazon celebrates a decade of innovation with DynamoDB: a full day of learning with special guest appearances, including AWS Chief Evangelist Jeff Barr; Dr Swami Sivasubramanian, AWS Vice President of Analytics, Database and Machine Learning; AWS Data Hero Alex DeBrie, author of The DynamoDB Book; and Jeremy Daly, an AWS Serverless Hero and DynamoDB modeling instructor.
You may watch the replay on Serverless Land: A decade of innovation with DynamoDB
You may register and view the agenda at this link here, and watch the celebration and day of learning live streamed on Twitch.
After e-commerce platform outages on the Amazon.com online store, Dr Swami Sivasubramanian (then an Amazon intern, now AWS Vice President of Analytics, Databases and Machine Learning) conducted a root cause analysis and observed that relational databases were an inefficient solution for Amazon's workloads.
Other observations included:
- Most Amazon services only stored and retrieved data by primary key and did not need complex relational queries, yet relational databases could not handle internet scale, with millions of requests per second during events such as Amazon Prime Day
- The available replication technologies were limited and could not guarantee high availability
- Relational databases could not easily scale out or use partitioning for load balancing
In 2007, Peter Vosshall presented Dynamo: Amazon's Highly Available Key-value Store at the prestigious ACM Symposium on Operating Systems Principles (SOSP). The technical paper, co-authored by Giuseppe DeCandia, Deniz Hastorun, Madan Jampani, Gunavardhan Kakulapati, Avinash Lakshman, Alex Pilchin, Swaminathan Sivasubramanian, Peter Vosshall and Werner Vogels, proposed Dynamo as a novel solution that included:
- A highly available key-value data storage system
- Object versioning
- Efficient resource usage
- Simple scaling as data size grows or API requests increase
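The paper's core ideas, key-value access paired with object versioning, can be illustrated with a toy Python sketch. This is my own simplification for intuition, not Amazon's implementation:

```python
# Toy key-value store with object versioning, loosely inspired by the
# Dynamo paper's design goals (NOT Amazon's actual implementation).

class VersionedKVStore:
    def __init__(self):
        self._data = {}  # key -> list of (version, value) pairs

    def put(self, key, value):
        """Store a new version of the value under key."""
        versions = self._data.setdefault(key, [])
        versions.append((len(versions) + 1, value))

    def get(self, key):
        """Return the latest (version, value) pair for key, or None."""
        versions = self._data.get(key)
        return versions[-1] if versions else None

store = VersionedKVStore()
store.put("cart#user1", {"items": ["racket"]})
store.put("cart#user1", {"items": ["racket", "balls"]})
print(store.get("cart#user1"))  # -> (2, {'items': ['racket', 'balls']})
```

Keeping older versions around is what later allows diverged replicas to be reconciled, which the paper discusses in depth.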
You may download the Amazon research paper here
a) Performance at scale:
- Data and traffic for tables are spread across servers; an in-memory cache (DynamoDB Accelerator, DAX) can deliver microsecond latency
- Key-value model
- Global tables provide replication across multiple AWS Regions
- Amazon Kinesis Data Streams for DynamoDB (with Amazon Kinesis Data Firehose) can deliver DynamoDB data to other AWS services
- Read/write capacity can be provisioned, scaled on demand, and adjusted automatically with auto scaling
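The "spread across servers" idea in the first bullet can be sketched by hashing the partition key to choose a partition. This is a simplification for illustration; DynamoDB's actual partition management is internal and more sophisticated:

```python
import hashlib

def choose_partition(partition_key: str, num_partitions: int) -> int:
    """Hash the partition key and map it to one of num_partitions
    storage partitions, so keys spread evenly across servers."""
    digest = hashlib.md5(partition_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

# Different keys land on (usually) different partitions:
for player in ["Federer", "Nadal", "Williams"]:
    print(player, "-> partition", choose_partition(player, 4))
```

This is why a partition key with many distinct, evenly accessed values matters: it lets the hash distribute load across partitions instead of concentrating traffic on one "hot" partition.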
c) Enterprise ready:
- ACID transactions support
- Encrypts customer data at rest
- Point-in-time recovery to restore tables after accidental deletion or modification of data
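For the ACID transactions bullet, DynamoDB expresses transactional writes through the TransactWriteItems API. The sketch below only builds the request parameters (the boto3 call is commented out); the table, key and attribute names are hypothetical examples, not a required schema:

```python
# Sketch of a DynamoDB TransactWriteItems request: either both writes
# commit or neither does. Table and attribute names are hypothetical.
transact_params = {
    "TransactItems": [
        {
            "Put": {
                "TableName": "Orders",
                "Item": {"OrderId": {"S": "order-123"}, "Status": {"S": "PLACED"}},
            }
        },
        {
            "Update": {
                "TableName": "Inventory",
                "Key": {"Sku": {"S": "racket-42"}},
                "UpdateExpression": "SET Stock = Stock - :one",
                "ConditionExpression": "Stock > :zero",
                "ExpressionAttributeValues": {":one": {"N": "1"}, ":zero": {"N": "0"}},
            }
        },
    ]
}
# With real AWS credentials configured you would run:
# import boto3
# boto3.client("dynamodb").transact_write_items(**transact_params)
print(len(transact_params["TransactItems"]))  # -> 2
```

The condition expression on the inventory update means the whole transaction fails if stock would go negative, so the order is never placed without stock being decremented.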
| | Non-relational (NoSQL), e.g. DynamoDB | Relational (SQL) |
|---|---|---|
| Structure | Key-value, document (JSON), graph-based and column | Tables with rows and columns |
| Schema | Dynamic | Predefined |
| Scale | Single-digit millisecond performance at any scale; start small and grow to petabytes of data | Limited |
| Performance | Built for online transaction processing (OLTP) at scale, e.g. e-commerce platforms for online stores | Good for online analytical processing (OLAP) |
| Optimization | Optimized for read or write access patterns | Optimized for storage |
| Query type | Simple queries | Real-time, complex queries |
Amazon DynamoDB is built on a highly decentralized, loosely coupled service-oriented architecture that links multiple AWS services, so data storage is always available.
DynamoDB's tables can be global and replicated across multiple regions.
DynamoDB reflects Amazon engineers' continued focus on the Well-Architected Framework Performance Efficiency pillar to incorporate design considerations that include:
a) Democratize advanced technologies for greater flexibility to configure the data store
b) Go global in minutes with high availability
c) Use serverless architectures to achieve reliability, consistency, cost-effectiveness and performance
d) Experiment more often, e.g. deciding whether to resolve update conflicts during reads or during writes, to deliver the best customer experience
Hence DynamoDB provides an "always writeable" data store with an "always on" customer experience, managing the different states of the service behind the scenes.
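The "resolve update conflicts during reads" design choice can be sketched with the shopping-cart example from the Dynamo paper: when replicas diverge, the conflicting versions are merged at read time. This is a simplification of the paper's vector-clock approach:

```python
def merge_carts(replica_versions):
    """Read-time conflict resolution: union the items from all diverged
    replica versions so no addition is ever lost (deleted items may
    resurface, the trade-off noted in the Dynamo paper)."""
    merged = set()
    for cart in replica_versions:
        merged |= set(cart)
    return sorted(merged)

# Two replicas accepted writes independently during a network partition:
replica_a = ["racket", "balls"]
replica_b = ["racket", "shoes"]
print(merge_carts([replica_a, replica_b]))  # -> ['balls', 'racket', 'shoes']
```

Accepting both writes and reconciling later is what makes the store "always writeable": availability is favored over immediate consistency.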
Figure 1 of the technical paper published in 2007 shows the service-oriented architecture of Amazon's platform, which renders web page content from client requests with a response delivered within 300 milliseconds for 99.9% of requests.
In the 'This Is My Architecture' episode on heart-rate monitored workouts at Orangetheory Fitness, Using Chime SDK to Power Coach-led Virtual Workouts with OTlive, AWS services such as Amazon Cognito and Amazon API Gateway work together with DynamoDB to enhance the member and coaching experience, retrieving infrequently accessed historical member data with low latency and high performance when required.
Other use cases for Amazon DynamoDB include:
- Super Bowl advertising
- Amazon Prime Day online shopping
- Website clicks
- E-commerce order history
- Online gaming
- Time-series data storage
- Social media posts, e.g. tweets
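For the time-series use case, a common DynamoDB modeling pattern is a partition key per data source and a timestamp sort key, so readings for one device stay together and sort chronologically. The attribute names below are illustrative, not a fixed schema:

```python
from datetime import datetime, timezone

def make_reading_item(device_id: str, value: float) -> dict:
    """Build a time-series item in DynamoDB's attribute-value format:
    the partition key groups all readings for one device, and an
    ISO-8601 sort key keeps them in chronological order.
    Attribute names here are illustrative assumptions."""
    return {
        "DeviceId": {"S": device_id},  # partition key
        "Timestamp": {"S": datetime.now(timezone.utc).isoformat()},  # sort key
        "Value": {"N": str(value)},
    }

item = make_reading_item("sensor-1", 21.5)
print(sorted(item.keys()))  # -> ['DeviceId', 'Timestamp', 'Value']
```

Because ISO-8601 strings sort lexicographically in time order, a range query on the sort key retrieves a device's readings for any time window.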
At AWS re:Invent 2021, the key DynamoDB announcements included:
a) Amazon DynamoDB Standard-Infrequent Access (Standard-IA)
What's new in DynamoDB? with speaker: Chad Tindel, Principal NoSQL Solution Architect for Amazon
The Amazon DynamoDB Standard-Infrequent Access (Standard-IA) table class, introduced at AWS re:Invent 2021, is a new product feature that helps you reduce your DynamoDB costs by up to 60 percent for tables that store infrequently accessed data.
The DynamoDB Standard-IA table class offers lower storage costs than DynamoDB Standard tables, making it the most cost-effective option for tables where storage is the dominant cost. The DynamoDB Standard table class offers lower throughput costs than Standard-IA, and you can switch between table classes at any time without compromising table performance, availability or durability.
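The storage-versus-throughput trade-off can be made concrete with back-of-the-envelope arithmetic. The per-unit prices below are assumed placeholder numbers for illustration only; always check the current DynamoDB pricing page:

```python
# Illustrative monthly cost comparison. The per-unit prices are
# ASSUMED placeholder numbers, NOT actual AWS pricing.
STANDARD_STORAGE_PER_GB = 0.25      # assumed $/GB-month
STANDARD_IA_STORAGE_PER_GB = 0.10   # assumed $/GB-month (lower storage cost)
STANDARD_THROUGHPUT_MULT = 1.0      # baseline throughput cost multiplier
STANDARD_IA_THROUGHPUT_MULT = 1.25  # assumed higher throughput cost

def monthly_cost(gb: float, throughput_cost: float, table_class: str) -> float:
    """Total monthly cost = storage cost + throughput cost for the class."""
    if table_class == "STANDARD":
        return gb * STANDARD_STORAGE_PER_GB + throughput_cost * STANDARD_THROUGHPUT_MULT
    return gb * STANDARD_IA_STORAGE_PER_GB + throughput_cost * STANDARD_IA_THROUGHPUT_MULT

# A large, rarely accessed table: storage dominates, so Standard-IA wins.
print(monthly_cost(500, 10, "STANDARD"))     # -> 135.0
print(monthly_cost(500, 10, "STANDARD_IA"))  # -> 62.5
```

With these assumed numbers, a 500 GB table with little traffic costs far less as Standard-IA, while a small, heavily read table would favor the Standard class.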
Amazon DynamoDB Standard-Infrequent Access launch recap with the team at re:Invent 2021 with speakers Pete Naylor - Senior Tech Product Manager for Amazon DynamoDB and Elie Gharios, Senior Tech Product Manager for Amazon DynamoDB
Do you require long-term storage of your data?
Check whether you have a good use case for the new Amazon DynamoDB Standard-Infrequent Access (Standard-IA) table class, which can save your company money compared to the standard DynamoDB table class.
In her blog post here, Stephanie Gooch lists things to consider for your business use case:
a) Does your storage cost exceed 50% of your throughput cost (reads and writes)? This usage pattern is a good indicator that you have storage that you are not reading or writing on a regular basis.
b) Do you run ETL jobs to archive data to services like Amazon S3? Rather than continuing to move data out, it can be kept in the table at a lower cost.
c) Do you use rolling tables (creating new tables per day or month)? Older tables may be read less frequently and are therefore perfect candidates for infrequent access.
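The first consideration above can be encoded as a quick check. This is a rule of thumb from the blog post, not an official AWS formula:

```python
def standard_ia_candidate(storage_cost: float, throughput_cost: float) -> bool:
    """Rule of thumb from the considerations above: if monthly storage
    cost exceeds 50% of monthly throughput (read + write) cost, the
    table may be a good fit for the Standard-IA table class."""
    return storage_cost > 0.5 * throughput_cost

print(standard_ia_candidate(storage_cost=80.0, throughput_cost=100.0))  # -> True
print(standard_ia_candidate(storage_cost=20.0, throughput_cost=100.0))  # -> False
```

You can pull the two cost figures per table from your AWS bill or Cost Explorer and run this check across all of your tables.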
There are two tools that you may use to evaluate and support your use case for the new Amazon DynamoDB Standard-Infrequent Access (Standard-IA) table class:
b) AWS Backup support for DynamoDB meets compliance and business continuity requirements
Announced in November 2021, AWS Backup supports DynamoDB with full backups of DynamoDB tables without compromising performance and high availability. This new feature supports compliance and business continuity for industries such as finance and banking that require an audit trail to meet regulatory obligations and might have been reluctant to use DynamoDB in the past.
Tutorial: Create an Amazon DynamoDB table using new table class Amazon DynamoDB Standard-Infrequent Access (Standard-IA)
Free Tier: If you do not have an existing AWS account, you may be able to take advantage of Free Tier and access Amazon DynamoDB for free for the first 12 months with up to 25 GB of storage.
Step 1: Sign in to the AWS Management Console with your IAM user credentials
Step 2: Type DynamoDB into the search box in the navigation bar and choose DynamoDB
Step 3: On the new DynamoDB console, click Create table
Step 4: Provide a name for the table: in the Table name box, type Tennis
Step 5: The partition key is used to spread data across partitions for scalability. It’s important to choose an attribute with a wide range of values and that is likely to have evenly distributed access patterns. Type the word TennisPlayer in the Partition key box.
Step 6: Because each tennis player may compete in many competitions throughout the year, you can enable easy sorting with a sort key. Select the Add sort key check box. Type CompetitionName in the Add sort key box.
Step 7: By default, tables are created with the DynamoDB Standard table class.
Step 8: Select Customize settings to change the default settings
Step 9: Select the new DynamoDB Standard-IA table class if the table is infrequently accessed and your storage cost exceeds 50% of your throughput cost
Step 10: When you choose On-demand capacity mode for new tables with unknown workloads and unpredictable traffic, estimated costs are shown. You may inspect the estimated cost and adjust the inputs for reads, writes and average item size.
Step 11: Finally, choose Create table
It takes a few seconds for the table to be created, and you will see a banner message confirming that the table was created successfully.
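Steps 1-11 above can also be performed programmatically. This hedged boto3 sketch mirrors the console choices (Tennis table, TennisPlayer partition key, CompetitionName sort key, on-demand capacity, Standard-IA table class); it only builds the parameters, and the client call stays commented out so it runs without an AWS account:

```python
# Parameters mirroring the console tutorial above.
create_table_params = {
    "TableName": "Tennis",
    "AttributeDefinitions": [
        {"AttributeName": "TennisPlayer", "AttributeType": "S"},
        {"AttributeName": "CompetitionName", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "TennisPlayer", "KeyType": "HASH"},     # partition key
        {"AttributeName": "CompetitionName", "KeyType": "RANGE"}, # sort key
    ],
    "BillingMode": "PAY_PER_REQUEST",            # on-demand capacity mode
    "TableClass": "STANDARD_INFREQUENT_ACCESS",  # the new Standard-IA class
}
# With AWS credentials configured you would run:
# import boto3
# boto3.client("dynamodb").create_table(**create_table_params)
print(create_table_params["TableClass"])  # -> STANDARD_INFREQUENT_ACCESS
```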
Step 12: To add data to the Tennis table you may follow this tutorial
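As a sketch of what Step 12 does under the hood, a single item can be written with the PutItem API. The player and competition values are made-up sample data, and the boto3 call is commented out:

```python
# Sample PutItem request for the Tennis table; item values are examples.
put_item_params = {
    "TableName": "Tennis",
    "Item": {
        "TennisPlayer": {"S": "Ashleigh Barty"},      # partition key
        "CompetitionName": {"S": "Australian Open"},  # sort key
        "Year": {"N": "2022"},
    },
}
# With AWS credentials configured you would run:
# import boto3
# boto3.client("dynamodb").put_item(**put_item_params)
print(put_item_params["Item"]["TennisPlayer"]["S"])  # -> Ashleigh Barty
```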
Step 13: This optional step uses the AWS Backup integration with DynamoDB announced in November 2021. You can create on-demand backups of tables if you need to retain data for compliance or business continuity, e.g. an end-of-year tax audit.
Select the On-demand backup option
Step 14: Search for the name of the existing table Tennis that was previously created.
Step 15: Click Create backup
Note: Ensure you have already created an IAM service role, AWSBackupDefaultServiceRole, so that AWS Backup can perform the backup on your behalf.
You may inspect what the IAM policy for AWS Backup permits within DynamoDB, such as CreateBackup and DescribeBackup:
Step 16: The Tennis table is successfully backed up through the AWS Backup integration with DynamoDB
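The backup steps above can also be triggered programmatically through the AWS Backup StartBackupJob API. The sketch below only builds the parameters; the account ID, Region and vault name are placeholders, and the boto3 call is commented out:

```python
# Sketch of an on-demand AWS Backup job for the Tennis table.
# Account ID, Region and vault name are PLACEHOLDER values.
backup_params = {
    "BackupVaultName": "Default",
    "ResourceArn": "arn:aws:dynamodb:us-east-1:123456789012:table/Tennis",
    "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
}
# With AWS credentials configured you would run:
# import boto3
# boto3.client("backup").start_backup_job(**backup_params)
print(backup_params["BackupVaultName"])  # -> Default
```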
DynamoDB deep dive: Advanced design patterns
Amazon DynamoDB: Driving innovation at any scale
Data Modeling with DynamoDB
Happy Learning! 😁