Each month I'll be taking a trip down memory lane and showcasing some classic data posts. Some of these might be your 'go-to' resources already, others may offer some new insight or ideas.
Let's jump into the posts!
First up James writes about building an e-commerce data model that’s scalable, flexible, and fast. This post shows what it takes to start building this infrastructure on your own. What are some of the areas to consider? What might the data model look like? How much work is involved?
This classic post from Viach focuses on scanning a large table of 100,000,000 records, comparing OFFSET-based pagination against keyset pagination over a primary key. Check it out for three different approaches that might be right for your next project.
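The core idea behind keyset pagination is to resume each batch from the last primary key seen, rather than asking the database to count and skip rows with OFFSET. Here's a minimal sketch using Python's built-in sqlite3; the table and column names are invented for illustration and differ from the post's setup:

```python
import sqlite3

# Hypothetical in-memory table standing in for the 100M-row table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany(
    "INSERT INTO records (id, payload) VALUES (?, ?)",
    [(i, f"row-{i}") for i in range(1, 1001)],
)

def scan_keyset(conn, batch_size=100):
    """Walk the whole table in primary-key order without OFFSET.

    Each batch resumes from the last id seen, so the database can seek
    directly into the index instead of counting skipped rows the way
    OFFSET does.
    """
    last_id = 0
    while True:
        rows = conn.execute(
            "SELECT id, payload FROM records WHERE id > ? "
            "ORDER BY id LIMIT ?",
            (last_id, batch_size),
        ).fetchall()
        if not rows:
            break
        yield from rows
        last_id = rows[-1][0]

total = sum(1 for _ in scan_keyset(conn))
print(total)  # 1000
```

With OFFSET, page N costs O(N × page size) because skipped rows are still read; the `WHERE id > ?` seek keeps every batch cheap no matter how deep into the table you are.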
Next up is a post from Molly who writes about how to tackle the common issue of giving engineers access to the data they need to do their jobs while keeping sensitive data secure. Read more for how the Forem team solved this very problem.
This classic post from Matthew shows you how a database index works 'under the hood'. We don't all have to be DBAs to write fast queries, and we shouldn't need to be. As developers, getting familiar with a database's core structures is a pragmatic way to spot and fix performance problems.
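One quick way to see an index at work is to ask the query planner what it intends to do before and after creating one. A small sketch with sqlite3 (the table and index names here are made up for the example):

```python
import sqlite3

# Toy table for inspecting the planner; names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany(
    "INSERT INTO users (email) VALUES (?)",
    [(f"user{i}@example.com",) for i in range(1000)],
)

query = "SELECT id FROM users WHERE email = ?"
params = ("user42@example.com",)

# Without an index, the planner must scan every row.
plan_before = conn.execute(f"EXPLAIN QUERY PLAN {query}", params).fetchone()[-1]

conn.execute("CREATE INDEX idx_users_email ON users (email)")

# With the index, it can seek straight to the matching entries.
plan_after = conn.execute(f"EXPLAIN QUERY PLAN {query}", params).fetchone()[-1]

print(plan_before)  # a full-table SCAN
print(plan_after)   # a SEARCH using idx_users_email
```

The exact wording of the plan differs between SQLite versions (and Postgres/MySQL have their own `EXPLAIN` output), but the shift from a full scan to an index search is the same mechanism the post walks through.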
Our last post is from Ron Soak, with lessons learned from building a Redshift-specific VS Code syntax highlighter from scratch. Check it out for more on the process, and if you're a Redshift user, check out the extension too.
That's all for this month! For more from the Data Community check out the #sql, #postgres, #mysql, and #database tags, and follow @TheDatabaseDev on Twitter.