Hey everyone!!! 👋
One of my first projects as DEV's newly minted SRE is to move all of our existing cache keys to Redis and I would love your help! Click on the Issue linked below to learn more!
There are lots of details in there about why we are moving to Redis, how we plan to do it, guidelines for helping out, and even a few example PRs to get you going. I think this is a GREAT opportunity for anyone looking to dip their toes in the open-source waters. Let me know if you have any questions, and feel free to tag me (@mstruve on GitHub) or the SRE team in your PR when you open it!
Happy coding 😃
Move All Cache Keys From Memcache to Redis #4670
Why We Are Moving To Redis
Memcache is a bit of a black box when it comes to caching. It is very hard to tell what is in it and how we are using it. For this reason, we have chosen to switch to using Redis for our application cache. Redis will allow us more insight into how we are using the cache and give us more control over our cache keys. It also has nice features such as background deletion and Lua scripts to handle complicated logic. Plus, if we ever want to use a worker/job service such as Sidekiq it can act as a datastore for that.
Strategy For Moving Keys
In order to move to Redis, we will be doing it one key at a time, for the most part. This will ensure that a large cold cache doesn't slow things down. It will also help us evaluate how we are using the cache and confirm that each use is still needed.
In this PR I set up a second cache store, RedisRailsCache, that behaves just like Rails.cache but points to Redis instead. The goal is to replace all Rails.cache calls with RedisRailsCache and then eventually switch Rails.cache to point at Redis.
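As a rough sketch of what a second cache store can look like (the constant name RedisRailsCache matches the PR, but the initializer path, the REDIS_URL environment variable, and the error handler are assumptions, not the exact PR code), a Rails initializer might define:

```ruby
# config/initializers/redis_rails_cache.rb
# A second cache store with the same API as Rails.cache, but backed by Redis.
# REDIS_URL and the error handler below are illustrative assumptions.
RedisRailsCache = ActiveSupport::Cache::RedisCacheStore.new(
  url: ENV["REDIS_URL"],
  error_handler: ->(method:, returning:, exception:) {
    # Fail open: log cache errors instead of raising, so a Redis hiccup
    # degrades to a cache miss rather than an application error.
    Rails.logger.warn("RedisRailsCache #{method} failed: #{exception.message}")
  }
)
```

Because RedisRailsCache responds to the same fetch/read/write/delete interface as Rails.cache, call sites can be migrated one at a time without any other changes.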
How You Can Help!
There are a lot of cache keys that need to be moved, and I would rather move them one by one to minimize risk, which is why I would love help with this project. Feel free to grab a cache key and roll it over to Redis!
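Rolling a key over is mostly a one-line change at each call site. A hypothetical example (this key, expiry, and block are made up for illustration, not taken from the codebase):

```ruby
# Before: the key is read and written through the default (Memcache) store.
tags = Rails.cache.fetch("user-#{user.id}-tag_list", expires_in: 24.hours) do
  user.tags.pluck(:name)
end

# After: the same call, now served from Redis through the second store.
tags = RedisRailsCache.fetch("user-#{user.id}-tag_list", expires_in: 24.hours) do
  user.tags.pluck(:name)
end
```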
Considerations for moving a key
These are general guidelines to get you thinking as you write the code. If you are unsure about any of these, open a PR and let the DEV core team weigh in. Feel free to tag me (@mstruve) or the SRE team in your PR when you open it!
- Can this cache safely become cold?
- Does this cache key make sense? Is there a better/clearer one?
- Should this be a cache at all? There might be places where we can live without a cache and simply removing it might be the best move rather than switching it to Redis.
- Does the current expiration make sense?
When moving any keys with a timestamp in them, please reformat the timestamp to use .rfc3339 so we don't have awkward spaces in our cache keys. Please and THANK YOU!
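To see why: the default Time#to_s contains spaces, while an RFC 3339 string does not. A quick plain-Ruby illustration (using DateTime#rfc3339 from the stdlib date library to stand in for the Rails .rfc3339 helper):

```ruby
require "date"

t = Time.utc(2019, 10, 3, 12, 30, 0)
t.to_s
# => "2019-10-03 12:30:00 UTC"   <- spaces make an awkward cache key

DateTime.new(2019, 10, 3, 12, 30, 0).rfc3339
# => "2019-10-03T12:30:00+00:00" <- no spaces, safe to embed in a key
```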
Moving keys: https://github.com/thepracticaldev/dev.to/pull/4684 and https://github.com/thepracticaldev/dev.to/pull/4690 (note there is a corresponding delete command!)
Removing keys: https://github.com/thepracticaldev/dev.to/pull/4689
Latest comments (9)
Contributed a couple of PRs before leaving work. Hope they help :)
Thank you very much, Tariq!
You're welcome :)
Well we have been just about the top trending Ruby project on GitHub consistently since going open source a year ago, so on some level we are.
But I think there's a natural bottleneck: it's hard to provide guidance on where to contribute when the core team itself is busy with work that requires a lot of context.
We had over 500 pull requests opened on the project this past month, so there is plenty. But I think if we play our cards right and continue to build towards a project much bigger than just the single deployable instance of dev.to, we will, in fact, reach that point of thousands and thousands of simultaneous contributors.
Are there any other instances running the DEV code base as their backend? Or is the project still very DEV'd at the moment?
Very DEV'd at the moment but we're actively working with early adventurers who want to run the code for their own communities.
We now have a GitHub label for tasks which get us closer to generalization. As you can see we closed a couple issues just this week 🙂