This post was originally posted on my website at GeradeGeldenhuys.net
Caching is one of those things that, in most cases, is probably inevitable for any project, especially if your project is a web application. It is the process of storing frequently accessed information in a cache: a temporary storage area, usually in memory. This sounds pretty trivial when your application runs on a single server, but in the cloud-first, auto-scale landscape we live in, suddenly it's not so simple. For this, we look to a related solution: what we call Distributed Caching.
The cache is structured around keys and values: there is a cached entry for each key. When you want to load something from the database, you first check whether the cache has an entry with that key (based on the ID of your database record, for example). If the key exists in the cache, you skip the database query.
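The flow described above is often called the cache-aside pattern. A minimal sketch of it in C#, using ASP.NET Core's IDistributedCache abstraction (the Cart type, key format, and repository shape are my own illustrative assumptions, not code from this project):

```csharp
using System.Text.Json;
using Microsoft.Extensions.Caching.Distributed;

public class Cart { public string CustomerId { get; set; } = ""; }

public class CartRepository
{
    private readonly IDistributedCache _cache;

    public CartRepository(IDistributedCache cache) => _cache = cache;

    public async Task<Cart?> GetCartAsync(string customerId)
    {
        var key = $"cart:{customerId}"; // assumed key convention

        // 1. Check the cache first.
        var cached = await _cache.GetStringAsync(key);
        if (cached != null)
            return JsonSerializer.Deserialize<Cart>(cached);

        // 2. Cache miss: fall back to the database (stubbed here).
        var cart = await LoadCartFromDatabaseAsync(customerId);

        // 3. Store the result so subsequent requests skip the database.
        if (cart != null)
            await _cache.SetStringAsync(key, JsonSerializer.Serialize(cart));

        return cart;
    }

    private Task<Cart?> LoadCartFromDatabaseAsync(string customerId) =>
        Task.FromResult<Cart?>(new Cart { CustomerId = customerId });
}
```

Whichever backing store you register (in-memory, Redis, SQL Server), the calling code stays the same.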
A distributed cache is, as its name says, a cache with the added benefit of being distributed across multiple servers. The main advantage is the comfort of knowing your data is coherent, that is, consistent across all nodes of your application. It also ensures that an app restart, or having to restart your app server, will not result in the loss of your cached data. In the cloud-crazed world we live in today, this is a no-brainer for any application looking to implement a reliable caching strategy.
Either this, or keep all your data on a single server with tons of memory that is likely to die on you at a moment's notice and throw your smooth-running application into disarray. Your choice.
There are many different ways to implement this in our ASP.NET microservice (Memcached, Redis, Cassandra, ElastiCache, etc.). For this post, I will be using Redis. Regardless of which implementation you choose, the app interacts with the cache through the IDistributedCache interface.
With the rise of Docker, we no longer have to install the 3rd-party applications we want our applications to interact with. In the past, we would have to download and install Redis and go through the entire process of setting it up. Today, it is as easy as pulling a Docker image and running it. Yes, two simple commands, as shown below.
C:\> docker pull redis
C:\> docker run -p 6379:6379 redis
Once we have Redis up and running, we want to be able to interact with it. Redis Desktop Manager is one way of doing this, with the limitation that it sits behind a paywall. If that is not an option for you, try Redis Commander, a free-to-use Redis management tool written in Node. You can also run Redis Commander in a Docker container.
C:\> docker pull rediscommander/redis-commander
C:\> docker run -p 8081:8081 rediscommander/redis-commander:latest
Once it is up and running, go ahead and navigate to it in your browser at port 8081 (the port we mapped above), i.e. http://localhost:8081.
In MedPark 360, the application I am developing, we give patients the ability to order medication online and have it delivered to their address. This requires a product catalogue and everything that goes with it. One such feature is a cart, your everyday, run-of-the-mill cart where you can add products before checking out. Nothing special about it. Currently, when a user requests their cart, we make a request to the database to retrieve it. This is a bad implementation, because every time a user requests their cart, it incurs a database hit. This is not optimal.
So, what we can do here is implement caching to reduce the calls to our database and improve the performance and responsiveness of our application.
We first need to add Redis to our application, then set up a service to handle saving and retrieving our cached data from Redis. Once we have that service, we can create a filter, in the form of an attribute, to handle the response caching. I created the following extension method to add Redis to the Basket service.
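The original extension method is not reproduced here; below is a sketch of what such a registration helper might look like. The names AddRedisCaching, RedisSettings, and ResponseCacheService are my placeholders; only IResponseCacheService is a name used in this post.

```csharp
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using StackExchange.Redis;

public class RedisSettings
{
    public bool Enabled { get; set; }
    public string ConnectionString { get; set; } = "localhost:6379";
}

public static class RedisServiceExtensions
{
    // Binds a "Redis" section from configuration, registers a shared
    // ConnectionMultiplexer, and wires up the response cache service.
    public static IServiceCollection AddRedisCaching(
        this IServiceCollection services, IConfiguration configuration)
    {
        var settings = new RedisSettings();
        configuration.GetSection("Redis").Bind(settings);
        services.AddSingleton(settings);

        if (settings.Enabled)
        {
            services.AddSingleton<IConnectionMultiplexer>(
                ConnectionMultiplexer.Connect(settings.ConnectionString));
            services.AddSingleton<IResponseCacheService, ResponseCacheService>();
        }

        return services;
    }
}
```

In Startup.ConfigureServices this would be called as `services.AddRedisCaching(Configuration);`.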
Once that is set up in our service, we want to implement a service responsible for interacting with Redis. This is the IResponseCacheService I added to DI when adding Redis above. Below is the implementation:
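The original implementation is not reproduced here; a sketch of what it might look like, based on the description that follows (one method to read a cached value by key, one to cache a non-null response), is shown below. The method names are my assumptions.

```csharp
using System.Text.Json;
using StackExchange.Redis;

public interface IResponseCacheService
{
    Task CacheResponseAsync(string cacheKey, object response, TimeSpan timeToLive);
    Task<string?> GetCachedResponseAsync(string cacheKey);
}

public class ResponseCacheService : IResponseCacheService
{
    private readonly IDatabase _database;

    public ResponseCacheService(IConnectionMultiplexer redis)
        => _database = redis.GetDatabase();

    public async Task CacheResponseAsync(
        string cacheKey, object response, TimeSpan timeToLive)
    {
        // Do not cache null responses.
        if (response == null)
            return;

        // Serialize and store with the TTL; Redis evicts the key on expiry.
        var serialized = JsonSerializer.Serialize(response);
        await _database.StringSetAsync(cacheKey, serialized, timeToLive);
    }

    public async Task<string?> GetCachedResponseAsync(string cacheKey)
    {
        var cached = await _database.StringGetAsync(cacheKey);
        return cached.IsNullOrEmpty ? null : cached.ToString();
    }
}
```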
This service is pretty straightforward. It has one method to search our cache for a value matching the key we pass in, and another that caches a response if it is not null.
Now that we have our service to handle the interaction with Redis, we need to invoke it. As a refresher: when a user requests their cart, we want to retrieve it from the cache; if it doesn't exist in the cache, we are more than happy to hit up our database for it. But once we have it, we want to store it in our cache so that subsequent requests can get it from the cache and we avoid asking our database for it again. This is where the filter comes into play.
The general idea of a filter is that we want some custom code to run before and/or after specific stages in the request pipeline. Below we have an attribute we can apply to the endpoints on our controllers to take advantage of caching.
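The original attribute is not reproduced here; a sketch of a response-caching action filter matching the walkthrough that follows is shown below. RedisSettings and IResponseCacheService mirror names used in this post; the attribute name, key format, and result handling are my assumptions.

```csharp
using System.Text;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.Extensions.DependencyInjection;

public class CachedAttribute : Attribute, IAsyncActionFilter
{
    private readonly int _timeToLiveSeconds;

    public CachedAttribute(int timeToLiveSeconds)
        => _timeToLiveSeconds = timeToLiveSeconds;

    public async Task OnActionExecutionAsync(
        ActionExecutingContext context, ActionExecutionDelegate next)
    {
        // If caching is disabled for this service, just continue the pipeline.
        var settings = context.HttpContext.RequestServices
            .GetRequiredService<RedisSettings>();
        if (!settings.Enabled)
        {
            await next();
            return;
        }

        var cacheService = context.HttpContext.RequestServices
            .GetRequiredService<IResponseCacheService>();

        // Build a deterministic key from the request path and query string.
        var cacheKey = GenerateCacheKey(context.HttpContext.Request);

        var cachedResponse = await cacheService.GetCachedResponseAsync(cacheKey);
        if (!string.IsNullOrEmpty(cachedResponse))
        {
            // Cache hit: short-circuit the pipeline and return the cached body.
            context.Result = new ContentResult
            {
                Content = cachedResponse,
                ContentType = "application/json",
                StatusCode = 200
            };
            return;
        }

        // Cache miss: run the controller action, then cache its result.
        var executedContext = await next();
        if (executedContext.Result is OkObjectResult okResult)
        {
            await cacheService.CacheResponseAsync(
                cacheKey, okResult.Value, TimeSpan.FromSeconds(_timeToLiveSeconds));
        }
    }

    private static string GenerateCacheKey(HttpRequest request)
    {
        var builder = new StringBuilder();
        builder.Append(request.Path);
        foreach (var (key, value) in request.Query.OrderBy(q => q.Key))
            builder.Append($"|{key}-{value}");
        return builder.ToString();
    }
}
```

Applied to an endpoint it would look like `[Cached(600)]` above the controller action.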
The above code is pretty straightforward. We initialize the attribute with a TTL (Time To Live) for the cached value, which indicates how long we want the data to be cached before it goes stale and expires. Next, we get the Redis settings for the particular service, in this case the Basket service, and check whether caching has been enabled for it. If not, we return and continue along the request pipeline. If caching has been enabled, we fire up the service we created for interacting with Redis and generate a unique key based on the request. Once we have the key, we know what to look for in the cache, and we use it to ask Redis for the data. If the data is available in Redis and has not expired, we can be certain it is still correct and may return it to the user.
If the information does not exist in Redis, we continue along the request pipeline to the controller. The controller method then requests the data from the database and returns it. At this point, control returns to our filter (attribute), which saves this information to Redis using the key we generated.
In conclusion, caching is a very simple way to improve the performance of your services. For the scenario above, without caching, the request gets handled in about 40ms on my machine. Once I enabled caching that number dropped down to around 10ms.
The source material for this post can be found on GitHub. This is an application I am actively developing, so if the source code for this post is not there yet, please bear with me, as it has probably not been merged yet.
If you would like to read up on more caching strategies, I suggest this blog post by Nick Craver, the Architecture Lead at Stack Overflow, on how they deal with app caching across such a huge network of services at Stack Overflow and the rest of Stack Exchange.