
Gael Fraiteur for PostSharp Technologies


MemoryCache in C#: A Practical Guide

MemoryCache is the standard implementation of an in-memory cache for C# applications. It helps boost performance and scalability at the cost of increased memory consumption. This article covers practical questions about when and how to use MemoryCache.

When Should We Use In-Memory Caching?

In-memory caching is beneficial for virtually all types of applications: mobile, server- or client-side web, cloud, and desktop. It’s especially useful when a computation or request is resource-intensive or time-consuming and is likely to be reused later. Consider caching as your first line of defense against hefty computation and cloud consumption costs.

As a general guideline, an operation should be cached if it exhibits at least three of the following properties:

  • Small in size, so it does not consume too much memory.
  • Frequently used.
  • Expensive to obtain.
  • Stable over time (doesn’t change often).

For example:

  • A product catalog is a great candidate for caching because it does not change often, has a small size, and typically requires dozens of database calls to build.
  • An order list is large and changes often, so it probably should not be cached.

A Practical Example in ASP.NET Core: Retrieving Currency Data Without Caching

To illustrate the use case for caching, we’ll use a small ASP.NET Core application that displays the prices of two fiat currencies and two cryptocurrencies.

Our goal is to display current Bitcoin and Ethereum price data in USD to the webpage user. So, we query the API that provides live data from CoinCap and render the data for the user.

The GetCurrencyData method of the model class retrieves its data from an HTTP service. The IHttpClientFactory dependency is injected through the primary constructor.

namespace Sample1.Pages;

public class NoCachingModel( IHttpClientFactory httpClientFactory ) : BaseModel
{
    public async Task<CoinCapData> GetCurrencyData( string id )
    {
        // Create a client from the injected factory.
        var httpClient = httpClientFactory.CreateClient();

        var response = await httpClient.GetFromJsonAsync<CoinCapResponse>(
                          $"https://api.coincap.io/v2/rates/{id}" );

        return response!.Data;
    }
}

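The CoinCapResponse and CoinCapData types are not shown in this article; here is a minimal sketch of the shape the samples assume. Note that CoinCap returns rateUsd as a JSON string, so we allow reading numbers from strings:

using System.Text.Json.Serialization;

public record CoinCapData(
    string Id,
    string Symbol,
    string Type,
    // CoinCap serializes this value as a JSON string, hence the number handling.
    [property: JsonNumberHandling( JsonNumberHandling.AllowReadingFromString )]
    decimal RateUsd );

public record CoinCapResponse( CoinCapData Data );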

The list of currencies is defined in the base model class.

namespace Sample1.Pages;

public class BaseModel : PageModel
{
    public IReadOnlyList<string> Currencies { get; } =
    [
        "bitcoin",
        "ethereum",
        "euro",
        "british-pound-sterling"
    ];
}


And here is the code of the page that displays the currencies:

@page "/"
@using System.Globalization
@model NoCachingModel
@{
    ViewData["Title"] = "Home page";
}

<div class="text-center">
    @{
        foreach (var currency in Model.Currencies)
        {
            var data = await Model.GetCurrencyData( currency );

            // Output a paragraph with symbol and rate.
            @:<p>@data.Symbol (@data.Type): @data.RateUsd.ToString( "F3", CultureInfo.InvariantCulture )</p>
        }
    }
</div>


As an aside, we’ve included ASP.NET Core middleware that measures the time it takes to render the page. It displays the internal processing time on the page, which we will reference throughout the article:

processed in 397 ms.
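The timing middleware itself isn’t shown in the article; a minimal sketch, assuming the page reads the stopwatch from HttpContext.Items while rendering its footer, could look like this:

app.Use( async ( context, next ) =>
{
    // Expose a running stopwatch so the page can read the elapsed
    // time while it renders.
    context.Items["Stopwatch"] = System.Diagnostics.Stopwatch.StartNew();

    await next( context );
} );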

However, every view of our page triggers four data retrievals from the service. This leads to several problems:

  • Latency. The page load time is around 400 ms in my location. This latency can add up, potentially increasing the load time to several seconds if left unaddressed.
  • Costs. Our server uses the API, which could result in higher cloud hosting costs.
  • Reliability. Data providers typically enforce rate limits. In our case, CoinCap.io allows roughly 200 requests per second, which may seem high. However, our page makes four requests per load, so a plausible spike of 50 page views per second would already exhaust the limit.

We can tackle all these issues by employing MemoryCache.

How Fast is MemoryCache?

MemoryCache is very fast. The cached data resides in the same process that uses it and is stored as regular .NET objects. All MemoryCache does is store a reference to your object, associate it with a key, and prevent it from being garbage-collected.

In contrast, with distributed caching servers like Redis, the cache is located on a different machine, often a dedicated one, and is shared among multiple servers. A distributed cache does not store .NET objects but binary or text (often JSON or XML) data. Therefore, .NET objects must be serialized before being stored in the cache and deserialized after being retrieved. With MemoryCache, serialization and deserialization are not necessary.

So, how fast is MemoryCache? On my development notebook, I measured:

  • MemoryCache.TryGetValue takes ~50 ns.
  • MemoryCache.Set takes ~110 ns if the item is overwritten or ~100 ns if the item is new.

That means you should not worry about the performance of MemoryCache, even if your app is serving tens of thousands of requests per second per instance.

Is MemoryCache the fastest cache for C#?

Although it’s very fast, MemoryCache is not the fastest cache for C#. However, it provides a good balance between features and performance.

  • If you never want to remove anything from the cache, ConcurrentDictionary<string,object> is faster, as sketched below. However, you risk running out of memory unless you are caching a small dataset.
  • For even more performance-critical data, building the string-based cache key may itself become the bottleneck. Consider using a ConditionalWeakTable, lazy initialization, or the memoization pattern.
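Here is a minimal sketch of the ConcurrentDictionary approach; ProductCatalog and LoadCatalog are hypothetical placeholders for your own types:

// A never-evicting cache: entries live for the whole process lifetime,
// so reserve this pattern for small, stable datasets.
private static readonly ConcurrentDictionary<string, ProductCatalog> _catalogCache = new();

public ProductCatalog GetCatalog( string region )
    // Note: GetOrAdd may invoke the factory more than once under contention.
    => _catalogCache.GetOrAdd( region, key => LoadCatalog( key ) );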

What are the Disadvantages of In-Memory Cache?

  1. Cache incoherence in distributed apps. If your application is deployed to several instances, you will have several in-memory caches, each on a different machine, and they need to be synchronized. One node may not be aware of changes made by another node: even when the database is up to date, the node never reaches the database because it keeps serving data from its own local cache. Incoherent caches can result in users seeing different data from different servers. This is one of the problems addressed by distributed caching.

  2. Cache invalidation. As with every caching implementation, adding data to a MemoryCache and getting it back is the easy part, but removing it when it is no longer valid can be notoriously challenging.

  3. Danger of mutable objects. Since MemoryCache does not serialize and deserialize objects, a frequent mistake is adding mutable objects to the cache. Only add immutable objects to the cache, or you are setting yourself up for hours of painful debugging, as the snippet below illustrates.
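A short illustration of the danger, assuming a hypothetical LoadProducts method that returns a mutable list:

// Every caller receives a reference to the SAME cached instance, so one
// caller's mutation silently leaks to all the others.
var products = memoryCache.GetOrCreate( "products", _ => LoadProducts() )!;
products.Clear(); // Oops: the cached list is now empty for every future caller.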

How to Add MemoryCache to an ASP.NET Core App?

The MemoryCache class is included in the Microsoft.Extensions.Caching.Memory package. This package is automatically included in ASP.NET Core applications. If you are building a different kind of application, you may have to add this package to your project yourself. Typically, we do not write code directly against the MemoryCache class but against its abstraction, IMemoryCache.

At the lowest level, the IMemoryCache interface provides the following basic methods (a short usage sketch follows the list):

  • ICacheEntry CreateEntry(object key) - creates an entry for the given cache key. The entry is committed to the cache when it is disposed, overwriting any existing entry for the same key.
  • bool TryGetValue(object key, out object? value) - retrieves the value stored in the cache for the given key, if it exists.
  • void Remove(object key) - removes an entry for the given key.
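To make the lifecycle concrete, here is a minimal usage sketch of these low-level methods; ComputeValue is a hypothetical factory:

if( !memoryCache.TryGetValue( "my-key", out object? value ) )
{
    // The entry returned by CreateEntry is committed to the cache
    // when it is disposed.
    using ICacheEntry entry = memoryCache.CreateEntry( "my-key" );
    entry.Value = ComputeValue();
    value = entry.Value;
}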

Most of the time, we will not use these basic methods directly, but instead the high-level GetOrCreate or GetOrCreateAsync extension methods. Given a cache key, these methods:

  1. Check whether the cache already contains the result we are interested in. If yes, return it to the caller.
  2. Otherwise, invoke the supplied factory to get the data.
  3. Add the data to the cache and return it.

Step 1. Initializing MemoryCache

To start, we need to add the memory cache component into our service collection.

In Program.cs, add the memory cache to the WebApplicationBuilder as follows:

builder.Services.AddMemoryCache();
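Optionally, AddMemoryCache accepts a configuration delegate, for example to cap the cache size. Note that if you set SizeLimit, every entry you add must then declare its Size, otherwise adding it throws:

builder.Services.AddMemoryCache( options =>
{
    // The unit is arbitrary; each entry declares its own Size in the same unit.
    options.SizeLimit = 1024;
} );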

Step 2. Injecting IMemoryCache

The next step is to inject the IMemoryCache into your page or model.

If you are adding code directly to the page, inject the IMemoryCache into it using the following directive:

@inject IMemoryCache Cache

If you are writing pure C# instead of Razor, inject the IMemoryCache service through the constructor:

public class WithMemoryCacheModel(
    IHttpClientFactory httpClientFactory, IMemoryCache memoryCache ) 
    : BaseModel

Step 3. Add the caching logic to your methods

Now, you can add caching to your data-retrieval method with a call to IMemoryCache.GetOrCreateAsync.

A convenient pattern is to move the original method logic to a local function, as in the following code snippet.

public async Task<CoinCapData> GetCurrencyData( string id )
{
    return (await memoryCache.GetOrCreateAsync(
        $"{this.GetType().Name}.GetCurrencyData({id})",
        _ => GetData() ))!;

    async Task<CoinCapData> GetData()
    {
        var httpClient = httpClientFactory.CreateClient();

        var response = await httpClient.GetFromJsonAsync<CoinCapResponse>(
                 $"https://api.coincap.io/v2/rates/{id}" );

        return response!.Data;
    }
}

Each time the method is called, the cache is checked. If the data is available, it’s retrieved. If not, the local function is invoked, and the result is stored in the cache. When users visit our site, the first request populates the cache with the results of GetData, and all subsequent requests use this data. This approach significantly reduces user latency, lowers CPU usage, and decreases external API usage. As a result, we improve the user experience, save on costs, and enhance reliability, all for a small memory trade-off.

It’s important to note that all method parameters should be included in the cache key. Additionally, we need to ensure that our method’s cache key does not collide with the cache keys of other methods, which is why the key above embeds the type and method names.
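For illustration, a hypothetical helper that follows this convention (not part of any library) could look like this:

// Builds a key like "WithMemoryCacheModel.GetCurrencyData(bitcoin)".
static string BuildCacheKey( object instance, string method, params object?[] args )
    => $"{instance.GetType().Name}.{method}({string.Join( ",", args )})";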

Running the app, we can see that the local latency of our example has now dropped to 1 ms, down from its previous 397 ms.

processed in 1 ms

How to Expire Cache Entries?

In the example above, we successfully improved the service’s latency and reliability. However, we inadvertently made it useless because it now displays constant prices. We’ve added data to the cache, but we haven’t implemented any mechanism to refresh the data. This mechanism is referred to as expiration or expiry.

Let’s say we want the data to be at most 30 seconds old.

In MemoryCache, the mechanism we’re after is called absolute expiration. We can set this by calling the SetAbsoluteExpiration method of the ICacheEntry object that GetOrCreateAsync provides.

public async Task<CoinCapData> GetCurrencyData( string id )
{
    return
        (await memoryCache.GetOrCreateAsync(
            $"{this.GetType().Name}.GetCurrencyData({id})",
            async entry =>
            {
                entry.SetAbsoluteExpiration( TimeSpan.FromSeconds( 30 ) );

                return await GetData();
            } ))!;

    async Task<CoinCapData> GetData()
    {
        var httpClient = httpClientFactory.CreateClient();

        var response = await httpClient.GetFromJsonAsync<CoinCapResponse>(
                       $"https://api.coincap.io/v2/rates/{id}" );

        return response!.Data;
    }
}


Now, MemoryCache will delete the entry approximately 30 seconds after its creation. Most requests will still have a latency of 1 ms, but one request every 30 seconds will be slow as it needs to hit the currency service.
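MemoryCache also supports sliding expiration, which resets the countdown on every access. That is not what our price data needs, since prices go stale regardless of how often they are read, but for completeness, a sketch of the alternative entry setup:

// Evicts the entry only after 30 seconds WITHOUT any access; every
// cache hit resets the countdown.
entry.SetSlidingExpiration( TimeSpan.FromSeconds( 30 ) );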

How To Reduce Strong Coupling of Your Code With Caching?

Hardcoding your application to use IMemoryCache is not always the best approach. If there is a chance that you might want to change the caching strategy in the future, consider using an intermediate layer between your source code and the caching component.

Polly is a popular .NET package that helps handle transient faults and enhances application resilience. The Polly.Caching.Memory package provides a caching policy that internally uses IMemoryCache. Polly offers several other useful policies. For instance, you might want to retry an operation if it fails. With Polly, this is simple. Polly also makes it easier to switch from MemoryCache to a distributed cache.

Let’s see how we can use Polly to add caching to your app.

Step 1. Adding Polly and configuring a caching policy

To set it up, we need to add the following code during application initialization:

// MemoryCacheProvider wraps IMemoryCache, so the memory cache service must
// be registered as well.
builder.Services.AddMemoryCache();

builder.Services.AddSingleton<IAsyncCacheProvider, MemoryCacheProvider>();
builder.Services.AddSingleton<IReadOnlyPolicyRegistry<string>, PolicyRegistry>(
    serviceProvider =>
    {
        var cachingPolicy = Policy.CacheAsync(
            serviceProvider.GetRequiredService<IAsyncCacheProvider>(),
            TimeSpan.FromMinutes(0.5));

        var registry = new PolicyRegistry { ["defaultPolicy"] = cachingPolicy };

        return registry;
    });


Step 2. Inject the policy into your page, model or service

Next, we need to incorporate the resilience policy into our model:

private readonly IAsyncPolicy _cachePolicy;
private readonly IHttpClientFactory _httpClientFactory;

public Step4Model(
    IReadOnlyPolicyRegistry<string> policyRegistry,
    IHttpClientFactory httpClientFactory )
{
    this._httpClientFactory = httpClientFactory;
    this._cachePolicy = policyRegistry.Get<IAsyncPolicy>( "defaultPolicy" );
}

Step 3. Call the Polly policy from your methods

Lastly, we can use the cache policy in our GetCurrencyData method:

public async Task<CoinCapData> GetCurrencyData( string id )
{
    return await this._cachePolicy.ExecuteAsync(
        async _ => await GetData(),
        new Context( $"{this.GetType().Name}.GetCurrencyData({id})" ) );

    async Task<CoinCapData> GetData()
    {
        var httpClient = this._httpClientFactory.CreateClient();

        var response = await httpClient.GetFromJsonAsync<CoinCapResponse>(
                               $"https://api.coincap.io/v2/rates/{id}" );

        return response!.Data;
    }
}

If, later on, you want to add a resilience feature, you can do so by editing only the initialization code. In the next code snippet, we add a retry policy to the caching policy.

builder.Services.AddSingleton<IAsyncCacheProvider, MemoryCacheProvider>();

builder.Services.AddSingleton<IReadOnlyPolicyRegistry<string>, PolicyRegistry>(
    serviceProvider =>
    {
        var cachingPolicy = Policy.CacheAsync(
            serviceProvider.GetRequiredService<IAsyncCacheProvider>(),
            TimeSpan.FromMinutes( 0.5 ) );

        var retryPolicy = Policy.Handle<Exception>().RetryAsync();

        var policy = Policy.WrapAsync( cachingPolicy, retryPolicy );

        var registry = new PolicyRegistry { ["defaultPolicy"] = policy };

        return registry;
    } );

Alternative: EasyCaching

Another alternative to using MemoryCache directly in your code is EasyCaching. The main benefit of EasyCaching is that it provides implementations for several caching servers, making it easier for you to migrate to distributed caching if needed. It also handles the serialization and deserialization of .NET objects from/into binary or text data.
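As a rough sketch of what this looks like with EasyCaching’s in-memory provider (assuming the EasyCaching.InMemory package; the provider name and cache key are our own choices):

// Registration in Program.cs.
builder.Services.AddEasyCaching( options => options.UseInMemory( "default" ) );

// In a model or service, with an injected IEasyCachingProvider:
var cached = await cachingProvider.GetAsync(
    $"currency:{id}",              // cache key
    () => GetData(),               // invoked only on a cache miss
    TimeSpan.FromSeconds( 30 ) );

return cached.Value;               // EasyCaching wraps results in a CacheValue<T>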

How To Avoid Boilerplate Code With Caching?

You may have noticed that the examples above require a considerable amount of repetitive code, as each method needs to be wrapped in a delegate call and to generate its own cache key.

This repetitive code can be eliminated by using Metalama, a code generation and validation toolkit for C#. The free edition will suffice if you just want to use it for caching.

There are two ways to implement caching using Metalama.

One is to write a code template (called an aspect) manually. This is beyond the scope of this article; check the caching example to learn more.

The second is to use the Metalama.Patterns.Caching.Aspects package that already implements the aspect.

The initial setup is straightforward and requires a single line in our Program.cs:

builder.Services.AddMetalamaCaching();

You then mark your method with the [Cache] attribute:

[Cache( AbsoluteExpiration = 0.5 )] // expressed in minutes, i.e. 30 seconds
public async Task<CoinCapData> GetCurrencyData( string id )
{
    var httpClient = httpClientFactory.CreateClient();

    var response = await httpClient.GetFromJsonAsync<CoinCapResponse>(
                         $"https://api.coincap.io/v2/rates/{id}" );

    return response!.Data;
}

And that’s all! When building the project, Metalama generates all the necessary code without us having to modify anything in the method.

Like EasyCaching, Metalama.Patterns.Caching is agnostic of the caching provider. It also handles serialization and will fail your build if you try to cache something uncacheable – for instance, an enumerator. Unlike the other solutions, it also takes the hassle of generating cache keys off your hands.

Summary

MemoryCache is the standard C# implementation of an in-memory cache. You can use it to improve your application’s performance and reduce its resource consumption at the cost of increased memory usage. We discussed situations where caching is not suitable, either because of the nature of the operation or the topology of your deployment.

Although you can use MemoryCache directly in your code, this might not be the best option if you want to keep your source code robust and maintainable. Using an intermediate layer like Polly, EasyCaching, or Metalama Caching will make it easier to evolve your code in the future. Additionally, Metalama significantly reduces the amount of code you need to write, as it boils down to a single custom attribute.

This article was first published on https://blog.postsharp.net under the title MemoryCache in C#: A Practical Guide.
