
Ritikesh

Originally published at Medium

Why just cache when you can memoize (with expiration and guaranteed consistency)

(Image credit: WP Rocket, via Google Images)

Memoization is a specific type of caching that is used as a software optimisation technique.

Caching is a commonly used software optimisation technique and is employed in all forms of software development, be it web or mobile or even desktop. A cache stores the results of an operation for later use. For example, your web browser will most likely use a cache to load this blog faster if you visit it again in the future.

So, when I talk about memoization, I am talking about remembering, or caching, a complex operation's output in-memory. Memoization finds its root in "memorandum", which means "to be remembered."

While caching is powerful, it usually runs as a separate process on another server, reachable only over the network. Cache systems are invariably fast, but network calls add bottlenecks to the overall response times. Add multiple processes making simultaneous calls over the same network (in a closed VPC setup) and the cache needs to scale along with your components to keep up. Memoization has an advantage here: the data is cached in-memory, avoiding network latencies altogether.

The two most powerful aspects of a cache are:

  1. TTL (time to live): cached data automatically expires after a pre-specified time interval.

  2. Consistency: the data is always the same when read from different processes, and multiple app servers or background processes are the norm in today's cloud-first architectures.

This keeps the cache fresh (frequently invalidated and refreshed because of the TTL) and consistent (it is a single source of truth). The same is not true for memoization, however: you will rarely find memoization, multi-process consistency and expiration used together.
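To see what's missing, here is what garden-variety memoization looks like: a hypothetical helper (not from any library) with no expiry and a cache local to each process.

```javascript
// A typical memoizer: once computed, a value never expires, and
// each process holds its own private copy of the cache.
function memoize(fn) {
  const cache = new Map();
  return (key) => {
    if (!cache.has(key)) cache.set(key, fn(key));
    return cache.get(key); // stale forever: no TTL, no cross-process sync
  };
}

let calls = 0;
const square = memoize((n) => { calls += 1; return n * n; });
square(4); // computed
square(4); // served from memory; `calls` is still 1
```

If the underlying data changes, every process keeps serving its own stale copy until it restarts, which is exactly the gap memoize_until sets out to close.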

In this blog, however, you'll see how and when to wield these simple but powerful techniques together to optimise your own programs and, in some cases, make them run much faster.

Introducing memoize_until, a powerful yet simple memoization technique that brings the dynamic nature and consistency of caching systems in a multi-process environment to the memoization world.

MemoizeUntil memoizes (remembers) values until the beginning of a predetermined time metric: a minute, an hour, a day, or even a week. Upon expiry, the store auto-purges the previous data (to avoid memory bloat) and refreshes it by requesting the origin. Since every process re-fetches data at the beginning of the same pre-defined time metric, the value is guaranteed to be consistent across processes.
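The trick that makes this cross-process consistency possible is flooring the clock to the start of the current bucket, so every process rolls over at the same instant regardless of when it started. A minimal sketch of that idea (illustrative only, not memoize_until's actual internals):

```javascript
// Floor "now" to the beginning of the current time metric.
// Every process computes the same bucket for the same wall-clock
// moment, so all of them expire and refresh at the same boundary.
function bucketStart(metric, now = new Date()) {
  const d = new Date(now);
  d.setSeconds(0, 0);                      // start of the current minute
  if (metric === 'min') return d.getTime();
  d.setMinutes(0);                         // start of the current hour
  if (metric === 'hour') return d.getTime();
  d.setHours(0);                           // start of the current day
  return d.getTime();
}
```

A value memoized against `bucketStart('min')` is re-fetched by every process within the same minute boundary, which is what keeps them in agreement without any coordination.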

To begin, install the package via npm:

npm install memoize_until

Then require the module, initialise it for your use-cases, and call it where required:

const MemoizeUntil = require('memoize_until').MemoizeUntil

MemoizeUntil.init({
  day: ['custom1', 'custom2']
})

MemoizeUntil.fetch('min', 'default', () => {
  return 'SomeComplexOperation'
})

For a simple example, consider that your production app has a public-facing API and you want to implement a FUP (fair usage policy), and hence set appropriate rate limits. But you can almost foresee some of your customers complaining and wanting an increased API limit every now and then. This requires your API limit to be dynamic.

Traditionally, developers would save this as a configuration in the configuration database and load it once per request. Over time, such configurations have moved to cache stores like Redis, which are very fast, but the network latencies remain. To avoid a cache call on every web request, you would want to memoize the API limit locally and use it for every request, while still periodically checking the cache store for updates. This is a perfect use-case for memoize_until: the cached data needs refreshing, but not instantly. Sample usage can be found in this gist:
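As a rough, self-contained sketch of that pattern (the names `loadLimitFromStore` and `memoizeUntilMinute` below are hypothetical; the latter only mimics the behaviour the library provides, it is not memoize_until itself):

```javascript
// Hypothetical stand-in for a network round-trip to Redis or a DB.
let storeReads = 0;
function loadLimitFromStore(customerId) {
  storeReads += 1;     // count the simulated network calls
  return 1000;         // e.g. requests per hour for this customer
}

// Minute-bucketed memoization: the callback runs at most once per
// minute per key; a new minute invalidates the previous value.
const buckets = new Map();
function memoizeUntilMinute(key, fn, nowMs = Date.now()) {
  const minute = Math.floor(nowMs / 60000);
  const entry = buckets.get(key);
  if (!entry || entry.minute !== minute) {
    buckets.set(key, { minute, value: fn() }); // refresh at the boundary
  }
  return buckets.get(key).value;
}

// Every request asks for the limit, but within a given minute the
// store is only actually read once per process.
memoizeUntilMinute('api-limit-42', () => loadLimitFromStore(42));
memoizeUntilMinute('api-limit-42', () => loadLimitFromStore(42));
```

Raising a customer's limit in the store then takes effect within a minute, on all processes at the same minute boundary, without any cache call on the hot request path.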

The readme covers further documentation, such as how to extend memoize_until for truly dynamic behaviours (dynamic keys and values) and more.

Note: memoize_until is not a replacement for a cache store; it's merely an optimisation technique to reduce network calls to your cache store or database through memoization, while guaranteeing consistency. Since everything is stored in-memory, memory constraints on the remote servers also need to be considered, although, thanks to the cloud, this isn't as big a concern as it once was.
