In this series, we have looked at the "musts" (databases) and "shoulds" (asynchronous jobs, websockets) of a web application. Now we turn to one of the "coulds" (that is nonetheless recommended for scaling businesses): caching. In particular, we mean caching HTML fragments and other snippets of data, as described in the Rails Guides. We are not concerned with HTTP caching or SQL query caching.
In this part, we'll see how to speed up our Rails app using LiteCache. But first, we'll touch on Russian doll caching and how it comes in handy.
Russian Doll Caching in Ruby on Rails
At first glance, it might seem counterintuitive to employ the same technology for the main database as well as the cache. After all, what speed improvements will that engender? This is where a technique called "Russian doll caching" comes in, which we will briefly shine a light on now.
In one sentence, the gist of this caching approach boils down to:
The fastest database query is no query.
What do I mean by that? Let's turn to the Rails Guides definition of fragment caching first:
Fragment Caching allows a fragment of view logic to be wrapped in a cache block and served out of the cache store when the next request comes in.
In other words, when you wrap a piece of view code in a cache
helper, the rendered HTML fragment is put into the cache store, with a unique, expirable cache key. This key is constructed to expire whenever either the entities of which the cache key is composed, or the underlying view template, change.
An Example
That may sound very unwieldy. Let's look at an example in the context of our app:
```erb
<!-- app/views/prediction/_prediction.html.erb -->
+ <% cache prediction do %>
  <div id="<%= dom_id(prediction) %>">
    <%= turbo_stream_from prediction %>
    <% if prediction.prediction_image.present? %>
      <%= image_tag prediction.data_url %>
    <% else %>
      <sl-spinner style="font-size: 8rem;"></sl-spinner>
    <% end %>
  </div>
+ <% end %>
```
Here, we have wrapped our `_prediction` partial in a `cache` block. This generates a cache key in the fashion of:
```
        view path                template digest           identifier
            |                           |                       |
views/predictions/_prediction:9695008de61cf58325bbf974443f54bc/predictions/3
```
As we can see, this key comprises:

- The view path containing the `cache` block.
- A digest of the template (i.e., if you change the partial, the cache entry will be invalidated).
- A unique identifier, in other words, our `prediction` (see the ActiveRecord documentation).
Stored with this key is a timestamp (the `updated_at` column of our `prediction`) and the HTML fragment, which in our case also includes the complete image's data URL. Whenever this partial is rendered again and a matching cache key is found, rendering is bypassed. Instead, the stored HTML is returned.
Making Use of Russian Doll Caching
What's that "Russian doll" piece about, though? To answer this, let's jump a layer higher into a view that renders this `_prediction` partial:
```erb
<!-- app/views/prompts/_prompt.html.erb -->
+ <% cache prompt do %>
  <sl-card class="card-header card-prompt" id="<%= dom_id prompt %>">
    <div slot="header">
      <!-- header content omitted -->
    </div>
    <%= turbo_stream_from :predictions %>
    <div class="grid">
      <div>
        <%= image_tag prompt.data_url %>
      </div>
      <div id="<%= dom_id(prompt, :predictions) %>">
        <%= render prompt.predictions %>
      </div>
    </div>
    <div slot="footer">
      <%= prompt.description.presence || "No Description" %>
    </div>
  </sl-card>
+ <% end %>
```
We can apply the same technique to the `_prompt` partial, which, in turn, renders a `_prediction` partial for every prediction associated with it. This results in a single HTML fragment comprising all the child fragments. We just saved one SQL query per prediction!
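The nesting behavior can be sketched with the same Hash-backed approach as before. All names here are illustrative, not LiteCache API; the point is which render blocks actually run:

```ruby
# Russian doll caching in miniature: when the outer prompt fragment is
# fresh, the inner prediction fragments are never even looked up; when
# only the prompt expires, still-valid children are served from the cache.
store = {}
renders = Hash.new(0) # per-fragment-type render counter

fetch = lambda do |key, &render|
  store[key] ||= begin
    renders[key.first] += 1
    render.call
  end
end

render_prediction = lambda do |prediction|
  fetch.call([:prediction, prediction[:id], prediction[:updated_at]]) do
    "<div>prediction #{prediction[:id]}</div>"
  end
end

render_prompt = lambda do |prompt|
  fetch.call([:prompt, prompt[:id], prompt[:updated_at]]) do
    inner = prompt[:predictions].map { |p| render_prediction.call(p) }.join
    "<sl-card>#{inner}</sl-card>"
  end
end

predictions = [{ id: 1, updated_at: 1 }, { id: 2, updated_at: 1 }]
prompt = { id: 1, updated_at: 1, predictions: predictions }

2.times { render_prompt.call(prompt) } # second pass: one outer cache hit

predictions[0][:updated_at] = 2        # one child changes...
prompt[:updated_at] = 2                # ...and touches its parent
render_prompt.call(prompt)
# Only the prompt and the changed prediction re-render;
# prediction 2 comes straight out of the cache.
```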
There's a catch, though: When the list of predictions associated with a prompt changes (for example, a new one is added), the top fragment doesn't know about this and will serve stale content (without the newly added image).
In other words, we have to expire its cache key and construct a fresh fragment. This is where the timestamp stored with each cache entry comes in handy. We can invalidate the cache simply by updating this timestamp. In ActiveRecord, luckily, there's a shorthand for this:
```ruby
# app/models/prediction.rb
class Prediction < ApplicationRecord
  # callbacks omitted

- belongs_to :prompt
+ belongs_to :prompt, touch: true

  # methods omitted
end
```
With the `touch` flag on the `belongs_to` association, every time a child record (a prediction) is updated, deleted, or added, the `updated_at` timestamp of the parent (a prompt) is refreshed as well. This marks the cached fragment as stale, and it is reconstructed upon the next render cycle.
The term "Russian doll caching" in this context refers to the fact that all the valid child fragments can still be pulled from the cache, thus speeding up the rendering process.
Now that we have reviewed how fragment caching works and what benefits it yields, let's discuss how to enable and configure LiteCache.
LiteCache Configuration in Rails
In `config/environments/development.rb`, add this configuration snippet:

```ruby
config.cache_store = :litecache, {
  path: Litesupport.root.join("cache.sqlite3")
}
```
This resolves the root database path depending on your environment (in this case, `db/development`) and creates a `cache.sqlite3` file there. There are more configuration options worth considering, though:
- `size`: the total allowed size of the cache database (default: 128MB)
- `expiry`: cache record expiry in days (default: one month)
- `mmap_size`: how large a portion of the database to hold in memory (default: 128MB)
- `min_size`: the minimum size of the database's journal (default: 32KB)
- `sleep_interval`: the sleep duration between cleanup runs (default: one second)
- `metrics`: a boolean flag indicating whether to gather metrics

The ones you will likely want to tweak to your liking are `size`, `expiry`, and (potentially) `sleep_interval`.
Note: To enable caching in development, you have to run:

```
$ bin/rails dev:cache
```
Although LiteCache has many benefits, it comes with some drawbacks too. Let's first look at some optimizations, and then a few of LiteCache's limitations.
Optimizations and Limitations of LiteCache for Your Ruby App
LiteCache connects to the SQLite database in a way that's optimized for use as a cache store. First, it's important to reiterate that LiteCache is configured to use a separate cache database, so all these optimizations only affect an isolated environment.
With this disclaimer in place, let's look at how LiteCache optimally utilizes SQLite.
First, LiteCache sets the `synchronous` pragma to 0 (off), so there is no sync after a commit to the database. This results in a tremendous speedup at the expense of data safety. In very rare cases, such as a power loss or an operating system crash, data loss might occur. However, considering that cache entries are treated as ephemeral in most applications, this is a sensible tradeoff. Needless to say, you can also override this setting in your configuration.
LiteCache also uses a least recently used (LRU) eviction policy with a special index, but it delays updating it. Instead, it will buffer the updates in memory, and flush them as a single transaction every few seconds.
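The deferred bookkeeping can be illustrated with a toy class (the names below are made up for this sketch; in LiteCache, the flush is a single SQL transaction against the LRU index):

```ruby
# Deferred LRU bookkeeping: reads record their "last used" time in an
# in-memory buffer, and a periodic flush writes the whole buffer back
# in one batch instead of one write per read.
class BufferedLru
  attr_reader :flushes, :last_used

  def initialize
    @last_used = {} # persisted index (stands in for the SQLite table)
    @buffer = {}    # pending updates, flushed periodically
    @flushes = 0
  end

  def read(key)
    @buffer[key] = Time.now # cheap in-memory write on the hot read path
  end

  def flush
    return if @buffer.empty?
    @last_used.merge!(@buffer) # one batched write for many reads
    @buffer.clear
    @flushes += 1
  end
end

lru = BufferedLru.new
1000.times { |i| lru.read("key-#{i % 10}") } # 1000 reads, zero index writes
lru.flush                                    # one batched write for all of them
```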
What about limitations? At the time of writing, one crucial piece of the `ActiveSupport::Cache` interface isn't implemented yet: the ability to perform multiple reads, writes, and deletes against the cache database in a single call. Why is that so important? Because other cache backends, like Redis, demonstrate how significant speedups can be achieved by batching these operations. Indeed, the implicit performance gain built into rendering a cached collection of partials relies on exactly this kind of batched read, and it is one of the most impressive feats of Rails fragment caching.
Speaking of performance, let's now quickly look at some benchmarks before we wrap up.
Benchmarks: LiteCache Vs. Redis
The benchmarks for LiteCache are impressive, with a small caveat. While LiteCache outperforms a local Redis installation for every read operation, it seems like there's still room for improvement, especially for large write payloads.
Considering how much there is to gain from caching large HTML fragments, this is a limitation worth keeping in mind, and one that will hopefully be tackled in the future.
Up Next: Built-In Full-Text Search with LiteSearch
In this post, we illuminated Russian doll caching as a technique to speed up Rails applications by avoiding unnecessary database calls.
Through practical examples, we’ve seen how nested cache fragments operate in harmony — each layer independent, yet interconnected — thus ensuring efficient rendering.
We also delved into the practicalities of configuring LiteCache to your liking and looked at important built-in optimizations. With that in mind, the missing support for multiple read and write operations is bearable, and it may well land in a future release.
In the next post of this series, we will take a tour through the latest addition to LiteStack: A SQLite-based full-text search engine called LiteSearch.
Until then, happy coding!
P.S. If you'd like to read Ruby Magic posts as soon as they get off the press, subscribe to our Ruby Magic newsletter and never miss a single post!